Sample records for sources estimation methodology

  1. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at present are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from broad information gathering. The large number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.

  2. Recent Approaches to Estimate Associations Between Source-Specific Air Pollution and Health.

    PubMed

    Krall, Jenna R; Strickland, Matthew J

    2017-03-01

    Estimating health effects associated with source-specific exposure is important for better understanding how pollution impacts health and for developing policies to better protect public health. Although epidemiologic studies of sources can be informative, these studies are challenging to conduct because source-specific exposures (e.g., particulate matter from vehicles) often are not directly observed and must be estimated. We reviewed recent studies that estimated associations between pollution sources and health to identify methodological developments designed to address important challenges. Notable advances in epidemiologic studies of sources include approaches for (1) propagating uncertainty in source estimation into health effect estimates, (2) assessing regional and seasonal variability in emissions sources and source-specific health effects, and (3) addressing potential confounding in estimated health effects. Novel methodological approaches to address challenges in studies of pollution sources, particularly evaluation of source-specific health effects, are important for determining how source-specific exposure impacts health.

  3. VALIDATION OF A METHOD FOR ESTIMATING POLLUTION EMISSION RATES FROM AREA SOURCES USING OPEN-PATH FTIR SPECTROSCOPY AND DISPERSION MODELING TECHNIQUES

    EPA Science Inventory

    The paper describes a methodology developed to estimate emissions factors for a variety of different area sources in a rapid, accurate, and cost effective manner. The methodology involves using an open-path Fourier transform infrared (FTIR) spectrometer to measure concentrations o...

  4. REVIEW AND EVALUATION OF CURRENT METHODS AND USER NEEDS FOR OTHER STATIONARY COMBUSTION SOURCES

    EPA Science Inventory

    The report gives results of Phase 1 of an effort to develop improved methodologies for estimating area source emissions of air pollutants from stationary combustion sources. The report (1) evaluates Area and Mobile Source (AMS) subsystem methodologies; (2) compares AMS results w...

  5. METHODOLOGIES FOR ESTIMATING AIR EMISSIONS FROM THREE NON-TRADITIONAL SOURCE CATEGORIES: OIL SPILLS, PETROLEUM VESSEL LOADING & UNLOADING, AND COOLING TOWERS

    EPA Science Inventory

    The report discusses part of EPA's program to identify and characterize emissions sources not currently accounted for by either the existing Aerometric Information Retrieval System (AIRS) or State Implementation Plan (SIP) area source methodologies and to develop appropriate emis...

  6. REVISED EMISSIONS ESTIMATION METHODOLOGIES FOR INDUSTRIAL, RESIDENTIAL, AND ELECTRIC UTILITY STATIONARY COMBUSTION SOURCES

    EPA Science Inventory

    The report describes the development of improved and streamlined EPA emission estimation methods for stationary combustion area sources by the Joint Emissions Inventory Oversight Group (JEIOG) research program. These sources include categories traditionally labeled "other statio...

  7. An approach to software cost estimation

    NASA Technical Reports Server (NTRS)

    Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.

    1984-01-01

    A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of source estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment developed using this methodology are presented.
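
    The record above reviews parametric resource-estimation models in general terms. As a purely illustrative Python sketch, the snippet below evaluates a generic power-law effort model of the kind such reviews cover, effort = a × KSLOC^b adjusted by cost-driver multipliers; the coefficients and the multiplier value are hypothetical placeholders, not the Software Engineering Laboratory's calibrated model.

      # Generic parametric software-effort model; constants are hypothetical,
      # not calibrated values from the Software Engineering Laboratory.
      def estimate_effort(ksloc, a=3.0, b=1.12, multipliers=()):
          """Staff-months as a power law of size, adjusted by cost-driver multipliers."""
          effort = a * ksloc ** b
          for m in multipliers:  # e.g. team experience, required reliability
              effort *= m
          return effort

      # a hypothetical 30 KSLOC flight-dynamics subsystem with an experienced team (0.9 multiplier)
      print(f"Estimated effort: {estimate_effort(30, multipliers=(0.9,)):.1f} staff-months")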

  8. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    PubMed Central

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, which quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. Respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneously breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was considerably lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777
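
    The record above combines several respiration-modulated PPG features and accepts a rate estimate only where the features are coupled. The paper uses quadratic time-frequency distributions and time-frequency coherence; as a loose, simplified stand-in, the Python sketch below uses ordinary Welch-based magnitude-squared coherence (scipy.signal.coherence) between two synthetic feature series and keeps the spectral peak only when the coherence at that frequency is high. All signals, bands, and thresholds are invented.

      import numpy as np
      from scipy.signal import coherence, welch

      def respiratory_rate(sig_a, sig_b, fs, band=(0.1, 0.6), min_coh=0.7):
          """Shared respiratory rate (Hz) from two PPG-derived series, accepted
          only when the two series are coherent at the spectral peak."""
          f_c, coh = coherence(sig_a, sig_b, fs=fs, nperseg=256)
          f_p, pxx = welch(sig_a, fs=fs, nperseg=256)
          in_band = (f_p >= band[0]) & (f_p <= band[1])
          peak = f_p[in_band][np.argmax(pxx[in_band])]
          return peak if np.interp(peak, f_c, coh) >= min_coh else None

      # two synthetic features modulated by the same 0.25 Hz respiration, plus noise
      fs = 4.0
      t = np.arange(0, 300, 1 / fs)
      resp = np.sin(2 * np.pi * 0.25 * t)
      rng = np.random.default_rng(0)
      rate = respiratory_rate(resp + 0.3 * rng.standard_normal(t.size),
                              resp + 0.3 * rng.standard_normal(t.size), fs)
      print("estimated respiratory rate:", rate, "Hz")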

  9. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.

  10. Global carbon monoxide cycle: Modeling and data analysis

    NASA Astrophysics Data System (ADS)

    Arellano, Avelino F., Jr.

    The overarching goal of this dissertation is to develop robust, spatially and temporally resolved CO source estimates, using global chemical transport modeling and CO measurements from the Climate Monitoring and Diagnostic Laboratory (CMDL) and Measurement of Pollution In The Troposphere (MOPITT), under the framework of Bayesian synthesis inversion. To rigorously quantify the CO sources, I conducted five sets of inverse analyses, with each set investigating specific methodological and scientific issues. The first two inverse analyses separately explored two different CO observation sets to estimate CO sources by region and sector. Under a range of scenarios relating to inverse methodology and data quality issues, top-down estimates using CMDL surface and MOPITT remote-sensed CO measurements show consistent results, in particular a significantly larger fossil fuel/biofuel (FFBF) emission in East Asia than present bottom-up estimates. The robustness of this estimate is strongly supported by forward and inverse modeling studies in the region, particularly from the TRansport and Chemical Evolution over the Pacific (TRACE-P) campaign. The first use of high-resolution measurements in CO inversion also draws attention to a methodological issue: the range of estimates across the scenarios is larger than the posterior uncertainties, suggesting that estimate uncertainties may be underestimated. My analyses highlight the utility of the top-down approach in providing additional constraints on present global estimates, pointing to other discrepancies, including apparent underestimation of FFBF sources in Africa/Latin America and biomass burning (BIOM) sources in Africa, southeast Asia and northern Latin America, and indicating inconsistencies in our current understanding of fuel use and land-use patterns in these regions. The inverse analysis using MOPITT is extended to determine the extent of MOPITT information and to estimate monthly regional CO sources. A major finding, which is consistent with other atmospheric observations but differs from satellite area-burned observations, is a significant overestimation in southern Africa for June/July relative to satellite- and model-constrained BIOM emissions of CO. Sensitivity inverse analyses on observation error covariance and structure, and a sequential inversion using NOAA CMDL data to fully exploit available information, confirm the robustness of the estimates and also reveal the limitations of the approach, implying the need to further improve the methodology and to reconcile discrepancies.
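
    The dissertation above relies on Bayesian synthesis inversion. A minimal Python sketch of the underlying linear-Gaussian update (posterior source estimate from a prior, a transport-model Jacobian, and observations) is given below; the Jacobian, observations, and error covariances are random placeholders, not CMDL or MOPITT data.

      import numpy as np

      def bayesian_synthesis_inversion(H, y, x_prior, B, R):
          """Posterior mean/covariance for y ~ H x with Gaussian prior and observation errors."""
          HT_Rinv = H.T @ np.linalg.inv(R)
          P_post = np.linalg.inv(HT_Rinv @ H + np.linalg.inv(B))
          x_post = x_prior + P_post @ HT_Rinv @ (y - H @ x_prior)
          return x_post, P_post

      # toy problem: 3 regional sources seen through 10 model-sampled concentrations
      rng = np.random.default_rng(0)
      H = rng.random((10, 3))                      # placeholder transport-model Jacobian
      x_true = np.array([1.2, 0.8, 2.0])
      y = H @ x_true + 0.05 * rng.standard_normal(10)
      x_post, P_post = bayesian_synthesis_inversion(
          H, y, x_prior=np.ones(3), B=np.eye(3), R=0.05 ** 2 * np.eye(10))
      print("posterior sources:", x_post.round(2), "+/-", np.sqrt(np.diag(P_post)).round(3))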

  11. The Demand for Scientific and Technical Manpower in Selected Energy-Related Industries, 1970-85: A Methodology Applied to a Selected Scenario of Energy Output. A Summary.

    ERIC Educational Resources Information Center

    Gutmanis, Ivars; And Others

    The primary purpose of the study was to develop and apply a methodology for estimating the need for scientists and engineers by specialty in energy and energy-related industries. The projections methodology was based on the Case 1 estimates by the National Petroleum Council of the results of "maximum efforts" to develop domestic fuel sources by…

  12. A new wave front shape-based approach for acoustic source localization in an anisotropic plate without knowing its material properties.

    PubMed

    Sen, Novonil; Kundu, Tribikram

    2018-07-01

    Estimating the location of an acoustic source in a structure is an important step towards passive structural health monitoring. Techniques for localizing an acoustic source in isotropic structures are well developed in the literature. Development of similar techniques for anisotropic structures, however, has gained attention only in recent years and leaves scope for further improvement. Most of the existing techniques for anisotropic structures either assume a straight-line wave propagation path between the source and an ultrasonic sensor or require the material properties to be known. This study considers different shapes of the wave front generated during an acoustic event and develops a methodology to localize the acoustic source in an anisotropic plate from those wave front shapes. An elliptical wave front shape-based technique was developed first, followed by the development of a parametric curve-based technique for non-elliptical wave front shapes. The source coordinates are obtained by minimizing an objective function. The proposed methodology does not assume a straight-line wave propagation path and can predict the source location without any knowledge of the elastic properties of the material. A numerical study presented here illustrates how the proposed methodology can accurately estimate the source coordinates. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Nonpoint source pollution of urban stormwater runoff: a methodology for source analysis.

    PubMed

    Petrucci, Guido; Gromaire, Marie-Christine; Shorshani, Masoud Fallah; Chebbo, Ghassan

    2014-09-01

    The characterization and control of runoff pollution from nonpoint sources in urban areas are a major issue for the protection of aquatic environments. We propose a methodology to quantify the sources of pollutants in an urban catchment and to analyze the associated uncertainties. After describing the methodology, we illustrate it through an application to the sources of Cu, Pb, Zn, and polycyclic aromatic hydrocarbons (PAH) from a residential catchment (228 ha) in the Paris region. In this application, we suggest several procedures that can be applied for the analysis of other pollutants in different catchments, including an estimation of the total extent of roof accessories (gutters and downspouts, watertight joints and valleys) in a catchment. These accessories emerge as the major source of Pb and an important source of Zn in the example catchment, while activity-related sources (traffic, heating) are dominant for Cu (brake pad wear) and PAH (tire wear, atmospheric deposition).

  14. Utility of Capture-Recapture Methodology to Estimate Prevalence of Congenital Heart Defects Among Adolescents in 11 New York State Counties: 2008 to 2010.

    PubMed

    Akkaya-Hocagil, Tugba; Hsu, Wan-Hsiang; Sommerhalter, Kristin; McGarry, Claire; Van Zutphen, Alissa

    2017-11-01

    Congenital heart defects (CHDs) are the most common birth defects in the United States, and the population of individuals living with CHDs is growing. Though CHD prevalence in infancy has been well characterized, better prevalence estimates among children and adolescents in the United States are still needed. We used capture-recapture methods to estimate CHD prevalence among adolescents residing in 11 New York counties. The three data sources used for analysis included Statewide Planning and Research Cooperative System (SPARCS) hospital inpatient records, SPARCS outpatient records, and medical records provided by seven pediatric congenital cardiac clinics from 2008 to 2010. Bayesian log-linear models were fit using the R package Conting to account for dataset dependencies and heterogeneous catchability. A total of 2537 adolescent CHD cases were captured in our three data sources. Forty-four cases were identified in all data sources, 283 cases were identified in two of three data sources, and 2210 cases were identified in a single data source. The final model yielded an estimated total adolescent CHD population of 3845, indicating that 66% of the cases in the catchment area were identified in the case-identifying data sources. Based on 2010 Census estimates, we estimated adolescent CHD prevalence as 6.4 CHD cases per 1000 adolescents (95% confidence interval: 6.2-6.6). We used capture-recapture methodology with a population-based surveillance system in New York to estimate CHD prevalence among adolescents. Future research incorporating additional data sources may improve prevalence estimates in this population. Birth Defects Research 109:1423-1429, 2017. © 2017 Wiley Periodicals, Inc.
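
    The record above fits Bayesian log-linear models (R package Conting) to three-source capture histories. As a simplified, non-Bayesian illustration of the same idea, the Python sketch below fits a Poisson log-linear model with all two-way interactions to the seven observed capture-history cells and projects the unobserved (0,0,0) cell; the counts are invented, not the New York data.

      import numpy as np
      import statsmodels.api as sm

      # Observed capture histories across three sources (1 = case appears in that source);
      # the (0,0,0) history is unobservable. Counts are invented for illustration only.
      histories = np.array([[0, 0, 1], [0, 1, 0], [0, 1, 1],
                            [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
      counts = np.array([1200.0, 700.0, 150.0, 310.0, 60.0, 73.0, 44.0])

      def design(h):
          s1, s2, s3 = h[:, 0], h[:, 1], h[:, 2]
          # intercept, main effects, and all two-way interactions (no three-way term)
          return np.column_stack([np.ones(len(h)), s1, s2, s3, s1 * s2, s1 * s3, s2 * s3])

      fit = sm.GLM(counts, design(histories), family=sm.families.Poisson()).fit()
      missing = np.exp(fit.params[0])  # expected count in the unobserved (0,0,0) cell
      print(f"observed cases: {counts.sum():.0f}, estimated total: {counts.sum() + missing:.0f}")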

  15. Accounting Methodology for Source Energy of Non-Combustible Renewable Electricity Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donohoo-Vallett, Paul

    As non-combustible sources of renewable power (wind, solar, hydro, and geothermal) do not consume fuel, the “source” (or “primary”) energy from these sources cannot be accounted for in the same manner as it is for fossil fuel sources. The methodology chosen for these technologies is important as it affects the perception of the relative size of renewable source energy to fossil energy, affects estimates of source-based building energy use, and affects overall source energy-based metrics such as energy productivity. This memo reviews the methodological choices, outlines implications of each choice, summarizes responses to a request for information on this topic, and presents guiding principles for the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) to use in determining where modifying the current renewable source energy accounting method used in EERE products and analyses would be appropriate to address the issues raised above.

  16. RERANKING OF AREA SOURCES IN LIGHT OF SEASONAL/ REGIONAL EMISSION FACTORS AND STATE/LOCAL NEEDS

    EPA Science Inventory

    The report gives results of an effort to provide a better understanding of air pollution area sources and their emissions, to prioritize their importance as emitters of volatile organic compounds (VOCs), and to identify sources for which better emission estimation methodologies a...

  17. Temporal variability patterns in solar radiation estimations

    NASA Astrophysics Data System (ADS)

    Vindel, José M.; Navarro, Ana A.; Valenzuela, Rita X.; Zarzalejo, Luis F.

    2016-06-01

    In this work, solar radiation estimations obtained from a satellite and a numerical weather prediction model in mainland Spain have been compared. Similar comparisons have been formerly carried out, but in this case, the methodology used is different: the temporal variability of both sources of estimation has been compared with the annual evolution of the radiation associated with the different study climate zones. The methodology is based on obtaining behavior patterns, using a Principal Component Analysis, following the annual evolution of solar radiation estimations. Indeed, the degree of adjustment to these patterns at each point (assessed from correlation maps) may be associated with the annual radiation variation (assessed from the interquartile range), which is associated, in turn, with different climate zones. In addition, the goodness of each estimation source has been assessed by comparing it with ground-based pyranometer measurements of radiation. For the study, radiation data from Satellite Application Facilities and data corresponding to the reanalysis carried out by the European Centre for Medium-Range Weather Forecasts have been used.

  18. Development of a source-exposure matrix for occupational exposure assessment of electromagnetic fields in the INTEROCC study

    PubMed Central

    Vila, Javier; Bowman, Joseph D; Figuerola, Jordi; Moriña, David; Kincl, Laurel; Richardson, Lesley; Cardis, Elisabeth

    2017-01-01

    Introduction To estimate occupational exposures to electromagnetic fields (EMF) for the INTEROCC study, a database of source-based measurements extracted from published and unpublished literature resources had been previously constructed. The aim of the current work was to summarize these measurements into a source-exposure matrix (SEM), accounting for their quality and relevance. Methods A novel methodology for combining available measurements was developed, based on order statistics and log-normal distribution characteristics. Arithmetic and geometric means, and estimates of variability and maximum exposure were calculated by EMF source, frequency band and dosimetry type. Mean estimates were weighted by our confidence on the pooled measurements. Results The SEM contains confidence-weighted mean and maximum estimates for 312 EMF exposure sources (from 0 Hz to 300 GHz). Operator position geometric mean electric field levels for RF sources ranged between 0.8 V/m (plasma etcher) and 320 V/m (RF sealer), while magnetic fields ranged from 0.02 A/m (speed radar) to 0.6 A/m (microwave heating). For ELF sources, electric fields ranged between 0.2 V/m (electric forklift) and 11,700 V/m (HVTL-hotsticks), while magnetic fields ranged between 0.14 μT (visual display terminals) and 17 μT (TIG welding). Conclusion The methodology developed allowed the construction of the first EMF-SEM and may be used to summarize similar exposure data for other physical or chemical agents. PMID:27827378
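
    The record above pools source-based EMF measurements into confidence-weighted summary statistics. The Python sketch below shows one plausible confidence-weighted arithmetic and geometric mean, with invented readings and weights; it is not the INTEROCC algorithm itself, which also draws on order statistics and log-normal distribution characteristics.

      import numpy as np

      def pooled_estimates(values, weights):
          """Confidence-weighted arithmetic and geometric means of exposure measurements."""
          values, weights = np.asarray(values, float), np.asarray(weights, float)
          w = weights / weights.sum()
          am = np.sum(w * values)
          gm = np.exp(np.sum(w * np.log(values)))
          return am, gm

      # invented electric-field readings (V/m) for one source, with confidence scores 1-3
      am, gm = pooled_estimates([45.0, 60.0, 120.0, 30.0], weights=[3, 2, 1, 3])
      print(f"arithmetic mean {am:.1f} V/m, geometric mean {gm:.1f} V/m")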

  19. An almost-parameter-free harmony search algorithm for groundwater pollution source identification.

    PubMed

    Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui

    2013-01-01

    The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the identified results indicate that the proposed almost-parameter-free harmony search algorithm-based optimization model can give satisfactory estimations, even when the irregular geometry, erroneous monitoring data, and prior information shortage of potential locations are considered.
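
    The record above identifies a pollution source with a harmony search algorithm wrapped around a transport simulation. The Python sketch below is a minimal, fixed-parameter harmony search minimizing a placeholder misfit function that stands in for the simulation/data mismatch; the paper's "almost parameter-free" variant instead adapts its control parameters, and the bounds, misfit, and true source here are invented.

      import numpy as np

      def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, iters=2000, seed=0):
          """Minimal harmony search: evolve a memory of candidate source parameters."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, float).T
          dim = len(bounds)
          memory = lo + rng.random((hms, dim)) * (hi - lo)
          scores = np.array([objective(x) for x in memory])
          for _ in range(iters):
              new = np.empty(dim)
              for j in range(dim):
                  if rng.random() < hmcr:                      # memory consideration
                      new[j] = memory[rng.integers(hms), j]
                      if rng.random() < par:                   # pitch adjustment
                          new[j] += 0.01 * (hi[j] - lo[j]) * rng.standard_normal()
                  else:                                        # random selection
                      new[j] = lo[j] + rng.random() * (hi[j] - lo[j])
              new = np.clip(new, lo, hi)
              worst = scores.argmax()
              score = objective(new)
              if score < scores[worst]:
                  memory[worst], scores[worst] = new, score
          return memory[scores.argmin()]

      # placeholder misfit standing in for the transport-model/data mismatch;
      # the true source (x, y, release rate) is at (250 m, 180 m, 4.0 units)
      truth = np.array([250.0, 180.0, 4.0])
      best = harmony_search(lambda x: float(np.sum((x - truth) ** 2)),
                            bounds=[(0, 500), (0, 500), (0, 10)])
      print("identified source (x, y, rate):", best.round(1))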

  20. Using State Estimation Residuals to Detect Abnormal SCADA Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Chen, Yousu; Huang, Zhenyu

    2010-06-14

    Detection of manipulated supervisory control and data acquisition (SCADA) data is critically important for the safe and secure operation of modern power systems. In this paper, a methodology of detecting manipulated SCADA data based on state estimation residuals is presented. A framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated based on the results of the state estimator are used as the input for the outlier detection process. The BACON algorithm is applied to detect outliers in the state estimation residuals. The IEEE 118-bus system is used as a test case to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.
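
    The record above applies the BACON algorithm to state-estimation residuals. The Python sketch below is a simplified, univariate BACON-like procedure: it grows a clean basic subset of residuals and flags everything outside it. The residuals are synthetic, and the real BACON algorithm uses multivariate distances and chi-square based cutoffs.

      import numpy as np

      def bacon_like_outliers(residuals, m=20, alpha=3.0, max_iter=50):
          """Flag abnormal residuals by growing a 'clean' basic subset (simplified, univariate)."""
          r = np.asarray(residuals, float)
          subset = np.sort(np.argsort(np.abs(r - np.median(r)))[:m])  # initial basic subset
          for _ in range(max_iter):
              mu, sd = r[subset].mean(), r[subset].std(ddof=1)
              new_subset = np.flatnonzero(np.abs(r - mu) <= alpha * sd)
              if np.array_equal(new_subset, subset):
                  break
              subset = new_subset
          return np.setdiff1d(np.arange(r.size), subset)

      # synthetic state-estimation residuals with two manipulated measurements
      rng = np.random.default_rng(1)
      res = rng.normal(0.0, 0.01, 200)
      res[[17, 123]] += 0.2
      print("flagged measurement indices:", bacon_like_outliers(res))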

  1. Safety assessment methodology in management of spent sealed sources.

    PubMed

    Mahmoud, Narmine Salah

    2005-02-14

    Environmental hazards can be caused by radioactive waste after its disposal. It is therefore important that safety assessment methodologies be developed and established to study and estimate the possible hazards, and to institute safety measures that prevent the evolution of these hazards. Spent sealed sources are a specific type of radioactive waste. According to the IAEA definition, spent sealed sources are unused sources because of activity decay, damage, misuse, loss, or theft. Accidental exposure of humans to spent sealed sources can occur from the moment they become spent and before their disposal. For that reason, safety assessment methodologies were tailored to suit the management of spent sealed sources. To provide understanding of and confidence in this study, a validation analysis was undertaken by considering the scenario of an accident that occurred in Egypt in June 2000 (the Meet-Halfa accident, involving an iridium-192 source). The text of this work includes considerations related to the safety assessment approach for spent sealed sources, which comprises the assessment context, processes leading an active source to become spent, accident scenarios, mathematical models for dose calculations, and radiological consequences and regulatory criteria. The text also includes a validation study, carried out by evaluating a theoretical scenario against the real scenario of the Meet-Halfa accident, depending on the clinical assessment of affected individuals.

  2. A Methodology for the Estimation of the Wind Generator Economic Efficiency

    NASA Astrophysics Data System (ADS)

    Zaleskis, G.

    2017-12-01

    Integration of renewable energy sources and the improvement of the technological base may not only reduce the consumption of fossil fuel and environmental load, but also ensure the power supply in regions with difficult fuel delivery or power failures. The main goal of the research is to develop the methodology of evaluation of the wind turbine economic efficiency. The research has demonstrated that the electricity produced from renewable sources may be much more expensive than the electricity purchased from the conventional grid.

  3. Verification of Agricultural Methane Emission Inventories

    NASA Astrophysics Data System (ADS)

    Desjardins, R. L.; Pattey, E.; Worth, D. E.; VanderZaag, A.; Mauder, M.; Srinivasan, R.; Worthy, D.; Sweeney, C.; Metzger, S.

    2017-12-01

    It is estimated that agriculture contributes more than 40% of anthropogenic methane (CH4) emissions in North America. However, these estimates, which are either based on the Intergovernmental Panel on Climate Change (IPCC) methodology or inverse modeling techniques, are poorly validated due to the challenges of separating interspersed CH4 sources within agroecosystems. A flux aircraft, instrumented with a fast-response Picarro CH4 analyzer for the eddy covariance (EC) technique and a sampling system for the relaxed eddy accumulation technique (REA), was flown at an altitude of about 150 m along several 20-km transects over an agricultural region in Eastern Canada. For all flight days, the top-down CH4 flux density measurements were compared to the footprint adjusted bottom-up estimates based on an IPCC Tier II methodology. Information on the animal population, land use type and atmospheric and surface variables was available for each transect. Top-down and bottom-up estimates of CH4 emissions were found to be poorly correlated, and wetlands were the most frequent confounding source of CH4; however, there were other sources such as waste treatment plants and biodigesters. Spatially resolved wavelet covariance estimates of CH4 emissions helped identify the contribution of wetlands to the overall CH4 flux, and the dependence of these emissions on temperature. When wetland contribution in the flux footprint was minimized, top-down and bottom-up estimates agreed to within measurement error. This research demonstrates that although existing aircraft-based technology can be used to verify regional (~100 km2) agricultural CH4 emissions, it remains challenging due to diverse sources of CH4 present in many regions. The use of wavelet covariance to generate spatially-resolved flux estimates was found to be the best way to separate interspersed sources of CH4.

  4. Collective odor source estimation and search in time-variant airflow environments using mobile robots.

    PubMed

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.

  5. Collective Odor Source Estimation and Search in Time-Variant Airflow Environments Using Mobile Robots

    PubMed Central

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650
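
    Both records above coordinate the robot search with particle swarm optimization over an estimated odor-source probability map. The Python sketch below is a minimal PSO in which the fitness is a placeholder Gaussian probability surface standing in for the fused Bayesian map; positions, parameters, and the source location are invented.

      import numpy as np

      def probability_map(p, source=np.array([8.0, 3.0])):
          """Placeholder fused odor-source probability surface (higher = more likely)."""
          return float(np.exp(-np.sum((p - source) ** 2) / 4.0))

      def pso_search(fitness, n_robots=5, iters=200, w=0.6, c1=1.5, c2=1.5, seed=2):
          rng = np.random.default_rng(seed)
          pos = rng.uniform(0, 10, (n_robots, 2))        # robot positions in a 10 m x 10 m area
          vel = np.zeros_like(pos)
          pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
          gbest = pbest[pbest_val.argmax()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
              vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
              pos = np.clip(pos + vel, 0, 10)
              vals = np.array([fitness(p) for p in pos])
              improved = vals > pbest_val
              pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
              gbest = pbest[pbest_val.argmax()].copy()
          return gbest

      print("estimated odor source location:", pso_search(probability_map).round(2))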

  6. Development of a source-exposure matrix for occupational exposure assessment of electromagnetic fields in the INTEROCC study.

    PubMed

    Vila, Javier; Bowman, Joseph D; Figuerola, Jordi; Moriña, David; Kincl, Laurel; Richardson, Lesley; Cardis, Elisabeth

    2017-07-01

    To estimate occupational exposures to electromagnetic fields (EMF) for the INTEROCC study, a database of source-based measurements extracted from published and unpublished literature resources had been previously constructed. The aim of the current work was to summarize these measurements into a source-exposure matrix (SEM), accounting for their quality and relevance. A novel methodology for combining available measurements was developed, based on order statistics and log-normal distribution characteristics. Arithmetic and geometric means, and estimates of variability and maximum exposure were calculated by EMF source, frequency band and dosimetry type. The mean estimates were weighted by our confidence in the pooled measurements. The SEM contains confidence-weighted mean and maximum estimates for 312 EMF exposure sources (from 0 Hz to 300 GHz). Operator position geometric mean electric field levels for radiofrequency (RF) sources ranged between 0.8 V/m (plasma etcher) and 320 V/m (RF sealer), while magnetic fields ranged from 0.02 A/m (speed radar) to 0.6 A/m (microwave heating). For extremely low frequency sources, electric fields ranged between 0.2 V/m (electric forklift) and 11,700 V/m (high-voltage transmission line-hotsticks), whereas magnetic fields ranged between 0.14 μT (visual display terminals) and 17 μT (tungsten inert gas welding). The methodology developed allowed the construction of the first EMF-SEM and may be used to summarize similar exposure data for other physical or chemical agents.

  7. Using State Estimation Residuals to Detect Abnormal SCADA Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Chen, Yousu; Huang, Zhenyu

    2010-04-30

    Detection of abnormal supervisory control and data acquisition (SCADA) data is critically important for safe and secure operation of modern power systems. In this paper, a methodology of abnormal SCADA data detection based on state estimation residuals is presented. Preceded with a brief overview of outlier detection methods and bad SCADA data detection for state estimation, the framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated based on the results of the state estimator are used as the input for the outlier detection algorithm. The BACON algorithm is applied to the outlier detection task. The IEEE 118-bus system is used as a test base to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.

  8. Estimating air emissions from ships: Meta-analysis of modelling approaches and available data sources

    NASA Astrophysics Data System (ADS)

    Miola, Apollonia; Ciuffo, Biagio

    2011-04-01

    Maritime transport plays a central role in the transport sector's sustainability debate. Its contribution to air pollution and greenhouse gases is significant. An effective policy strategy to regulate air emissions requires their robust estimation in terms of quantification and location. This paper provides a critical analysis of the ship emission modelling approaches and data sources available, identifying their limits and constraints. It classifies the main methodologies on the basis of the approach followed (bottom-up or top-down) for the evaluation and geographic characterisation of emissions. The analysis highlights the uncertainty of results from the different methods. This is mainly due to the level of uncertainty connected with the sources of information that are used as inputs to the different studies. This paper describes the sources of the information required for these analyses, paying particular attention to AIS data and to the possible problems associated with their use. One way of reducing the overall uncertainty in the results could be the simultaneous use of different sources of information. This paper presents an alternative methodology based on this approach. As a final remark, it can be expected that new approaches to the problem together with more reliable data sources over the coming years could give more impetus to the debate on the global impact of maritime traffic on the environment that, currently, has only reached agreement via the "consensus" estimates provided by IMO (2009).
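
    The review above contrasts top-down and bottom-up (activity-based) ship emission estimates. A minimal bottom-up Python sketch of the per-ship calculation, installed power × load factor × operating hours × emission factor, is shown below; the emission factors and fleet activity data are illustrative placeholders, not IMO (2009) values.

      # Minimal bottom-up ship emission estimate: E = P_installed * load_factor * hours * EF.
      # Emission factors (g/kWh) and ship data are illustrative, not IMO (2009) values.
      EF_G_PER_KWH = {"NOx": 14.0, "SO2": 10.5, "CO2": 620.0}

      def ship_emissions_tonnes(installed_kw, load_factor, hours, pollutant):
          energy_kwh = installed_kw * load_factor * hours
          return energy_kwh * EF_G_PER_KWH[pollutant] / 1e6   # grams -> tonnes

      fleet = [  # (main engine kW, load factor, hours at sea per year), hypothetical AIS-derived activity
          (12_000, 0.75, 6000),
          (8_500, 0.60, 5200),
      ]
      total_co2 = sum(ship_emissions_tonnes(p, lf, h, "CO2") for p, lf, h in fleet)
      print(f"fleet CO2: {total_co2:,.0f} t/yr")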

  9. A synthesis of convenience survey and other data to estimate undiagnosed HIV infection among men who have sex with men in England and Wales.

    PubMed

    Walker, Kate; Seaman, Shaun R; De Angelis, Daniela; Presanis, Anne M; Dodds, Julie P; Johnson, Anne M; Mercey, Danielle; Gill, O Noel; Copas, Andrew J

    2011-10-01

    Hard-to-reach population subgroups are typically investigated using convenience sampling, which may give biased estimates. Combining information from such surveys, a probability survey and clinic surveillance, can potentially minimize the bias. We developed a methodology to estimate the prevalence of undiagnosed HIV infection among men who have sex with men (MSM) in England and Wales aged 16-44 years in 2003, making fuller use of the available data than earlier work. We performed a synthesis of three data sources: genitourinary medicine clinic surveillance (11 380 tests), a venue-based convenience survey including anonymous HIV testing (3702 MSM) and a general population sexual behaviour survey (134 MSM). A logistic regression model to predict undiagnosed infection was fitted to the convenience survey data and then applied to the MSMs in the population survey to estimate the prevalence of undiagnosed infection in the general MSM population. This estimate was corrected for selection biases in the convenience survey using clinic surveillance data. A sensitivity analysis addressed uncertainty in our assumptions. The estimated prevalence of undiagnosed HIV in MSM was 2.4% [95% confidence interval (95% CI 1.7-3.0%)], and between 1.6% (95% CI 1.1-2.0%) and 3.3% (95% CI 2.4-4.1%) depending on assumptions; corresponding to 5500 (3390-7180), 3610 (2180-4740) and 7570 (4790-9840) men, and undiagnosed fractions of 33, 24 and 40%, respectively. Our estimates are consistent with earlier work that did not make full use of data sources. Reconciling data from multiple sources, including probability-, clinic- and venue-based convenience samples can reduce bias in estimates. This methodology could be applied in other settings to take full advantage of multiple imperfect data sources.
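
    The central step in the record above is fitting a model for undiagnosed infection on the convenience survey and then applying it to MSM in the probability survey. The Python sketch below shows that transfer step with synthetic covariates and a plain logistic regression; the published bias corrections using clinic surveillance are omitted, and all numbers are invented.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)

      # Synthetic convenience-survey data: covariates (age, attended a GUM clinic in last year)
      # and an anonymous test result for undiagnosed HIV. All numbers are invented.
      age = rng.integers(16, 45, 3000)
      clinic = rng.integers(0, 2, 3000)
      X_conv = np.column_stack([age, clinic])
      p_true = 1.0 / (1.0 + np.exp(-(-4.5 + 0.05 * age + 0.4 * clinic)))
      y_conv = (rng.random(3000) < p_true).astype(int)

      model = LogisticRegression(max_iter=1000).fit(X_conv, y_conv)

      # apply the fitted model to MSM respondents from the probability survey (also synthetic)
      X_pop = np.column_stack([rng.integers(16, 45, 134), rng.integers(0, 2, 134)])
      prevalence = model.predict_proba(X_pop)[:, 1].mean()
      print(f"estimated prevalence of undiagnosed HIV: {prevalence:.1%}")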

  10. Hot emission model for mobile sources: application to the metropolitan region of the city of Santiago, Chile.

    PubMed

    Corvalán, Roberto M; Osses, Mauricio; Urrutia, Cristian M

    2002-02-01

    Depending on the final application, several methodologies for traffic emission estimation have been developed. Emission estimation based on total miles traveled or other average factors is a sufficient approach only for extended areas such as national or worldwide areas. For road emission control and strategies design, microscale analysis based on real-world emission estimations is often required. This involves actual driving behavior and emission factors of the local vehicle fleet under study. This paper reports on a microscale model for hot road emissions and its application to the metropolitan region of the city of Santiago, Chile. The methodology considers the street-by-street hot emission estimation with its temporal and spatial distribution. The input data come from experimental emission factors based on local driving patterns and traffic surveys of traffic flows for different vehicle categories. The methodology developed is able to estimate hourly hot road CO, total unburned hydrocarbons (THCs), particulate matter (PM), and NO(x) emissions for predefined day types and vehicle categories.
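
    The street-by-street hot-emission calculation described above boils down to flow × link length × speed-dependent emission factor, per vehicle category and hour. The Python sketch below illustrates this with an invented emission-factor curve and invented link data, not the Santiago factors.

      # Street-by-street hot emissions: E[g/h] = flow[veh/h] * length[km] * EF(speed)[g/veh-km].
      # Emission-factor curve and link data are invented placeholders, not Santiago values.

      def ef_co_g_per_km(speed_kmh):
          """Hypothetical speed-dependent CO emission factor for gasoline cars."""
          return 60.0 / max(speed_kmh, 5.0) + 0.08 * speed_kmh

      EF_BUS_CO = 8.5  # g/km, placeholder

      links = [  # (link id, length km, mean speed km/h, {category: veh/h})
          ("Alameda_01", 0.8, 22, {"car": 1800, "bus": 120}),
          ("Gran_Av_07", 1.2, 35, {"car": 950, "bus": 60}),
      ]

      for link_id, length, speed, flows in links:
          e_co = length * (flows["car"] * ef_co_g_per_km(speed) + flows["bus"] * EF_BUS_CO)
          print(f"{link_id}: {e_co / 1000:.1f} kg CO per hour")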

  11. A Bayesian methodological framework for accommodating interannual variability of nutrient loading with the SPARROW model

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Labencki, Tanya; Boyd, Duncan

    2012-10-01

    Regression-type, hybrid empirical/process-based models (e.g., SPARROW, PolFlow) have assumed a prominent role in efforts to estimate the sources and transport of nutrient pollution at river basin scales. However, almost no attempts have been made to explicitly accommodate interannual nutrient loading variability in their structure, despite empirical and theoretical evidence indicating that the associated source/sink processes are quite variable at annual timescales. In this study, we present two methodological approaches to accommodate interannual variability with the Spatially Referenced Regressions on Watershed attributes (SPARROW) nonlinear regression model. The first strategy uses the SPARROW model to estimate a static baseline load and climatic variables (e.g., precipitation) to drive the interannual variability. The second approach allows the source/sink processes within the SPARROW model to vary at annual timescales using dynamic parameter estimation techniques akin to those used in dynamic linear models. Model parameterization is founded upon Bayesian inference techniques that explicitly consider calibration data and model uncertainty. Our case study is the Hamilton Harbor watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. Our analysis suggests that dynamic parameter estimation is the more parsimonious of the two strategies tested and can offer insights into the temporal structural changes associated with watershed functioning. Consistent with empirical and theoretical work, model estimated annual in-stream attenuation rates varied inversely with annual discharge. Estimated phosphorus source areas were concentrated near the receiving water body during years of high in-stream attenuation and dispersed along the main stems of the streams during years of low attenuation, suggesting that nutrient source areas are subject to interannual variability.

  12. Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)

    NASA Astrophysics Data System (ADS)

    Kasibhatla, P.

    2004-12-01

    In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed-form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
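
    The record above advocates MCMC sampling of the a posteriori pdf for source estimates. The Python sketch below is a minimal random-walk Metropolis sampler for two source scaling factors in a linear tracer model; the Jacobian, observations, priors, and proposal scale are placeholders, and a TransCom3-style inversion is far larger.

      import numpy as np

      rng = np.random.default_rng(4)
      H = rng.random((25, 2))                        # placeholder transport-model Jacobian
      x_true = np.array([1.5, 0.7])
      y = H @ x_true + 0.05 * rng.standard_normal(25)

      def log_post(x, sigma_obs=0.05, prior_mean=1.0, prior_sd=0.5):
          """Gaussian likelihood for y ~ H x plus independent Gaussian priors on the sources."""
          return (-0.5 * np.sum(((y - H @ x) / sigma_obs) ** 2)
                  - 0.5 * np.sum(((x - prior_mean) / prior_sd) ** 2))

      x = np.array([1.0, 1.0])
      samples = []
      for _ in range(20000):
          prop = x + 0.05 * rng.standard_normal(2)   # random-walk proposal
          if np.log(rng.random()) < log_post(prop) - log_post(x):
              x = prop
          samples.append(x.copy())
      post = np.array(samples[5000:])                # discard burn-in
      print("posterior mean sources:", post.mean(axis=0).round(2))
      print("posterior std:", post.std(axis=0).round(2))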

  13. Evaluation of near field atmospheric dispersion around nuclear facilities using a Lorentzian distribution methodology.

    PubMed

    Hawkley, Gavin

    2014-12-01

    Atmospheric dispersion modeling within the near field of a nuclear facility typically applies a building wake correction to the Gaussian plume model, whereby a point source is modeled as a plane source. The plane source results in greater near field dilution and reduces the far field effluent concentration. However, the correction does not account for the concentration profile within the near field. Receptors of interest, such as the maximally exposed individual, may exist within the near field and thus the realm of building wake effects. Furthermore, release parameters and displacement characteristics may be unknown, particularly during upset conditions. Therefore, emphasis is placed upon the need to analyze and estimate an enveloping concentration profile within the near field of a release. This investigation included the analysis of 64 air samples collected over 128 wk. Variables of importance were then derived from the measurement data, and a methodology was developed that allowed for the estimation of Lorentzian-based dispersion coefficients along the lateral axis of the near field recirculation cavity; the development of recirculation cavity boundaries; and conservative evaluation of the associated concentration profile. The results evaluated the effectiveness of the Lorentzian distribution methodology for estimating near field releases and emphasized the need to place air-monitoring stations appropriately for complete concentration characterization. Additionally, the importance of the sampling period and operational conditions were discussed to balance operational feedback and the reporting of public dose.
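
    The record above estimates Lorentzian-based dispersion coefficients along the lateral axis of the recirculation cavity. The Python sketch below fits a Lorentzian (Cauchy-shaped) profile C(y) = C0 / (1 + ((y - y0)/gamma)^2) to synthetic sampler data with scipy.optimize.curve_fit; the data and the use of gamma as a cavity-boundary scale are illustrative assumptions, not the study's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def lorentzian(y, c0, y0, gamma):
          """Lateral concentration profile across the near-field recirculation cavity."""
          return c0 / (1.0 + ((y - y0) / gamma) ** 2)

      # synthetic air-sampler data along the lateral axis (positions in m, arbitrary conc. units)
      y_pos = np.linspace(-60, 60, 13)
      rng = np.random.default_rng(5)
      conc = lorentzian(y_pos, c0=4.0, y0=5.0, gamma=18.0) + 0.1 * rng.standard_normal(13)

      (c0, y0, gamma), _ = curve_fit(lorentzian, y_pos, conc, p0=[conc.max(), 0.0, 20.0])
      print(f"peak {c0:.2f}, centre {y0:.1f} m, half-width gamma {gamma:.1f} m")
      # gamma sets the lateral spread; a few multiples of gamma can serve as a conservative cavity boundary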

  14. National Deployment Estimate of the Metropolitan ITS Infrastructure : Updated with 2010 Deployment Data, 7th Revision

    DOT National Transportation Integrated Search

    2011-12-01

    The purpose of this report is to provide a summary and back-up information on the methodology, data sources, and results for the estimate of Intelligent Transportation Systems (ITS) capital expenditures in the top 75 metropolitan areas as of FY 2010....

  15. Emission Database for Global Atmospheric Research (EDGAR).

    ERIC Educational Resources Information Center

    Olivier, J. G. J.; And Others

    1994-01-01

    Presents the objective and methodology chosen for the construction of a global emissions source database called EDGAR and the structural design of the database system. The database estimates on a regional and grid basis, 1990 annual emissions of greenhouse gases, and of ozone depleting compounds from all known sources. (LZ)

  16. Probabilistic tsunami hazard assessment at Seaside, Oregon, for near- and far-field seismic sources

    USGS Publications Warehouse

    Gonzalez, F.I.; Geist, E.L.; Jaffe, B.; Kanoglu, U.; Mofjeld, H.; Synolakis, C.E.; Titov, V.V.; Areas, D.; Bellomo, D.; Carlton, D.; Horning, T.; Johnson, J.; Newman, J.; Parsons, T.; Peters, R.; Peterson, C.; Priest, G.; Venturato, A.; Weber, J.; Wong, F.; Yalciner, A.

    2009-01-01

    The first probabilistic tsunami flooding maps have been developed. The methodology, called probabilistic tsunami hazard assessment (PTHA), integrates tsunami inundation modeling with methods of probabilistic seismic hazard assessment (PSHA). Application of the methodology to Seaside, Oregon, has yielded estimates of the spatial distribution of 100- and 500-year maximum tsunami amplitudes, i.e., amplitudes with 1% and 0.2% annual probability of exceedance. The 100-year tsunami is generated most frequently by far-field sources in the Alaska-Aleutian Subduction Zone and is characterized by maximum amplitudes that do not exceed 4 m, with an inland extent of less than 500 m. In contrast, the 500-year tsunami is dominated by local sources in the Cascadia Subduction Zone and is characterized by maximum amplitudes in excess of 10 m and an inland extent of more than 1 km. The primary sources of uncertainty in these results include those associated with interevent time estimates, modeling of background sea level, and accounting for temporal changes in bathymetry and topography. Nonetheless, PTHA represents an important contribution to tsunami hazard assessment techniques; viewed in the broader context of risk analysis, PTHA provides a method for quantifying estimates of the likelihood and severity of the tsunami hazard, which can then be combined with vulnerability and exposure to yield estimates of tsunami risk. Copyright 2009 by the American Geophysical Union.
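
    The combination step in PTHA treats each source zone as Poissonian and computes the annual probability that the tsunami amplitude at the site exceeds a given level; the 100- and 500-year amplitudes are where that probability equals 1% and 0.2%. The Python sketch below illustrates the arithmetic with invented rates and conditional exceedance curves, not the Seaside source model, so the printed amplitudes are arbitrary.

      import numpy as np

      def annual_exceedance(amplitude, sources):
          """P(at least one tsunami exceeds `amplitude` in a year), assuming Poissonian sources.
          Each source is (annual rate, callable giving P(amplitude exceeded | event))."""
          total_rate = sum(rate * p_exceed(amplitude) for rate, p_exceed in sources)
          return 1.0 - np.exp(-total_rate)

      # invented far-field and local sources (rate per year, conditional exceedance curve)
      sources = [
          (1 / 50.0,  lambda a: np.exp(-a / 1.5)),   # frequent events, modest amplitudes
          (1 / 500.0, lambda a: np.exp(-a / 8.0)),   # rare events, large amplitudes
      ]

      amps = np.linspace(0.0, 15.0, 301)
      p = np.array([annual_exceedance(a, sources) for a in amps])
      a100 = amps[np.argmin(np.abs(p - 0.01))]     # ~100-year amplitude
      a500 = amps[np.argmin(np.abs(p - 0.002))]    # ~500-year amplitude
      print(f"100-yr amplitude ~{a100:.1f} m, 500-yr amplitude ~{a500:.1f} m")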

  17. Costs of Addressing Heroin Addiction in Malaysia and 32 Comparable Countries Worldwide

    PubMed Central

    Ruger, Jennifer Prah; Chawarski, Marek; Mazlan, Mahmud; Luekens, Craig; Ng, Nora; Schottenfeld, Richard

    2012-01-01

    Objective Develop and apply new costing methodologies to estimate costs of opioid dependence treatment in countries worldwide. Data Sources/Study Setting Micro-costing methodology developed and data collected during randomized controlled trial (RCT) involving 126 patients (July 2003–May 2005) in Malaysia. Gross-costing methodology developed to estimate costs of treatment replication in 32 countries with data collected from publicly available sources. Study Design Fixed, variable, and societal cost components of Malaysian RCT micro-costed and analytical framework created and employed for gross-costing in 32 countries selected by three criteria relative to Malaysia: major heroin problem, geographic proximity, and comparable gross domestic product (GDP) per capita. Principal Findings Medication, and urine and blood testing accounted for the greatest percentage of total costs for both naltrexone (29–53 percent) and buprenorphine (33–72 percent) interventions. In 13 countries, buprenorphine treatment could be provided for under $2,000 per patient. For all countries except United Kingdom and Singapore, incremental costs per person were below $1,000 when comparing buprenorphine to naltrexone. An estimated 100 percent of opiate users in Cambodia and Lao People's Democratic Republic could be treated for $8 and $30 million, respectively. Conclusions Buprenorphine treatment can be provided at low cost in countries across the world. This study's new costing methodologies provide tools for health systems worldwide to determine the feasibility and cost of similar interventions. PMID:22091732

  18. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    PubMed

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
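
    The simplest form of the two-sample estimator named above is the Lincoln-Petersen estimator; the Python sketch below implements its bias-corrected (Chapman) version with invented genotype counts, treating hair-snag detections as captures and rub-tree detections as recaptures. The paper's Huggins-Pledger models in program MARK are considerably more general.

      def chapman_estimate(n_hair_snag, n_rub_tree, n_both):
          """Bias-corrected Lincoln-Petersen (Chapman) estimate of population size
          from hair-snag 'captures' and rub-tree 'recaptures'."""
          n_hat = (n_hair_snag + 1) * (n_rub_tree + 1) / (n_both + 1) - 1
          var = ((n_hair_snag + 1) * (n_rub_tree + 1) * (n_hair_snag - n_both) * (n_rub_tree - n_both)
                 / ((n_both + 1) ** 2 * (n_both + 2)))
          return n_hat, var ** 0.5

      # invented counts: bears genotyped at hair snags, at rub trees, and by both methods
      n_hat, se = chapman_estimate(n_hair_snag=280, n_rub_tree=190, n_both=95)
      print(f"estimated grizzly bears: {n_hat:.0f} (SE {se:.0f})")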

  19. Industrial Demand Module - NEMS Documentation

    EIA Publications

    2014-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Module. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code.

  20. Coal Market Module - NEMS Documentation

    EIA Publications

    2014-01-01

    Documents the objectives and the conceptual and methodological approach used in the development of the National Energy Modeling System's (NEMS) Coal Market Module (CMM) used to develop the Annual Energy Outlook 2014 (AEO2014). This report catalogues and describes the assumptions, methodology, estimation techniques, and source code of CMM's two submodules. These are the Coal Production Submodule (CPS) and the Coal Distribution Submodule (CDS).

  1. Methodological considerations in cost of illness studies on Alzheimer disease

    PubMed Central

    2012-01-01

    Cost-of-illness studies (COI) can identify and measure all the costs of a particular disease, including the direct, indirect and intangible dimensions. They are intended to provide estimates about the economic impact of costly disease. Alzheimer disease (AD) is a relevant example to review cost of illness studies because of its costliness. The aim of this study was to review relevant published cost studies of AD to analyze the methods used and to identify which dimensions had to be improved from a methodological perspective. First, we described the key points of cost study methodology. Secondly, cost studies relating to AD were systematically reviewed, focussing on an analysis of the different methods used. The methodological choices of the studies were analysed using an analytical grid which contains the main methodological items of COI studies. Seventeen articles were retained. Depending on the studies, annual total costs per patient vary from $2,935 to $52,954. The methods, data sources, and estimated cost categories in each study varied widely. The review showed that cost studies adopted different approaches to estimate costs of AD, reflecting a lack of consensus on the methodology of cost studies. To increase its credibility, closer agreement among researchers on the methodological principles of cost studies would be desirable. PMID:22963680

  2. Estimating Green Net National Product for Puerto Rico: An Economic Measure of Sustainability (Journal article)

    EPA Science Inventory

    This paper presents the data sources and methodology used to estimate Green Net National Product (GNNP), an economic metric of sustainability, for Puerto Rico. Using the change in GNNP as a one-sided test of weak sustainability (i.e., positive growth in GNNP is not enough to show...

  3. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by the instrumental errors in the measured concentrations and model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and followed in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and corresponding predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well known inversion techniques, called renormalization and least-square. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentrations measurements from the Idaho diffusion experiment in low wind stable conditions. With both the inversion techniques, a significant improvement is observed in the retrieval of source estimation after minimizing the representativity errors.

  4. Estimating Children's Soil/Dust Ingestion Rates through ...

    EPA Pesticide Factsheets

    Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. Objectives: The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose–response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. Methods: The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children’s blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Results: Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/du

  5. Spatial and temporal disaggregation of transport-related carbon dioxide emissions in Bogota - Colombia

    NASA Astrophysics Data System (ADS)

    Hernandez-Gonzalez, L. A.; Jimenez Pizarro, R.; Rojas, N. Y.

    2011-12-01

    As a result of rapid urbanization during the last 60 years, 75% of the Colombian population now lives in cities. Urban areas are net sources of greenhouse gases (GHG) and contribute significantly to national GHG emission inventories. The development of scientifically sound GHG mitigation strategies requires accurate GHG source and sink estimations. Disaggregated inventories are effective mitigation decision-making tools. The disaggregation process renders detailed information on the distribution of emissions by transport mode, and the resulting a priori emissions map allows for optimal definition of sites for GHG flux monitoring, either by eddy covariance or inverse modeling techniques. Fossil fuel use in transportation is a major source of carbon dioxide (CO2) in Bogota. We present estimates of CO2 emissions from road traffic in Bogota using the Intergovernmental Panel on Climate Change (IPCC) reference method, and a spatial and temporal disaggregation method. Aggregated CO2 emissions from mobile sources were estimated from monthly and annual fossil fuel (gasoline, diesel and compressed natural gas - CNG) consumption statistics, and estimations of bio-ethanol and bio-diesel use. Although bio-fuel CO2 emissions are considered balanced over annual (or multi-annual) agricultural cycles, we included them because CO2 generated by their combustion would be measurable by a net flux monitoring system. For the disaggregation methodology, we used information on Bogota's road network classification, mean travel speed and trip length for each vehicle category and road type. The CO2 emission factors were taken from recent in-road measurements for gasoline- and CNG-powered vehicles and were also estimated from COPERT IV. We estimated emission factors for diesel from surveys on average trip length and fuel consumption. Using the IPCC reference method, we estimate Bogota's total transport-related CO2 emissions for 2008 (reference year) at 4.8 Tg CO2. The disaggregation method estimate is 16% lower, mainly due to uncertainty in activity factors. With only 4% of Bogota's fleet, diesel use accounts for 42% of the CO2 emissions. The emissions are almost evenly shared between public (9% of the fleet) and private transport. Peak emissions occur at 8 a.m. and 6 p.m., with maximum values over a densely industrialized area in the northwest of Bogota. This investigation enabled estimation of the relative contributions of fuel and vehicle categories to spatially and temporally resolved CO2 emissions. Fuel consumption time series indicate a near-stabilization trend in energy consumption for transportation, which is unexpected given the sustained economic and vehicle-fleet growth in Bogota. The comparison of the disaggregation methodology with the IPCC methodology contributes to the analysis of possible error sources in activity factor estimates. This information is very useful for uncertainty estimation and for adjusting primary air pollutant emission inventories.
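
    As a rough illustration of the aggregated (IPCC reference method) step described above, the sketch below converts fuel statistics into total CO2 via carbon emission factors; all consumption figures and factors are placeholder values, not the paper's data.

```python
# Sketch of the IPCC reference approach for transport CO2 from fuel statistics:
# emissions = fuel sold x carbon emission factor x oxidation fraction x 44/12.
FUELS = {
    # fuel: (consumption in TJ/yr, carbon EF in t C / TJ) -- hypothetical values
    "gasoline": (60_000, 18.9),
    "diesel":   (45_000, 20.2),
    "cng":      (10_000, 15.3),
}
OXIDATION = 0.99          # fraction of carbon oxidised (assumed)
CO2_PER_C = 44.0 / 12.0   # molecular-weight ratio CO2/C

def reference_method_tg_co2(fuels=FUELS):
    total_t = sum(tj * ef * OXIDATION * CO2_PER_C for tj, ef in fuels.values())
    return total_t / 1e6   # tonnes -> Tg

print(f"{reference_method_tg_co2():.2f} Tg CO2/yr")
```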

  6. Estimation of the limit of detection in semiconductor gas sensors through linearized calibration models.

    PubMed

    Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago

    2018-07-12

    The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curves characteristic of semiconductor gas sensor technologies such as metal oxide (MOX) sensors, gasFETs or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation to non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity. Copyright © 2018 Elsevier B.V. All rights reserved.
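
    One common way to operationalize an LOD after linearizing a power-law MOX response is sketched below; the synthetic calibration data, blank statistics and the 3.3-sigma decision rule are generic assumptions, not the paper's exact procedure.

```python
# Sketch: linearize a power-law MOX calibration R = a * C**b with a log-log fit,
# then take the LOD as the concentration whose predicted response exceeds the
# blank response by 3.3 blank standard deviations.  All data are synthetic.
import numpy as np

conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])   # ppm, calibration points
resp = 0.8 * conc ** 0.6                              # synthetic sensor response
resp *= 1 + 0.03 * np.random.default_rng(1).standard_normal(conc.size)

b, log_a = np.polyfit(np.log(conc), np.log(resp), 1)  # linearized calibration fit
a = np.exp(log_a)

blank_mean, blank_sd = 0.05, 0.02                     # hypothetical blank statistics
critical_resp = blank_mean + 3.3 * blank_sd
lod = (critical_resp / a) ** (1.0 / b)                # invert the calibration curve
print(f"LOD ~ {lod:.3f} ppm")
```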

  7. International Natural Gas Model 2011, Model Documentation Report

    EIA Publications

    2013-01-01

    This report documents the objectives, analytical approach and development of the International Natural Gas Model (INGM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  8. Sediment source fingerprinting as an aid to catchment management: A review of the current state of knowledge and a methodological decision-tree for end-users

    USGS Publications Warehouse

    Collins, A.L.; Pulley, S.; Foster, I.D.L.; Gellis, A.; Porto, P.; Horowitz, A.J.

    2017-01-01

    The growing awareness of the environmental significance of fine-grained sediment fluxes through catchment systems continues to underscore the need for reliable information on the principal sources of this material. Source estimates are difficult to obtain using traditional monitoring techniques, but sediment source fingerprinting or tracing procedures have emerged as a potentially valuable alternative. Despite the rapidly increasing numbers of studies reporting the use of sediment source fingerprinting, several key challenges and uncertainties continue to hamper consensus among the international scientific community on key components of the existing methodological procedures. Accordingly, this contribution reviews and presents recent developments for several key aspects of fingerprinting, namely: sediment source classification, catchment source and target sediment sampling, tracer selection, grain size issues, tracer conservatism, source apportionment modelling, and assessment of source predictions using artificial mixtures. Finally, a decision-tree representing the current state of knowledge is presented, to guide end-users in applying the fingerprinting approach.
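
    The source apportionment modelling step referred to above is, in many fingerprinting studies, a constrained un-mixing of tracer signatures; the sketch below, with hypothetical tracer values, shows one common least-squares formulation.

```python
# Sketch: find non-negative source proportions, summing to one, that best
# reproduce the tracer signature of the target sediment (hypothetical values).
import numpy as np
from scipy.optimize import minimize

# rows = sources (e.g. topsoil, channel banks, road dust); columns = tracers
sources = np.array([[12.0, 0.8, 310.0],
                    [ 4.0, 2.1, 150.0],
                    [25.0, 0.3, 520.0]])
mixture = np.array([10.0, 1.2, 300.0])          # target sediment signature

def misfit(p):
    # relative squared deviation between predicted and observed tracer values
    return np.sum(((mixture - p @ sources) / mixture) ** 2)

n = sources.shape[0]
res = minimize(misfit, x0=np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
print("estimated source proportions:", np.round(res.x, 3))
```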

  9. Residential Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Model Documentation - Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code.

  10. Geothermal resources and reserves in Indonesia: an updated revision

    NASA Astrophysics Data System (ADS)

    Fauzi, A.

    2015-02-01

    More than 300 high- to low-enthalpy geothermal sources have been identified throughout Indonesia. From the early 1980s until the late 1990s, the geothermal potential for power production in Indonesia was estimated to be about 20 000 MWe. The most recent estimate exceeds 29 000 MWe derived from the 300 sites (Geological Agency, December 2013). This resource estimate has been obtained by adding all of the estimated geothermal potential resources and reserves classified as "speculative", "hypothetical", "possible", "probable", and "proven" from all sites where such information is available. However, this approach to estimating the geothermal potential is flawed because it includes double counting of some reserve estimates as resource estimates, thus giving an inflated figure for the total national geothermal potential. This paper describes an updated revision of the geothermal resource estimate in Indonesia using a more realistic methodology. The methodology proposes that the preliminary "Speculative Resource" category should cover the full potential of a geothermal area and form the base reference figure for the resource of the area. Further investigation of this resource may improve the level of confidence of the category of reserves but will not necessarily increase the figure of the "preliminary resource estimate" as a whole, unless the result of the investigation is higher. A previous paper (Fauzi, 2013a, b) redefined and revised the geothermal resource estimate for Indonesia. The methodology, adopted from Fauzi (2013a, b), will be fully described in this paper. As a result of using the revised methodology, the potential geothermal resources and reserves for Indonesia are estimated to be about 24 000 MWe, some 5000 MWe less than the 2013 national estimate.

  11. Quantifying automobile refinishing VOC air emissions - a methodology with estimates and forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, S.P.; Rubick, C.

    1996-12-31

    Automobile refinishing coatings (referred to as paints), paint thinners, reducers, hardeners, catalysts, and cleanup solvents used during their application, contain volatile organic compounds (VOCs) which are precursors to ground level ozone formation. Some of these painting compounds create hazardous air pollutants (HAPs) which are toxic. This paper documents the methodology, data sets, and the results of surveys (conducted in the fall of 1995) used to develop revised per capita emissions factors for estimating and forecasting the VOC air emissions from the area source category of automobile refinishing. Emissions estimates, forecasts, trends, and reasons for these trends are presented. Future emissions inventory (EI) challenges are addressed in light of data availability and information networks.

  12. World Energy Projection System Plus Model Documentation: Coal Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Coal Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  13. World Energy Projection System Plus Model Documentation: Transportation Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) International Transportation model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  14. World Energy Projection System Plus Model Documentation: Residential Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Residential Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  15. World Energy Projection System Plus Model Documentation: Refinery Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  16. World Energy Projection System Plus Model Documentation: Main Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Main Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  17. Transportation Sector Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model.

  18. World Energy Projection System Plus Model Documentation: Electricity Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Electricity Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  19. Environmental risk assessment of water quality in harbor areas: a new methodology applied to European ports.

    PubMed

    Gómez, Aina G; Ondiviela, Bárbara; Puente, Araceli; Juanes, José A

    2015-05-15

    This work presents a standard and unified procedure for assessment of environmental risks at the contaminant source level in port aquatic systems. Using this method, port managers and local authorities will be able to hierarchically classify environmental hazards and proceed with the most suitable management actions. This procedure combines rigorously selected parameters and indicators to estimate the environmental risk of each contaminant source based on its probability, consequences and vulnerability. The spatio-temporal variability of multiple stressors (agents) and receptors (endpoints) is taken into account to provide accurate estimations for application of precisely defined measures. The developed methodology is tested on a wide range of different scenarios via application in six European ports. The validation process confirms its usefulness, versatility and adaptability as a management tool for port water quality in Europe and worldwide. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Improved photo response non-uniformity (PRNU) based source camera identification.

    PubMed

    Cooper, Alan J

    2013-03-10

    The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, PRNU estimation methodologies have centred on a wavelet-based de-noising approach. Resultant filtering artefacts, in combination with image and JPEG contamination, act to reduce the quality of the PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which, at its base level, may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two-stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
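
    For orientation, the sketch below shows the bare bones of PRNU-based matching: residual extraction with a plain spatial-domain median filter, averaging residuals into a reference pattern, and normalized correlation against a query residual. The imaging model, filter size and data are synthetic assumptions, and the paper's two-stage enhancement is not reproduced.

```python
# Sketch of PRNU-based source camera matching with a simple spatial-domain filter.
import numpy as np
from scipy.ndimage import median_filter

def residual(img):
    """Noise residual = image minus a spatially filtered (denoised) version."""
    return img - median_filter(img, size=3)

rng = np.random.default_rng(2)
prnu = 0.02 * rng.standard_normal((64, 64))            # hypothetical sensor pattern

def capture(scene):                                     # toy imaging model
    return scene * (1 + prnu) + 0.01 * rng.standard_normal(scene.shape)

# reference pattern: average the residuals of several images from the camera
flats = [capture(np.full((64, 64), 0.5)) for _ in range(20)]
reference = np.mean([residual(f) for f in flats], axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two residuals."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

query = residual(capture(rng.random((64, 64))))         # image under question
print("correlation with reference:", round(ncc(query, reference), 3))
```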

  1. Impact of methodological "shortcuts" in conducting public health surveys: Results from a vaccination coverage survey

    PubMed Central

    Luman, Elizabeth T; Sablan, Mariana; Stokley, Shannon; McCauley, Mary M; Shaw, Kate M

    2008-01-01

    Background Lack of methodological rigor can cause survey error, leading to biased results and suboptimal public health response. This study focused on the potential impact of 3 methodological "shortcuts" pertaining to field surveys: relying on a single source for critical data, failing to repeatedly visit households to improve response rates, and excluding remote areas. Methods In a vaccination coverage survey of young children conducted in the Commonwealth of the Northern Mariana Islands in July 2005, 3 sources of vaccination information were used, multiple follow-up visits were made, and all inhabited areas were included in the sampling frame. Results are calculated with and without these strategies. Results Most children had at least 2 sources of data; vaccination coverage estimated from any single source was substantially lower than from all sources combined. Eligibility was ascertained for 79% of households after the initial visit and for 94% of households after follow-up visits; vaccination coverage rates were similar with and without follow-up. Coverage among children on remote islands differed substantially from that of their counterparts on the main island indicating a programmatic need for locality-specific information; excluding remote islands from the survey would have had little effect on overall estimates due to small populations and divergent results. Conclusion Strategies to reduce sources of survey error should be maximized in public health surveys. The impact of the 3 strategies illustrated here will vary depending on the primary outcomes of interest and local situations. Survey limitations such as potential for error should be well-documented, and the likely direction and magnitude of bias should be considered. PMID:18371195

  2. Summary of Analysis of Sources of Forecasting Errors in BP 1500 Requirements Estimating Process and Description of Compensating Methodology.

    DTIC Science & Technology

    1982-04-25

    the Directorate of Programs (AFLC/XRP), and the Directorate of Logistics Plans and Programs, Aircraft/Missiles Program Division of the Air Staff...OWRM). The P-18 Exhibit/Budget Estimate Submission (BES), a document developed by AFLC/LOR, is reviewed by AFLC/XRP, and is presented to HQ USAF

  3. Global Impact Estimation of ISO 50001 Energy Management System for Industrial and Service Sectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghajanzadeh, Arian; Therkelsen, Peter L.; Rao, Prakash

    A methodology has been developed to determine the impacts of the ISO 50001 Energy Management System (EnMS) at a regional or country level. The impacts of ISO 50001 EnMS include energy, CO2 emission, and cost savings. This internationally recognized and transparent methodology has been embodied in a user-friendly Microsoft Excel® based tool called the ISO 50001 Impact Estimator Tool (IET 50001). However, the tool inputs are critical for obtaining accurate and defensible results. This report is intended to document the data sources used and assumptions made to calculate the global impact of ISO 50001 EnMS.

  4. Using LiF:Mg,Cu,P TLDs to estimate the absorbed dose to water in liquid water around an 192Ir brachytherapy source.

    PubMed

    Lucas, P Avilés; Aubineau-Lanièce, I; Lourenço, V; Vermesse, D; Cutarella, D

    2014-01-01

    The absorbed dose to water is the fundamental reference quantity for brachytherapy treatment planning systems, and thermoluminescence dosimeters (TLDs) have been recognized as the most validated detectors for measurement of this dosimetric descriptor. The detector response over a wide energy spectrum, such as that of an (192)Ir brachytherapy source, as well as the specific measurement medium surrounding the TLD, needs to be accounted for when estimating the absorbed dose. This paper develops a methodology based on highly sensitive LiF:Mg,Cu,P TLDs to directly estimate the absorbed dose to water in liquid water around a high dose rate (192)Ir brachytherapy source. Different experimental designs in liquid water and air were constructed to study the response of LiF:Mg,Cu,P TLDs when irradiated in several standard photon beams of the LNE-LNHB (French national metrology laboratory for ionizing radiation). Measurement strategies and Monte Carlo techniques were developed to calibrate the LiF:Mg,Cu,P detectors in the energy interval characteristic of that found when TLDs are immersed in water around an (192)Ir source. Finally, an experimental system was designed to irradiate TLDs at different angles, between 1 and 11 cm away from an (192)Ir source in liquid water. Monte Carlo simulations were performed to correct measured results to provide estimates of the absorbed dose to water in water around the (192)Ir source. The dependence of the LiF:Mg,Cu,P dose response on the linear energy transfer of secondary electrons followed the same variations as published results. The calibration strategy, which used TLDs in air exposed to a standard N-250 ISO x-ray beam and TLDs in water irradiated with a standard (137)Cs beam, provided an estimated mean uncertainty of 2.8% (k = 1) in the TLD calibration coefficient for irradiations by the (192)Ir source in water. The 3D TLD measurements performed in liquid water were obtained with a maximum uncertainty of 11% (k = 1), found at 1 cm from the source. Radial dose values in water were compared against published results of the American Association of Physicists in Medicine and the European Society for Radiotherapy and Oncology, and no significant differences (maximum value of 3.1%) were found within uncertainties except for one position at 9 cm (5.8%). At this location the background contribution relative to the TLD signal is relatively small and an unexpected experimental fluctuation in the background estimate may have caused such a large discrepancy. This paper shows that reliable measurements with TLDs in complex energy spectra require a study of the detector dose response as a function of radiation quality, as well as specific calibration methodologies that accurately model the experimental conditions in which the detectors will be used. The authors have developed and studied a method with highly sensitive TLDs and contributed to its validation by comparison with results from the literature. This methodology can be used to provide direct estimates of the absorbed dose rate in water for irradiations with HDR (192)Ir brachytherapy sources.

  5. A Comparative Study of Three Spatial Interpolation Methodologies for the Analysis of Air Pollution Concentrations in Athens, Greece

    NASA Astrophysics Data System (ADS)

    Deligiorgi, Despina; Philippopoulos, Kostas; Thanou, Lelouda; Karvounis, Georgios

    2010-01-01

    Spatial interpolation in air pollution modeling is the procedure for estimating ambient air pollution concentrations at unmonitored locations based on available observations. The selection of the appropriate methodology is based on the nature and the quality of the interpolated data. In this paper, an assessment of three widely used interpolation methodologies is undertaken in order to estimate the errors involved. For this purpose, air quality data from January 2001 to December 2005, from a network of seventeen monitoring stations operating in the greater area of Athens in Greece, are used. The Nearest Neighbor and the Linear schemes were applied to the mean hourly observations, while the Inverse Distance Weighted (IDW) method was applied to the mean monthly concentrations. The discrepancies between the estimated and measured values are assessed for every station and pollutant, using the correlation coefficient, scatter diagrams and the statistical residuals. The capability of the methods to estimate air quality data in an area with multiple land-use types and pollution sources, such as Athens, is discussed.
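
    As an illustration of the third scheme, a minimal inverse-distance-weighting sketch is given below; the station coordinates, concentrations and the power parameter p = 2 are illustrative assumptions rather than the study's settings.

```python
# Sketch of inverse-distance-weighted (IDW) interpolation of a monthly mean
# concentration at an unmonitored location from surrounding stations.
import numpy as np

def idw(stations_xy, values, target_xy, p=2.0):
    d = np.linalg.norm(stations_xy - target_xy, axis=1)
    if np.any(d < 1e-9):                     # target coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d ** p                         # weights decay with distance
    return float(np.sum(w * values) / np.sum(w))

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [7.0, 7.0]])  # km
no2 = np.array([38.0, 52.0, 44.0, 61.0])                                # ug/m3
print(round(idw(stations, no2, np.array([4.0, 3.0])), 1))
```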

  6. World Energy Projection System Plus Model Documentation: Greenhouse Gases Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS ) Greenhouse Gases Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  7. World Energy Projection System Plus Model Documentation: Natural Gas Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS ) Natural Gas Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  8. World Energy Projection System Plus Model Documentation: District Heat Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS ) District Heat Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  9. World Energy Projection System Plus Model Documentation: Industrial Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS ) World Industrial Model (WIM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  10. Black Carbon Emissions from Diesel Sources in the Largest Arctic City: Case Study of Murmansk

    NASA Astrophysics Data System (ADS)

    Evans, M.; Kholod, N.; Malyshev, V.; Tretyakova, S.; Gusev, E.; Yu, S.; Barinov, A.

    2014-12-01

    Russia has very little data on its black carbon (BC) emissions. Because Russia makes up such a large share of the Arctic, understanding Russian emissions will improve our understanding of overall BC levels, BC in the Arctic and the link between BC and climate change. This paper provides a detailed, bottom-up inventory of BC emissions from diesel sources in Murmansk, Russia, along with uncertainty estimates associated with these emissions. The research team developed a detailed data collection methodology. The methodology involves assessing the vehicle fleet and activity in Murmansk using traffic, parking lot and driver surveys combined with an existing database from a vehicle inspection station and statistical data. The team also assessed the most appropriate emission factors, drawing from both Russian and international inventory methodologies. The researchers also compared fuel consumption using statistical data and bottom-up fuel calculations. They then calculated emissions for on-road transportation, off-road transportation (including mines), diesel generators, fishing and other sources. The article also provides a preliminary assessment of Russia-wide emissions of black carbon from diesel sources.

  11. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source location estimates by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. It also has the ability to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations and to adjust the wind to provide a better match between the hazard prediction and the observations.

  12. A Study about Kalman Filters Applied to Embedded Sensors

    PubMed Central

    Valade, Aurélien; Acco, Pascal; Grabolosa, Pierre; Fourniols, Jean-Yves

    2017-01-01

    Over the last decade, smart sensors have grown in complexity and can now handle multiple measurement sources. This work establishes a methodology for achieving better estimates of physical values by processing raw measurements within a sensor using multi-physical models and Kalman filters for data fusion. Since production cost and power consumption are driving constraints, this methodology focuses on algorithmic complexity while meeting real-time constraints and improving both precision and reliability despite the limitations of low-power processors. Consequently, the processing time available for other tasks is maximized. The known problem of estimating a 2D orientation using an inertial measurement unit with automatic gyroscope bias compensation is used to illustrate the proposed methodology, applied to a low-power STM32L053 microcontroller. This application shows promising results, with a processing time of 1.18 ms at 32 MHz and a 3.8% CPU usage at a 26 Hz measurement and estimation rate. PMID:29206187
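
    The orientation problem mentioned above is often handled with a small Kalman filter over an (angle, gyro-bias) state; the sketch below, with assumed noise covariances and made-up sample readings, is one such minimal formulation rather than the authors' multi-physical model.

```python
# Sketch of a two-state Kalman filter: gyro rates drive the prediction of a tilt
# angle and gyro bias, an accelerometer-derived angle provides the correction.
import numpy as np

dt = 1.0 / 26.0                                  # 26 Hz, as in the abstract
F = np.array([[1.0, -dt], [0.0, 1.0]])           # state transition (angle, bias)
B = np.array([dt, 0.0])                          # gyro rate input matrix
H = np.array([[1.0, 0.0]])                       # accelerometer observes the angle
Q = np.diag([1e-5, 1e-7])                        # process noise (assumed)
R = np.array([[1e-2]])                           # measurement noise (assumed)

x = np.zeros(2)                                  # [angle (rad), gyro bias (rad/s)]
P = np.eye(2)

def kalman_step(x, P, gyro_rate, accel_angle):
    # Predict using the gyroscope reading
    x = F @ x + B * gyro_rate
    P = F @ P @ F.T + Q
    # Update with the accelerometer-derived angle
    y = accel_angle - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for gyro_rate, accel_angle in [(0.10, 0.004), (0.11, 0.008), (0.09, 0.012)]:
    x, P = kalman_step(x, P, gyro_rate, accel_angle)
print("angle (rad):", round(x[0], 4), " gyro bias (rad/s):", round(x[1], 5))
```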

  13. Sediment source fingerprinting as an aid to catchment management: A review of the current state of knowledge and a methodological decision-tree for end-users.

    PubMed

    Collins, A L; Pulley, S; Foster, I D L; Gellis, A; Porto, P; Horowitz, A J

    2017-06-01

    The growing awareness of the environmental significance of fine-grained sediment fluxes through catchment systems continues to underscore the need for reliable information on the principal sources of this material. Source estimates are difficult to obtain using traditional monitoring techniques, but sediment source fingerprinting or tracing procedures have emerged as a potentially valuable alternative. Despite the rapidly increasing numbers of studies reporting the use of sediment source fingerprinting, several key challenges and uncertainties continue to hamper consensus among the international scientific community on key components of the existing methodological procedures. Accordingly, this contribution reviews and presents recent developments for several key aspects of fingerprinting, namely: sediment source classification, catchment source and target sediment sampling, tracer selection, grain size issues, tracer conservatism, source apportionment modelling, and assessment of source predictions using artificial mixtures. Finally, a decision-tree representing the current state of knowledge is presented, to guide end-users in applying the fingerprinting approach. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Methodology for building confidence measures

    NASA Astrophysics Data System (ADS)

    Bramson, Aaron L.

    2004-04-01

    This paper presents a generalized methodology for propagating known or estimated levels of individual source document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on establishing links among the truth values of the inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.
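
    As one plausible (not the paper's) realization of step (i) and the importance weighting in step (iii), the sketch below combines the reliabilities of independent sources with a noisy-OR rule and weights elements by importance; the element names, reliabilities and weights are invented for illustration.

```python
# Sketch: combine per-source truth reliabilities for each element (noisy-OR),
# then weight elements by importance to get an overall confidence score.
def combine_independent(reliabilities):
    """P(element true) when any one of several independent sources suffices."""
    p_all_wrong = 1.0
    for r in reliabilities:
        p_all_wrong *= (1.0 - r)
    return 1.0 - p_all_wrong

elements = {
    # element: (per-source reliabilities, importance weight) -- hypothetical values
    "location": ([0.7, 0.6], 0.5),
    "identity": ([0.8], 0.3),
    "intent":   ([0.4, 0.5, 0.3], 0.2),
}

confidence = sum(w * combine_independent(rs) for rs, w in elements.values())
print(f"overall confidence: {confidence:.2f}")
```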

  15. Analysis of travel-time reliability for freight corridors connecting the Pacific Northwest.

    DOT National Transportation Integrated Search

    2012-11-01

    A new methodology and algorithms were developed to combine diverse data sources and to estimate the impacts of recurrent and non-recurrent congestion on freight movement reliability, delays, costs, and emissions. The results suggest that tra...

  16. Wave-Based Algorithms and Bounds for Target Support Estimation

    DTIC Science & Technology

    2015-05-15

    vector electromagnetic formalism in [5]. This theory leads to three main variants of the optical theorem detector, in particular, three alternative...further expands the applicability for transient pulse change detection of arbitrary nonlinear-media and time-varying targets [9]. This report... electromagnetic methods a new methodology to estimate the minimum convex source region and the (possibly nonconvex) support of a scattering target from knowledge of

  17. Low-Temperature Hydrothermal Resource Potential Estimate

    DOE Data Explorer

    Katherine Young

    2016-06-30

    Compilation of data (spreadsheet and shapefiles) for several low-temperature resource types, including isolated springs and wells, delineated area convection systems, sedimentary basins and coastal plains sedimentary systems. For each system, we include estimates of the accessible resource base, mean extractable resource and beneficial heat. Data compiled from USGS and other sources. The paper (submitted to GRC 2016) describing the methodology and analysis is also included.

  18. Preliminary Results of the first European Source Apportionment intercomparison for Receptor and Chemical Transport Models

    NASA Astrophysics Data System (ADS)

    Belis, Claudio A.; Pernigotti, Denise; Pirovano, Guido

    2017-04-01

    Source Apportionment (SA) is the identification of ambient air pollution sources and the quantification of their contribution to pollution levels. This task can be accomplished using different approaches: chemical transport models and receptor models. Receptor models are derived from measurements and are therefore considered a reference for primary sources at urban background levels. Chemical transport models provide better estimates of secondary (inorganic) pollutants and can provide gridded results with high time resolution. Assessing the performance of SA model results is essential to guarantee reliable information on source contributions to be used for reporting to the Commission and in the development of pollution abatement strategies. This is the first intercomparison designed to test both receptor-oriented models (receptor models) and chemical transport models (source-oriented models) using a comprehensive method based on model quality indicators and pre-established criteria. The target pollutant of this exercise, organised in the frame of FAIRMODE WG 3, is PM10. Both receptor models and chemical transport models perform well when evaluated against their respective references. Both types of models demonstrate quite satisfactory capabilities to estimate yearly source contributions, while the estimation of source contributions at the daily level (time series) is more critical. Chemical transport models showed a tendency to underestimate the contribution of some single sources when compared to receptor models. For receptor models the most critical source category is industry, probably because of the variety of single sources with different characteristics that belong to this category. Dust is the most problematic source for chemical transport models, likely due to the poor information about this kind of source in the emission inventories, particularly concerning road dust re-suspension, and consequently the little detail about the chemical components of this source used in the models. The sensitivity tests show that chemical transport models perform better when using a detailed set of sources (14) than a simplified one (only 8). It was also observed that enhanced vertical profiling can improve the estimation of specific sources, such as industry, under complex meteorological conditions, and that insufficient spatial resolution in urban areas can limit the capability of models to estimate the contribution of diffuse primary sources (e.g. traffic). Both families of models identify traffic and biomass burning as the first- and second-largest contributors to elemental carbon, respectively. The results of this study demonstrate that the source apportionment assessment methodology developed by the JRC is applicable to any kind of SA model. The same methodology is implemented in the on-line DeltaSA tool to support source apportionment model evaluation (http://source-apportionment.jrc.ec.europa.eu/).

  19. Impact and Estimation of Balance Coordinate System Rotations and Translations in Wind-Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Toro, Kenneth G.; Parker, Peter A.

    2017-01-01

    Discrepancies between the model and balance coordinate systems lead to biases in the aerodynamic measurements during wind-tunnel testing. The reference coordinate system, relative to the calibration coordinate system, at which the forces and moments are resolved is crucial to the overall accuracy of force measurements. This paper discusses sources of discrepancies and estimates of coordinate system rotation and translation due to machining and assembly differences. A methodology for numerically estimating the coordinate system biases will be discussed and developed. Two case studies are presented using this methodology to estimate the model alignment. Examples span from angle measurement system shifts on the calibration system to discrepancies in actual wind-tunnel data. The results from these case studies will help aerodynamic researchers and force balance engineers better understand and identify potential differences in calibration systems due to coordinate system rotation and translation.

  20. A comparison of the poverty impact of transfers, taxes and market income across five OECD countries.

    PubMed

    Bibi, Sami; Duclos, Jean-Yves

    2010-01-01

    This paper compares the poverty reduction impact of income sources, taxes and transfers across five OECD countries. Since the estimate of that impact can depend on the order in which the various income sources are introduced into the analysis, the estimation is carried out using the Shapley value. Estimates of the poverty reduction impact are presented in normalized and unnormalized form, in order to take into account both the total and the per-dollar impacts. The methodology is applied to data from the Luxembourg Income Study database.
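
    A minimal sketch of the Shapley decomposition idea is given below: each income component's contribution to the poverty headcount is its marginal effect averaged over all orders of introduction. The household incomes, components and poverty line are made-up illustrative data, not the Luxembourg Income Study.

```python
# Sketch of a Shapley-value decomposition of the poverty headcount ratio across
# income components (market income, transfers, taxes).
from itertools import permutations
from math import factorial
import numpy as np

market = np.array([300.0, 800.0, 150.0, 1200.0, 90.0])     # hypothetical per-capita incomes
transfers = np.array([250.0, 0.0, 300.0, 0.0, 280.0])
taxes = np.array([-20.0, -120.0, -10.0, -250.0, -5.0])
components = {"market": market, "transfers": transfers, "taxes": taxes}
POVERTY_LINE = 500.0

def headcount(active):
    """Poverty headcount when only the listed income components are counted."""
    total = sum((components[n] for n in active), np.zeros(5))
    return float(np.mean(total < POVERTY_LINE))

names = list(components)
n_orders = factorial(len(names))
shapley = {n: 0.0 for n in names}
for order in permutations(names):
    included = []
    for name in order:
        before = headcount(included)
        included.append(name)
        shapley[name] += (headcount(included) - before) / n_orders

print(shapley)  # negative contributions indicate poverty-reducing components
```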

  1. A comparison of three methods for estimating the requirements for medical specialists: the case of otolaryngologists.

    PubMed Central

    Anderson, G F; Han, K C; Miller, R H; Johns, M E

    1997-01-01

    OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613

  2. Estimating Children's Soil/Dust Ingestion Rates through Retrospective Analyses of Blood Lead Biomonitoring from the Bunker Hill Superfund Site in Idaho.

    PubMed

    von Lindern, Ian; Spalinger, Susan; Stifelman, Marc L; Stanek, Lindsay Wichers; Bartrem, Casey

    2016-09-01

    Soil/dust ingestion rates are important variables in assessing children's health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose-response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children's blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/dust ingestion rates are 86-94 mg/day for 6-month- to 2-year-old children and 51-67 mg/day for 2- to 9-year-old children. Soil/dust ingestion rate estimates for 1- to 9-year-old children at the BHSS are lower than those commonly used in human health risk assessment. A substantial component of children's exposure comes from sources beyond the immediate home environment. von Lindern I, Spalinger S, Stifelman ML, Stanek LW, Bartrem C. 2016. Estimating children's soil/dust ingestion rates through retrospective analyses of blood lead biomonitoring from the Bunker Hill Superfund Site in Idaho. Environ Health Perspect 124:1462-1470; http://dx.doi.org/10.1289/ehp.1510144.
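
    The intake arithmetic linking an assumed ingestion rate and the measured bioavailabilities to an approximate blood lead increment can be sketched as below; the concentrations, soil/dust partition and biokinetic slope factor are illustrative assumptions, and this is not a substitute for the IEUBK model used in the study.

```python
# Sketch: daily bioavailable lead intake from soil and house dust converted to an
# approximate blood lead increment with an age-specific biokinetic slope factor.
def blood_lead_increment(ingestion_mg_day, soil_fraction,
                         soil_pb_ppm, dust_pb_ppm,
                         soil_bioavail=0.33, dust_bioavail=0.28,
                         slope_ug_dl_per_ug_day=0.16):
    # intakes in ug Pb/day (ppm = ug Pb per mg of soil/dust divided by 1000)
    soil_intake = ingestion_mg_day * soil_fraction * soil_pb_ppm / 1000.0
    dust_intake = ingestion_mg_day * (1 - soil_fraction) * dust_pb_ppm / 1000.0
    absorbed = soil_intake * soil_bioavail + dust_intake * dust_bioavail
    return absorbed * slope_ug_dl_per_ug_day            # ug/dL increment

# e.g. a 90 mg/day ingestion rate with an assumed 45% soil / 55% dust partition
print(f"{blood_lead_increment(90, 0.45, soil_pb_ppm=1500, dust_pb_ppm=800):.1f} ug/dL")
```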

  3. A methodological framework for assessing agreement between cost-effectiveness outcomes estimated using alternative sources of data on treatment costs and effects for trial-based economic evaluations.

    PubMed

    Achana, Felix; Petrou, Stavros; Khan, Kamran; Gaye, Amadou; Modi, Neena

    2018-01-01

    A new methodological framework for assessing agreement between cost-effectiveness endpoints generated using alternative sources of data on treatment costs and effects for trial-based economic evaluations is proposed. The framework can be used to validate cost-effectiveness endpoints generated from routine data sources when comparable data are available directly from trial case report forms or from another source. We illustrate application of the framework using data from a recent trial-based economic evaluation of the probiotic Bifidobacterium breve strain BBG administered to babies born at less than 31 weeks of gestation. Cost-effectiveness endpoints are compared using two sources of information: trial case report forms and data extracted from the National Neonatal Research Database (NNRD), a clinical database created through the collaborative efforts of UK neonatal services. Focusing on mean incremental net benefits at £30,000 per episode of sepsis averted, the study revealed no evidence of discrepancy between the data sources (two-sided p values >0.4), low probability estimates of miscoverage (ranging from 0.039 to 0.060) and concordance correlation coefficients greater than 0.86. We conclude that the NNRD could potentially serve as a reliable source of data for future trial-based economic evaluations of neonatal interventions. We also discuss the potential implications of the increasing opportunity to utilize routinely available data for the conduct of trial-based economic evaluations.

  4. Macroeconomic Activity Module - NEMS Documentation

    EIA Publications

    2016-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Macroeconomic Activity Module (MAM) used to develop the Annual Energy Outlook for 2016 (AEO2016). The report catalogues and describes the module assumptions, computational methodology, parameter estimation techniques, and mainframe source code.

  5. FRAGS: estimation of coding sequence substitution rates from fragmentary data

    PubMed Central

    Swart, Estienne C; Hide, Winston A; Seoighe, Cathal

    2004-01-01

    Background Rates of substitution in protein-coding sequences can provide important insights into evolutionary processes that are of biomedical and theoretical interest. Increased availability of coding sequence data has enabled researchers to estimate more accurately the coding sequence divergence of pairs of organisms. However the use of different data sources, alignment protocols and methods to estimate substitution rates leads to widely varying estimates of key parameters that define the coding sequence divergence of orthologous genes. Although complete genome sequence data are not available for all organisms, fragmentary sequence data can provide accurate estimates of substitution rates provided that an appropriate and consistent methodology is used and that differences in the estimates obtainable from different data sources are taken into account. Results We have developed FRAGS, an application framework that uses existing, freely available software components to construct in-frame alignments and estimate coding substitution rates from fragmentary sequence data. Coding sequence substitution estimates for human and chimpanzee sequences, generated by FRAGS, reveal that methodological differences can give rise to significantly different estimates of important substitution parameters. The estimated substitution rates were also used to infer upper-bounds on the amount of sequencing error in the datasets that we have analysed. Conclusion We have developed a system that performs robust estimation of substitution rates for orthologous sequences from a pair of organisms. Our system can be used when fragmentary genomic or transcript data is available from one of the organisms and the other is a completely sequenced genome within the Ensembl database. As well as estimating substitution statistics our system enables the user to manage and query alignment and substitution data. PMID:15005802

  6. Disdrometer-based C-Band Radar Quantitative Precipitation Estimation (QPE) in a highly complex terrain region in tropical Colombia.

    NASA Astrophysics Data System (ADS)

    Sepúlveda, J.; Hoyos Ortiz, C. D.

    2017-12-01

    An adequate quantification of precipitation over land is critical for many societal applications including agriculture, hydroelectricity generation, water supply, and risk management associated with extreme events. Rain gauges, the traditional method for precipitation estimation, are excellent for estimating the volume of liquid water during a particular precipitation event, but they cannot fully capture the high spatial variability of the phenomenon, which is a requirement for almost all practical applications. On the other hand, the weather radar, an active remote sensing instrument, provides a proxy for rainfall with fine spatial resolution and adequate temporal sampling; however, it does not measure surface precipitation. In order to fully exploit the capabilities of the weather radar, it is necessary to develop quantitative precipitation estimation (QPE) techniques combining radar information with in-situ measurements. Different QPE methodologies are explored and adapted to local observations in a highly complex terrain region in tropical Colombia using a C-Band radar and a relatively dense network of rain gauges and disdrometers. One important result is that the expressions reported in the literature for extratropical locations are not representative of the conditions found in the tropical region studied. In addition to reproducing the state-of-the-art techniques, a new multi-stage methodology based on radar-derived variables and disdrometer data is proposed in order to achieve the best QPE possible. The main motivation for this new methodology is that most traditional QPE methods do not directly take into account the different uncertainty sources involved in the process. The main advantage of the multi-stage model compared to traditional models is that it allows assessing and quantifying the uncertainty in the surface rain rate estimation. The sub-hourly rainfall estimates obtained with the multi-stage methodology are realistic compared to observed data, in spite of the many sources of uncertainty including the sampling volume, the different physical principles of the sensors, the incomplete understanding of the microphysics of precipitation and, most importantly, the rapidly varying droplet size distribution.
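
    One building block of disdrometer-based QPE, fitting a power-law Z-R relation to disdrometer data and inverting it to turn radar reflectivity into rain rate, is sketched below with synthetic numbers; the study's multi-stage uncertainty treatment is not reproduced.

```python
# Sketch: fit Z = a * R**b to collocated disdrometer rain rates and
# reflectivities, then invert it to convert radar dBZ into surface rain rate.
import numpy as np

rng = np.random.default_rng(3)
rain_rate = np.array([0.5, 2.0, 5.0, 10.0, 25.0, 50.0])       # mm/h (disdrometer)
z_linear = 210.0 * rain_rate ** 1.55                           # synthetic "truth"
z_linear *= 1 + 0.1 * rng.standard_normal(rain_rate.size)

b, log_a = np.polyfit(np.log(rain_rate), np.log(z_linear), 1)  # log-log fit
a = np.exp(log_a)

def qpe(dbz, a=a, b=b):
    """Convert radar reflectivity in dBZ to rain rate in mm/h."""
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)

print(f"fitted Z-R: Z = {a:.0f} * R^{b:.2f};  35 dBZ -> {qpe(35.0):.1f} mm/h")
```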

  7. Verifying the UK agricultural N2O emission inventory with tall tower measurements

    NASA Astrophysics Data System (ADS)

    Carnell, E. J.; Meneguz, E.; Skiba, U. M.; Misselbrook, T. H.; Cardenas, L. M.; Arnold, T.; Manning, A.; Dragosits, U.

    2016-12-01

    Nitrous oxide (N2O) is a key greenhouse gas (GHG), with a global warming potential 300 times greater than that of CO2. N2O is emitted from a variety of sources, predominantly from agriculture. Annual UK emission estimates are reported, to comply with government commitments under the United Nations Framework Convention on Climate Change (UNFCCC). The UK N2O inventory follows internationally agreed protocols and emission estimates are derived by applying emission factors to estimates of (anthropogenic) emission sources. This approach is useful for comparing anthropogenic emissions from different countries, but does not capture regional differences and inter-annual variability associated with environmental factors (such as climate and soils) and agricultural management. In recent years, the UK inventory approach has been refined to include regional information into its emissions estimates, in an attempt to reduce uncertainty. This study attempts to assess the difference between current published inventory methodology (default IPCC methodology) and an alternative approach, which incorporates the latest thinking, using data from recent work. For 2013, emission estimates made using the alternative approach were 30 % lower than those made using default IPCC methodology, due to the use of lower emission factors suggested by recent projects (Defra projects: AC0116, AC0213 and MinNO). The 2013 emissions estimates were disaggregated on a monthly basis using agricultural management (e.g. sowing dates), climate data and soil properties. The temporally disaggregated emission maps were used as input to the Met Office atmospheric dispersion model NAME, for comparison with measured N2O concentrations, at three observation stations (Tacolneston, E. England; Ridge Hill, W. England; Mace Head, W. Ireland) in the UK DECC network (Deriving Emissions linked to Climate Change). The Mace Head site, situated on the west coast of Ireland, was used to establish baseline concentrations. The trends in the modelled data were found to correspond with the observational data trends, with concentration peaks coinciding with periods of land spreading of manures and fertiliser application. The model run using the default IPCC methodology was found to correspond with the observed data more closely than the alternative approach.

  8. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    NASA Astrophysics Data System (ADS)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

    In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm input data are the on-line arriving concentrations of the released substance registered by the distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probabilistic distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: the contamination source starting position (x,y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of the release (ts) and its duration (td). The newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
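
    A minimal sketch of plain rejection ABC for a reduced version of this problem (stationary location and release rate only) is shown below; the toy Gaussian kernel stands in for SCIPUFF, and the priors, tolerance and sensor layout are illustrative assumptions rather than the paper's Sequential ABC configuration.

```python
# Sketch of rejection ABC: sample candidate source parameters from priors, run a
# toy forward model, and keep candidates whose predicted concentrations lie
# within a tolerance of the observations.
import numpy as np

rng = np.random.default_rng(4)
sensors = np.array([[2.0, 1.0], [4.0, -1.0], [6.0, 0.5], [8.0, 2.0]])

def forward(x, y, q, spread=2.0):
    """Toy dispersion model standing in for SCIPUFF."""
    d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
    return q * np.exp(-(d / spread) ** 2)

observed = forward(0.0, 0.5, 100.0) * (1 + 0.05 * rng.standard_normal(4))

accepted = []
eps = 10.0                                          # ABC tolerance (assumed)
for _ in range(50_000):
    x, y, q = rng.uniform(-5, 5), rng.uniform(-5, 5), rng.uniform(10, 500)
    if np.linalg.norm(forward(x, y, q) - observed) < eps:
        accepted.append((x, y, q))

post = np.array(accepted)
if len(post):
    print("accepted:", len(post), " posterior means (x, y, q):", post.mean(axis=0).round(2))
```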

  9. Transportation Sector Model of the National Energy Modeling System. Volume 2 -- Appendices: Part 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The attachments contained within this appendix provide additional details about the model development and estimation process which do not easily lend themselves to incorporation in the main body of the model documentation report. The information provided in these attachments is not integral to the understanding of the model's operation, but provides the reader with an opportunity to gain a deeper understanding of some of the model's underlying assumptions. There will be a slight degree of replication of materials found elsewhere in the documentation, made unavoidable by the dictates of internal consistency. Each attachment is associated with a specific component of the transportation model; the presentation follows the same sequence of modules employed in Volume 1. The following attachments are contained in Appendix F: Fuel Economy Model (FEM)--provides a discussion of the FEM vehicle demand and performance by size class models; Alternative Fuel Vehicle (AFV) Model--describes data input sources and extrapolation methodologies; Light-Duty Vehicle (LDV) Stock Model--discusses the fuel economy gap estimation methodology; Light Duty Vehicle Fleet Model--presents the data development for business, utility, and government fleet vehicles; Light Commercial Truck Model--describes the stratification methodology and data sources employed in estimating the stock and performance of LCTs; Air Travel Demand Model--presents the derivation of the demographic index, used to modify estimates of personal travel demand; and Airborne Emissions Model--describes the derivation of emissions factors used to associate transportation measures to levels of airborne emissions of several pollutants.

  10. Acquisition Program Lead Systems Integration/Lead Capabilities Integration Decision Support Methodology and Tool

    DTIC Science & Technology

    2015-04-30

    from the MIT Sloan School that provide a relative complexity score for functions (Product and Context Complexity). The PMA assesses the complexity...

  11. The RADAR Test Methodology: Evaluating a Multi-Task Machine Learning System with Humans in the Loop

    DTIC Science & Technology

    2006-10-01

  12. Methods for Estimating the Uncertainty in Emergy Table-Form Models

    EPA Science Inventory

    Emergy studies have suffered criticism due to the lack of uncertainty analysis and this shortcoming may have directly hindered the wider application and acceptance of this methodology. Recently, to fill this gap, the sources of uncertainty in emergy analysis were described and an...

  13. EVALUATION AND REPORTING OF COUNTY GASOLINE USE METHODOLOGIES

    EPA Science Inventory

    The report reviews two EPA studies that investigated improvements in the allocation of state-level gasoline sales to the county level in order to improve annual county-level emissions estimates from this source category. The approaches taken in these studies are compared with the...

  14. EVALUATION OF ALTERNATIVE GAUSSIAN PLUME DISPERSION MODELING TECHNIQUES IN ESTIMATING SHORT-TERM SULFUR DIOXIDE CONCENTRATIONS

    EPA Science Inventory

    A routinely applied atmospheric dispersion model was modified to evaluate alternative modeling techniques which allowed for more detailed source data, onsite meteorological data, and several dispersion methodologies. These were evaluated with hourly SO2 concentrations measured at...

  15. Commercial Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  16. A systematic examination of a random sampling strategy for source apportionment calculations.

    PubMed

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
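
    A hedged sketch of the random-sampling idea for an N+1 source, N marker mixing problem: draw the source signatures from assumed distributions, solve the linear system with a mass-balance constraint, and summarise the resulting fractions. All signatures and the mixture value are synthetic illustrations, not the paper's data.

```python
import numpy as np

# Hedged sketch of a random-sampling (RS) source apportionment for three
# sources and two markers. Source signature means/SDs and the observed
# mixture are hypothetical placeholders.

rng = np.random.default_rng(42)

# Two markers for three sources; means and SDs of the signatures are invented.
src_mean = np.array([
    [-26.0, -29.0, -21.0],   # marker 1 per source
    [-999.0, 50.0, -50.0],   # marker 2 per source
])
src_sd = np.array([
    [1.0, 1.0, 1.5],
    [5.0, 30.0, 30.0],
])
mixture = np.array([-25.0, -300.0])   # observed marker values in the mixture

draws = []
for _ in range(10_000):
    S = rng.normal(src_mean, src_sd)                 # sampled source signatures
    A = np.vstack([S, np.ones(3)])                   # add mass-balance row
    b = np.append(mixture, 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)        # fractional contributions
    if np.all(f >= 0):                               # keep physically valid draws
        draws.append(f)

draws = np.array(draws)
print("median fractions:", np.round(np.median(draws, axis=0), 2))
```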

  17. Methodology for computing the burden of disease of adverse events following immunization.

    PubMed

    McDonald, Scott A; Nijsten, Danielle; Bollaerts, Kaatje; Bauwens, Jorgen; Praet, Nicolas; van der Sande, Marianne; Bauchau, Vincent; de Smedt, Tom; Sturkenboom, Miriam; Hahné, Susan

    2018-03-24

    Composite disease burden measures such as disability-adjusted life-years (DALY) have been widely used to quantify the population-level health impact of disease or injury, but application has been limited for the estimation of the burden of adverse events following immunization. Our objective was to assess the feasibility of adapting the DALY approach for estimating adverse event burden. We developed a practical methodological framework, explicitly describing all steps involved: acquisition of relative or absolute risks and background event incidence rates, selection of disability weights and durations, and computation of the years lived with disability (YLD) measure, with appropriate estimation of uncertainty. We present a worked example, in which YLD is computed for 3 recognized adverse reactions following 3 childhood vaccination types, based on background incidence rates and relative/absolute risks retrieved from the literature. YLD provided extra insight into the health impact of an adverse event over presentation of incidence rates only, as severity and duration are additionally incorporated. As well as providing guidance for the deployment of DALY methodology in the context of adverse events associated with vaccination, we also identified where data limitations potentially occur. Burden of disease methodology can be applied to estimate the health burden of adverse events following vaccination in a systematic way. As with all burden of disease studies, interpretation of the estimates must consider the quality and accuracy of the data sources contributing to the DALY computation. © 2018 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
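
    A minimal sketch of the YLD computation described above for a single hypothetical adverse event and vaccine; the relative risk, background rate, cohort size, disability weight and duration are illustrative assumptions, not values from the worked example in the paper.

```python
# Hedged sketch of a years-lived-with-disability (YLD) calculation for one
# hypothetical adverse event following one vaccine. All inputs are
# illustrative placeholders.

background_incidence = 20.0 / 100_000     # background events per person-year
relative_risk = 1.5                       # risk in vaccinated vs unvaccinated
vaccinated_cohort = 700_000               # number of vaccinated children
risk_window_years = 6.0 / 52.0            # 6-week post-vaccination risk window

# Excess (attributable) cases among the vaccinated during the risk window
excess_cases = background_incidence * (relative_risk - 1.0) \
    * vaccinated_cohort * risk_window_years

disability_weight = 0.10                  # severity of the adverse event (0-1)
duration_years = 0.05                     # average duration of the episode

yld = excess_cases * disability_weight * duration_years
print(f"excess cases ~ {excess_cases:.1f}, YLD ~ {yld:.2f}")
```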

  18. An adaptive Bayesian inference algorithm to estimate the parameters of a hazardous atmospheric release

    NASA Astrophysics Data System (ADS)

    Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques

    2015-12-01

    In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by the means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to those coming from the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.

  19. The cost of vision loss in Canada. 1. Methodology.

    PubMed

    Gordon, Keith D; Cruess, Alan F; Bellan, Lorne; Mitchell, Scott; Pezzullo, M Lynne

    2011-08-01

    This paper outlines the methodology used to estimate the cost of vision loss in Canada. The results of this study will be presented in a second paper. The cost of vision loss (VL) in Canada was estimated using a prevalence-based approach. This was done by estimating the number of people with VL in a base period (2007) and the costs associated with treating them. The cost estimates included direct health system expenditures on eye conditions that cause VL, as well as other indirect financial costs such as productivity losses. Estimates were also made of the value of the loss of healthy life, measured in Disability-Adjusted Life Years (DALYs). To estimate the number of cases of VL in the population, epidemiological data on prevalence rates were applied to population data. The number of cases of VL was stratified by gender, age, ethnicity, severity and cause. The following sources were used for estimating prevalence: population-based eye studies; Canadian surveys; Canadian journal articles and research studies; and international population-based eye studies. Direct health costs were obtained primarily from Health Canada and Canadian Institute for Health Information (CIHI) sources, while costs associated with productivity losses were based on employment information compiled by Statistics Canada and on economic theory of productivity loss. Costs related to vision rehabilitation (VR) were obtained from Canadian VR organizations. This study shows that it is possible to estimate the costs of VL for a country in the absence of ongoing local epidemiological studies. Copyright © 2011 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.

  20. Validation of a novel air toxic risk model with air monitoring.

    PubMed

    Pratt, Gregory C; Dymond, Mary; Ellickson, Kristie; Thé, Jesse

    2012-01-01

    Three modeling systems were used to estimate human health risks from air pollution: two versions of MNRiskS (for Minnesota Risk Screening), and the USEPA National Air Toxics Assessment (NATA). MNRiskS is a unique cumulative risk modeling system used to assess risks from multiple air toxics, sources, and pathways on a local to a state-wide scale. In addition, ambient outdoor air monitoring data were available for estimation of risks and comparison with the modeled estimates of air concentrations. Highest air concentrations and estimated risks were generally found in the Minneapolis-St. Paul metropolitan area and lowest risks in undeveloped rural areas. Emissions from mobile and area (nonpoint) sources created greater estimated risks than emissions from point sources. Highest cancer risks were via ingestion pathway exposures to dioxins and related compounds. Diesel particles, acrolein, and formaldehyde created the highest estimated inhalation health impacts. Model-estimated air concentrations were generally highest for NATA and lowest for the AERMOD version of MNRiskS. This validation study showed reasonable agreement between available measurements and model predictions, although results varied among pollutants, and predictions were often lower than measurements. The results increased confidence in identifying pollutants, pathways, geographic areas, sources, and receptors of potential concern, and thus provide a basis for informing pollution reduction strategies and focusing efforts on specific pollutants (diesel particles, acrolein, and formaldehyde), geographic areas (urban centers), and source categories (nonpoint sources). The results heighten concerns about risks from food chain exposures to dioxins and PAHs. Risk estimates were sensitive to variations in methodologies for treating emissions, dispersion, deposition, exposure, and toxicity. © 2011 Society for Risk Analysis.

  1. Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images

    PubMed Central

    Wang, John D.; Solo-Gabriele, Helena M.; Abdelzaher, Amir M.; Fleming, Lora E.

    2010-01-01

    Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We estimated their respective source functions by developing a counting methodology for individuals, in order to better understand their non-point source load impacts. The method utilizes camera images of the beach, taken at regular time intervals, to determine the number of people and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for an average day of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent the largest source of enterococci relative to humans and birds. PMID:20381094

  2. Global inverse modeling of CH4 sources and sinks: an overview of methods

    NASA Astrophysics Data System (ADS)

    Houweling, Sander; Bergamaschi, Peter; Chevallier, Frederic; Heimann, Martin; Kaminski, Thomas; Krol, Maarten; Michalak, Anna M.; Patra, Prabir

    2017-01-01

    The aim of this paper is to present an overview of inverse modeling methods that have been developed over the years for estimating the global sources and sinks of CH4. It provides insight into how techniques and estimates have evolved over time and what the remaining shortcomings are. As such, it serves a didactical purpose of introducing apprentices to the field, but it also takes stock of developments so far and reflects on promising new directions. The main focus is on methodological aspects that are particularly relevant for CH4, such as its atmospheric oxidation, the use of methane isotopologues, and specific challenges in atmospheric transport modeling of CH4. The use of satellite retrievals receives special attention as it is an active field of methodological development, with special requirements on the sampling of the model and the treatment of data uncertainty. Regional scale flux estimation and attribution is still a grand challenge, which calls for new methods capable of combining information from multiple data streams of different measured parameters. A process model representation of sources and sinks in atmospheric transport inversion schemes allows the integrated use of such data. These new developments are needed not only to improve our understanding of the main processes driving the observed global trend but also to support international efforts to reduce greenhouse gas emissions.
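
    A hedged sketch of the linear Gaussian flux inversion that underlies many of the reviewed top-down methods, using the standard posterior-mean expression; the Jacobian, prior fluxes and observations are tiny synthetic placeholders rather than output of an atmospheric transport model.

```python
import numpy as np

# Hedged sketch of a linear Gaussian ("top-down") flux inversion: given a
# Jacobian H mapping fluxes to concentrations, prior fluxes x_b with
# covariance B, and observations y with error covariance R, the posterior is
#   x_hat = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b).
# H, priors and observations below are tiny synthetic placeholders.

rng = np.random.default_rng(3)
n_obs, n_flux = 6, 3

H = rng.uniform(0.1, 1.0, size=(n_obs, n_flux))     # transport Jacobian (ppb per Tg/yr)
x_b = np.array([20.0, 35.0, 10.0])                  # prior regional fluxes, Tg/yr
B = np.diag([25.0, 25.0, 9.0])                      # prior error covariance
R = np.eye(n_obs) * 4.0                             # observation error covariance

x_true = np.array([25.0, 30.0, 14.0])               # hidden truth (illustration only)
y = H @ x_true + rng.normal(0.0, 2.0, n_obs)        # synthetic observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)        # gain matrix
x_hat = x_b + K @ (y - H @ x_b)                     # posterior flux estimate
A_post = (np.eye(n_flux) - K @ H) @ B               # posterior error covariance

print("posterior fluxes:", np.round(x_hat, 1))
print("posterior std dev:", np.round(np.sqrt(np.diag(A_post)), 1))
```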

  3. Social Costs of Gambling in the Czech Republic 2012.

    PubMed

    Winkler, Petr; Bejdová, Markéta; Csémy, Ladislav; Weissová, Aneta

    2017-12-01

    Evidence about the social costs of gambling is scarce, and the methodology for their calculation has been subject to strong criticism. We aimed to estimate the social costs of gambling in the Czech Republic in 2012. This retrospective, prevalence-based cost-of-illness study builds on the revised methodology of the Australian Productivity Commission. Social costs of gambling were estimated by combining epidemiological and economic data. Prevalence data on the negative consequences of gambling were taken from existing national epidemiological studies. Economic data were taken from various national and international sources. Only the consequences of problem and pathological gambling were taken into account. In 2012, the social costs of gambling in the Czech Republic were estimated to range between 541,619 and 619,608 thousand EUR. While personal and family costs accounted for 63% of all social costs, direct medical costs were estimated at only 0.25-0.28% of all social costs. This is the first study to estimate the social costs of gambling in any of the Central and East European countries. It builds upon solid evidence about the prevalence of gambling-related problems in the Czech Republic and reasonably reliable economic data. However, a number of limitations stemming from the assumptions made suggest that the methodology for calculating the social costs of gambling needs further development.

  4. Estimating the Global Prevalence of Inadequate Zinc Intake from National Food Balance Sheets: Effects of Methodological Assumptions

    PubMed Central

    Wessells, K. Ryan; Singh, Gitanjali M.; Brown, Kenneth H.

    2012-01-01

    Background The prevalence of inadequate zinc intake in a population can be estimated by comparing the zinc content of the food supply with the population’s theoretical requirement for zinc. However, assumptions regarding the nutrient composition of foods, zinc requirements, and zinc absorption may affect prevalence estimates. These analyses were conducted to: (1) evaluate the effect of varying methodological assumptions on country-specific estimates of the prevalence of dietary zinc inadequacy and (2) generate a model considered to provide the best estimates. Methodology and Principal Findings National food balance data were obtained from the Food and Agriculture Organization of the United Nations. Zinc and phytate contents of these foods were estimated from three nutrient composition databases. Zinc absorption was predicted using a mathematical model (Miller equation). Theoretical mean daily per capita physiological and dietary requirements for zinc were calculated using recommendations from the Food and Nutrition Board of the Institute of Medicine and the International Zinc Nutrition Consultative Group. The estimated global prevalence of inadequate zinc intake varied between 12–66%, depending on which methodological assumptions were applied. However, country-specific rank order of the estimated prevalence of inadequate intake was conserved across all models (r = 0.57–0.99, P<0.01). A “best-estimate” model, comprised of zinc and phytate data from a composite nutrient database and IZiNCG physiological requirements for absorbed zinc, estimated the global prevalence of inadequate zinc intake to be 17.3%. Conclusions and Significance Given the multiple sources of uncertainty in this method, caution must be taken in the interpretation of the estimated prevalence figures. However, the results of all models indicate that inadequate zinc intake may be fairly common globally. Inferences regarding the relative likelihood of zinc deficiency as a public health problem in different countries can be drawn based on the country-specific rank order of estimated prevalence of inadequate zinc intake. PMID:23209781
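
    A minimal sketch of a cut-point style prevalence estimate under the assumptions above: compare an assumed distribution of absorbable zinc intake, centred on the per capita supply, with a mean physiological requirement. The supply, variability and requirement values are illustrative, not outputs of the models compared in the paper.

```python
from statistics import NormalDist

# Hedged sketch of a cut-point style estimate: given a mean per-capita
# absorbable zinc supply and an assumed inter-individual variation, the
# prevalence of inadequate intake is the fraction of the distribution below
# the mean physiological requirement. All numbers are illustrative.

mean_absorbable_zn = 2.9       # mg/day absorbed zinc, per capita (hypothetical)
cv_intake = 0.25               # assumed coefficient of variation of intake
mean_requirement = 2.4         # mg/day mean physiological requirement (hypothetical)

sd_intake = cv_intake * mean_absorbable_zn
prevalence = NormalDist(mean_absorbable_zn, sd_intake).cdf(mean_requirement)
print(f"estimated prevalence of inadequate zinc intake: {prevalence:.1%}")
```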

  5. Regional Earthquake Shaking and Loss Estimation

    NASA Astrophysics Data System (ADS)

    Sesetyan, K.; Demircioglu, M. B.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters, and, if and when possible, by the estimation of fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimation of the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and using shear wave velocity distributions (Shake Mapping). 4. Incorporation of strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 5. Estimation of the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2), commensurate with the availability of the inventory of the human-built environment (Loss Mapping). Both the Level 0 (similar to the PAGER system of the USGS) and Level 1 analyses of the ELER routine are based on obtaining intensity distributions analytically and estimating the total number of casualties and their geographic distribution, either using regionally adjusted intensity-casualty or magnitude-casualty correlations (Level 0) or using regional building inventory data bases (Level 1). For given basic source parameters the intensity distributions can be computed using: a) regional intensity attenuation relationships, b) intensity correlations with attenuation-relationship-based PGV, PGA and spectral amplitudes and, c) intensity correlations with a synthetic Fourier amplitude spectrum. In the Level 1 analysis, EMS98-based building vulnerability relationships are used for regional estimates of building damage and casualty distributions. Results obtained from pilot applications of the Level 0 and Level 1 analysis modes of the ELER software to the 1999 M 7.4 Kocaeli, 1995 M 6.1 Dinar, and 2007 M 5.4 Bingol earthquakes, in terms of ground shaking and losses, are presented and comparisons with the observed losses are made. The regional earthquake shaking and loss information is intended for dissemination in a timely manner to related agencies for the planning and coordination of the post-earthquake emergency response. However, the same software can also be used for scenario earthquake loss estimation and related Monte-Carlo type simulations.

  6. Sources of Error in Substance Use Prevalence Surveys

    PubMed Central

    Johnson, Timothy P.

    2014-01-01

    Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511

  7. Methodological Framework for World Health Organization Estimates of the Global Burden of Foodborne Disease

    PubMed Central

    Devleesschauwer, Brecht; Haagsma, Juanita A.; Angulo, Frederick J.; Bellinger, David C.; Cole, Dana; Döpfer, Dörte; Fazil, Aamir; Fèvre, Eric M.; Gibb, Herman J.; Hald, Tine; Kirk, Martyn D.; Lake, Robin J.; Maertens de Noordhout, Charline; Mathers, Colin D.; McDonald, Scott A.; Pires, Sara M.; Speybroeck, Niko; Thomas, M. Kate; Torgerson, Paul R.; Wu, Felicia; Havelaar, Arie H.; Praet, Nicolas

    2015-01-01

    Background The Foodborne Disease Burden Epidemiology Reference Group (FERG) was established in 2007 by the World Health Organization to estimate the global burden of foodborne diseases (FBDs). This paper describes the methodological framework developed by FERG's Computational Task Force to transform epidemiological information into FBD burden estimates. Methods and Findings The global and regional burden of 31 FBDs was quantified, along with limited estimates for 5 other FBDs, using Disability-Adjusted Life Years in a hazard- and incidence-based approach. To accomplish this task, the following workflow was defined: outline of disease models and collection of epidemiological data; design and completion of a database template; development of an imputation model; identification of disability weights; probabilistic burden assessment; and estimating the proportion of the disease burden by each hazard that is attributable to exposure by food (i.e., source attribution). All computations were performed in R and the different functions were compiled in the R package 'FERG'. Traceability and transparency were ensured by sharing results and methods in an interactive way with all FERG members throughout the process. Conclusions We developed a comprehensive framework for estimating the global burden of FBDs, in which methodological simplicity and transparency were key elements. All the tools developed have been made available and can be translated into a user-friendly national toolkit for studying and monitoring food safety at the local level. PMID:26633883

  8. Estimating stream discharge from a Himalayan Glacier using coupled satellite sensor data

    NASA Astrophysics Data System (ADS)

    Child, S. F.; Stearns, L. A.; van der Veen, C. J.; Haritashya, U. K.; Tarpanelli, A.

    2015-12-01

    The 4th IPCC report highlighted our limited understanding of Himalayan glacier behavior and its contribution to the region's hydrology. Seasonal snow and glacier melt in the Himalayas are important sources of water, but estimates of the actual contribution of melted glacier ice to stream discharge differ greatly. A more comprehensive understanding of the contribution of glaciers to stream discharge is needed because glacier-fed streams affect the livelihoods of a large part of the world's population. Most streams in the Himalayas are unmonitored because in situ measurements are logistically difficult and costly. This necessitates the use of remote sensing platforms to obtain estimates of river discharge for validating hydrological models. In this study, we estimate stream discharge using cost-effective methods via repeat satellite imagery from the Landsat-8 and SENTINEL-1A sensors. The methodology is based on previous studies, which show that ratio values from optical satellite bands correlate well with measured stream discharge. While similar, our methodology relies on significantly higher resolution imagery (30 m) and utilizes bands in the blue and near-infrared spectrum, as opposed to previous studies that used 250 m resolution imagery and spectral bands only in the near-infrared. Higher resolution imagery is necessary for streams whose source is a glacier's terminus, because the width of the stream is often only tens of meters. We validate our methodology using two rivers in the state of Kansas, where stream gauges are plentiful. We then apply our method to the Bhagirathi River, in the north-central Himalayas, which is fed by the Gangotri Glacier and has a well-monitored stream gauge. The analysis will later be used to couple river discharge with glacier flow and mass balance through an integrated hydrologic model in the Bhagirathi Basin.
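
    A hedged sketch of the band-ratio approach the study builds on: calibrate a simple linear relation between a reflectance ratio and gauged discharge, then apply it to new scenes. The ratios and discharges are synthetic, and the simple linear form is an assumption for illustration.

```python
import numpy as np

# Hedged sketch of a band-ratio discharge estimate: regress gauge discharge
# on a reflectance ratio between a stable "dry" pixel and a water pixel,
# then apply the fit to new scenes. All numbers are synthetic illustrations,
# not values from the Landsat-8/SENTINEL-1A analysis described above.

# Synthetic calibration data: band ratio (dry pixel / wet pixel) vs discharge
ratio = np.array([1.8, 2.1, 2.6, 3.0, 3.4, 3.9])         # dimensionless
discharge = np.array([55., 70., 95., 120., 140., 165.])  # m^3/s from a gauge

slope, intercept = np.polyfit(ratio, discharge, 1)        # simple linear fit

def estimate_discharge(new_ratio):
    """Apply the calibrated linear relation to a ratio from a new image."""
    return slope * new_ratio + intercept

print(f"Q ~ {estimate_discharge(2.8):.0f} m^3/s for a band ratio of 2.8")
```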

  9. REAL-TIME MODELING AND MEASUREMENT OF MOBILE SOURCE POLLUTANT CONCENTRATIONS FOR ESTIMATING HUMAN EXPOSURES IN COMMUNITIES NEAR ROADWAYS

    EPA Science Inventory

    The United States Environmental Protection Agency's (EPA) National Exposure Research Laboratory (NERL) is pursuing a project to improve the methodology for real-time site specific modeling of human exposure to pollutants from motor vehicles. The overall project goal is to deve...

  10. Potential of neuro-fuzzy methodology to estimate noise level of wind turbines

    NASA Astrophysics Data System (ADS)

    Nikolić, Vlastimir; Petković, Dalibor; Por, Lip Yee; Shamshirband, Shahaboddin; Zamani, Mazdak; Ćojbašić, Žarko; Motamedi, Shervin

    2016-01-01

    Wind turbine noise has become a significant problem as the number of wind farms increases and renewable energy becomes one of the most influential energy sources. However, wind turbine noise generation and propagation are not fully understood in all aspects. Mechanical noise from wind turbines can largely be ignored, since aerodynamic noise from the turbine blades is the main source of noise generation. Numerical simulation of wind turbine noise can be a very challenging task. Therefore, in this article a soft computing method is used to evaluate the noise level of wind turbines. The main goal of the study is to estimate wind turbine noise as a function of wind speed at different heights and for different sound frequencies. An adaptive neuro-fuzzy inference system (ANFIS) is used to estimate the wind turbine noise levels.

  11. The need for harmonization of methods for finding locations and magnitudes of air pollution sources using observations of concentrations and wind fields

    NASA Astrophysics Data System (ADS)

    Hanna, Steven R.; Young, George S.

    2017-01-01

    What do the terms "top-down", "inverse", "backwards", "adjoint", "sensor data fusion", "receptor", "source term estimation (STE)", to name several appearing in the current literature, have in common? These varied terms are used by different disciplines to describe the same general methodology - the use of observations of air pollutant concentrations and knowledge of wind fields to identify air pollutant source locations and/or magnitudes. Academic journals are publishing increasing numbers of papers on this topic. Examples of scenarios related to this growing interest, ordered from small scale to large scale, are: use of real-time samplers to quickly estimate the location of a toxic gas release by a terrorist at a large public gathering (e.g., Haupt et al., 2009);

  12. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for constructing helicopter source noise models, for use in mission planning tools, from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  13. Bayesian Inference for Source Term Estimation: Application to the International Monitoring System Radionuclide Network

    DTIC Science & Technology

    2014-10-01

    (in terms of accuracy and precision), compared with the simpler measurement model that does not use multipliers. Significance for defence...3) Bayesian experimental design for receptor placement in order to maximize the expected information in the measured concentration data for...applications of the Bayesian inferential methodology for source reconstruction have used high-quality concentration data from well-designed atmospheric

  14. Methodology for Prioritization of Investments to Support the Army Energy Strategy for Installations

    DTIC Science & Technology

    2012-07-01

    kind of energy source onto its own footprint. Whether this is a solar, wind, biomass, geothermal, or any other kind of renewable energy source, it...more common. Right now extortion and disgruntled employers are the attacked and not sophisticated enemies such as China. Our current nation power...users to: • Estimate the NPV cost of energy (COE) and levelized cost of energy (LCOE) from a range of solar, wind and geothermal electricity generation

  15. Validation and calibration of structural models that combine information from multiple sources.

    PubMed

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  16. Overdiagnosis across medical disciplines: a scoping review

    PubMed Central

    de Groot, Joris A H; Reitsma, Johannes B; Moons, Karel G M; Hooft, Lotty; Naaktgeboren, Christiana A

    2017-01-01

    Objective To provide insight into how and in what clinical fields overdiagnosis is studied and give directions for further applied and methodological research. Design Scoping review. Data sources Medline up to August 2017. Study selection All English studies on humans, in which overdiagnosis was discussed as a dominant theme. Data extraction Studies were assessed on clinical field, study aim (ie, methodological or non-methodological), article type (eg, primary study, review), the type and role of diagnostic test(s) studied and the context in which these studies discussed overdiagnosis. Results From 4896 studies, 1851 were included for analysis. Half of all studies on overdiagnosis were performed in the field of oncology (50%). Other prevalent clinical fields included mental disorders, infectious diseases and cardiovascular diseases accounting for 9%, 8% and 6% of studies, respectively. Overdiagnosis was addressed from a methodological perspective in 20% of studies. Primary studies were the most common article type (58%). The type of diagnostic tests most commonly studied were imaging tests (32%), although these were predominantly seen in oncology and cardiovascular disease (84%). Diagnostic tests were studied in a screening setting in 43% of all studies, but as high as 75% of all oncological studies. The context in which studies addressed overdiagnosis related most frequently to its estimation, accounting for 53%. Methodology on overdiagnosis estimation and definition provided a source for extensive discussion. Other contexts of discussion included definition of disease, overdiagnosis communication, trends in increasing disease prevalence, drivers and consequences of overdiagnosis, incidental findings and genomics. Conclusions Overdiagnosis is discussed across virtually all clinical fields and in different contexts. The variability in characteristics between studies and lack of consensus on overdiagnosis definition indicate the need for a uniform typology to improve coherence and comparability of studies on overdiagnosis. PMID:29284720

  17. Quantification of groundwater recharge in urban environments.

    PubMed

    Tubau, Isabel; Vázquez-Suñé, Enric; Carrera, Jesús; Valhondo, Cristina; Criollo, Rotman

    2017-08-15

    Groundwater management in urban areas requires a detailed knowledge of the hydrogeological system as well as the adequate tools for predicting the amount of groundwater and water quality evolution. In that context, a key difference between urban and natural areas lies in recharge evaluation. A large number of studies have been published since the 1990s that evaluate recharge in urban areas, with no specific methodology. Most of these methods show that there are generally higher rates of recharge in urban settings than in natural settings. Methods such as mixing ratios or groundwater modeling can be used to better estimate the relative importance of different sources of recharge and may prove to be a good tool for total recharge evaluation. However, accurate evaluation of this input is difficult. The objective is to present a methodology to help overcome those difficulties, and which will allow us to quantify the variability in space and time of the recharge into aquifers in urban areas. Recharge calculations have been initially performed by defining and applying some analytical equations, and validation has been assessed based on groundwater flow and solute transport modeling. This methodology is applicable to complex systems by considering temporal variability of all water sources. This allows managers of urban groundwater to evaluate the relative contribution of different recharge sources at a city scale by considering quantity and quality factors. The methodology is applied to the assessment of recharge sources in the Barcelona city aquifers. Copyright © 2017 Elsevier B.V. All rights reserved.
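
    A hedged sketch of a mixing-ratio calculation of the kind mentioned above, assuming conservative tracers and fixed end-member compositions; the tracers, end-member values and the observed sample are hypothetical, not data from the Barcelona application.

```python
import numpy as np
from scipy.optimize import nnls

# Hedged sketch of a mixing-ratio calculation for urban recharge sources:
# solve for the fractional contribution of each end-member (e.g. water-supply
# losses, sewer leakage, rainfall infiltration) that best reproduces observed
# groundwater chemistry. End-member compositions and the observed sample are
# hypothetical, not data from the Barcelona aquifers.

# Rows: conservative tracers (Cl-, SO4--) plus a mass-balance row of ones.
# Columns: end-members (supply network, sewer, rainfall recharge).
end_members = np.array([
    [120.0, 250.0, 15.0],   # Cl-   (mg/L)
    [ 60.0, 110.0,  8.0],   # SO4-- (mg/L)
    [  1.0,   1.0,  1.0],   # fractions must sum to ~1
])
observed = np.array([150.0, 70.0, 1.0])  # measured groundwater sample

fractions, residual = nnls(end_members, observed)  # non-negative mixing fractions
for name, f in zip(["supply losses", "sewer leakage", "rainfall"], fractions):
    print(f"{name}: {f:.2f}")
```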

  18. Black carbon emissions in Russia: A critical review

    NASA Astrophysics Data System (ADS)

    Evans, Meredydd; Kholod, Nazar; Kuklinski, Teresa; Denysenko, Artur; Smith, Steven J.; Staniszewski, Aaron; Hao, Wei Min; Liu, Liang; Bond, Tami C.

    2017-08-01

    This study presents a comprehensive review of estimated black carbon (BC) emissions in Russia from a range of studies. Russia has an important role regarding BC emissions given the extent of its territory above the Arctic Circle, where BC emissions have a particularly pronounced effect on the climate. We assess the underlying methodologies and data sources for each major emissions source based on their level of detail, accuracy and the extent to which they represent current conditions. We then present reference values for each major emissions source. In the case of flaring, the study presents new estimates drawing on data on Russia's associated petroleum gas and the most recent satellite data on flaring. We also present estimates of organic carbon (OC) for each source, either based on the reference studies or from our own calculations. In addition, the study provides uncertainty estimates for each source. Total BC emissions are estimated at 688 Gg in 2014, with an uncertainty range of 401-1453 Gg, while OC emissions are 9224 Gg, with an uncertainty range of 5596-14,736 Gg. Wildfires dominated, contributing about 83% of total BC emissions; however, the effect on radiative forcing is mitigated in part by OC emissions. We also present an adjusted estimate of Arctic forcing from Russia's BC and OC emissions. In recent years, Russia has pursued policies to reduce flaring and limit particulate emissions from on-road transport, both of which appear to contribute significantly to the lower emissions and forcing values found in this study.

  19. Black carbon emissions in Russia: A critical review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Kholod, Nazar; Kuklinski, Teresa

    Russia has a particularly important role regarding black carbon (BC) emissions given the extent of its territory above the Arctic Circle, where BC emissions have a particularly pronounced effect on the climate. This study presents a comprehensive review of BC estimates from a range of studies. We assess the underlying methodologies and data sources for each major emissions source based on their level of detail, accuracy and the extent to which they represent current conditions. We then present reference values for each major emissions source. In the case of flaring, the study presents new estimates drawing on data on Russian associated petroleum gas and the most recent satellite data on flaring. We also present estimates of organic carbon (OC) for each source, either based on the reference studies or from our own calculations. In addition, the study provides uncertainty estimates for each source. Total BC emissions are estimated at 689 Gg in 2014, with an uncertainty range of 407-1,416 Gg, while OC emissions are 9,228 Gg, with an uncertainty range of 5,595-14,728 Gg. Wildfires dominated, contributing about 83% of total BC emissions; however, the effect on radiative forcing is mitigated by OC emissions. We also present an adjusted estimate of Arctic forcing from Russian OC and BC emissions. In recent years, Russia has pursued policies to reduce flaring and limit particulate emissions from on-road transport, both of which appear to contribute significantly to the lower emissions and forcing values found in this study.

  20. Estimating Children’s Soil/Dust Ingestion Rates through Retrospective Analyses of Blood Lead Biomonitoring from the Bunker Hill Superfund Site in Idaho

    PubMed Central

    von Lindern, Ian; Spalinger, Susan; Stifelman, Marc L.; Stanek, Lindsay Wichers; Bartrem, Casey

    2016-01-01

    Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. Objectives: The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose–response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. Methods: The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children’s blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Results: Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/dust ingestion rates are 86–94 mg/day for 6-month- to 2-year-old children and 51–67 mg/day for 2- to 9-year-old children. Conclusions: Soil/dust ingestion rate estimates for 1- to 9-year-old children at the BHSS are lower than those commonly used in human health risk assessment. A substantial component of children’s exposure comes from sources beyond the immediate home environment. Citation: von Lindern I, Spalinger S, Stifelman ML, Stanek LW, Bartrem C. 2016. Estimating children’s soil/dust ingestion rates through retrospective analyses of blood lead biomonitoring from the Bunker Hill Superfund Site in Idaho. Environ Health Perspect 124:1462–1470; http://dx.doi.org/10.1289/ehp.1510144 PMID:26745545

  1. Describing the epidemiology of rheumatic diseases: methodological aspects.

    PubMed

    Guillemin, Francis

    2012-03-01

    Producing descriptive epidemiology data is essential to understanding the burden of rheumatic diseases (prevalence) and their dynamics in the population (incidence). No matter how simple such indicators may look, the correct collection of data and the appropriate interpretation of the results face several challenges: distinguishing between indicators, facing the costs of obtaining data, using appropriate definitions, identifying optimal sources of data, choosing among many survey methods, dealing with the precision of estimates, and standardizing results. This study describes the underlying methodological difficulties to be overcome so as to make descriptive indicators reliable and interpretable.

  2. Maximizing the spatial representativeness of NO2 monitoring data using a combination of local wind-based sectoral division and seasonal and diurnal correction factors.

    PubMed

    Donnelly, Aoife; Naughton, Owen; Misstear, Bruce; Broderick, Brian

    2016-10-14

    This article describes a new methodology for increasing the spatial representativeness of individual monitoring sites. Air pollution levels at a given point are influenced by emission sources in the immediate vicinity. Since emission sources are rarely uniformly distributed around a site, concentration levels will inevitably be most affected by the sources in the prevailing upwind direction. The methodology provides a means of capturing this effect and providing additional information regarding source/pollution relationships. The methodology allows for the division of the air quality data from a given monitoring site into a number of sectors or wedges based on wind direction and estimation of annual mean values for each sector, thus optimising the information that can be obtained from a single monitoring station. The method corrects for short-term data, diurnal and seasonal variations in concentrations (which can produce uneven weighting of data within each sector) and uneven frequency of wind directions. Significant improvements in correlations between the air quality data and the spatial air quality indicators were obtained after application of the correction factors. This suggests the application of these techniques would be of significant benefit in land-use regression modelling studies. Furthermore, the method was found to be very useful for estimating long-term mean values and wind direction sector values using only short-term monitoring data. The methods presented in this article can result in cost savings through minimising the number of monitoring sites required for air quality studies while also capturing a greater degree of variability in spatial characteristics. In this way, more reliable, but also more expensive monitoring techniques can be used in preference to a higher number of low-cost but less reliable techniques. The methods described in this article have applications in local air quality management, source receptor analysis, land-use regression mapping and modelling and population exposure studies.
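
    A hedged sketch of the sector-division idea with diurnal and seasonal correction, on synthetic hourly data; the 45-degree sector width and the form of the correction factors are assumptions for illustration, not the article's exact procedure.

```python
import numpy as np
import pandas as pd

# Hedged sketch of wind-sector division with diurnal/seasonal correction:
# assign each hourly observation to a wind-direction sector, normalise the
# concentration by the average relative level for that hour and month, and
# average within each sector. Data are synthetic; sector width and the form
# of the correction factors are illustrative assumptions.

rng = np.random.default_rng(1)
n = 24 * 365
df = pd.DataFrame({
    "no2": rng.gamma(shape=4.0, scale=6.0, size=n),        # ug/m3, synthetic
    "wd": rng.uniform(0.0, 360.0, size=n),                  # wind direction, deg
    "hour": np.tile(np.arange(24), 365),
    "month": np.repeat(np.arange(1, 13), n // 12)[:n],
})

# Diurnal and seasonal correction: divide by the mean relative level for that
# hour-of-day and month, so sectors sampled mostly at "high" hours are not biased.
diurnal = df.groupby("hour")["no2"].mean() / df["no2"].mean()
seasonal = df.groupby("month")["no2"].mean() / df["no2"].mean()
df["no2_corrected"] = df["no2"] / (df["hour"].map(diurnal) * df["month"].map(seasonal))

# 45-degree wind sectors (an assumed sector width)
df["sector"] = (df["wd"] // 45).astype(int)
sector_means = df.groupby("sector")["no2_corrected"].mean()
print(sector_means.round(1))
```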

  3. Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises.

    PubMed

    Cannavò, Flavio; Camacho, Antonio G; González, Pablo J; Mattia, Mario; Puglisi, Giuseppe; Fernández, José

    2015-06-09

    Volcano observatories provide near real-time information and, ultimately, forecasts about volcano activity. For this reason, multiple physical and chemical parameters are continuously monitored. Here, we present a new method to efficiently estimate the location and evolution of magmatic sources based on a stream of real-time surface deformation data, such as High-Rate GPS, and a free-geometry magmatic source model. The tool allows tracking inflation and deflation sources in time, providing estimates of where a volcano might erupt, which is important in understanding an on-going crisis. We show a successful simulated application to the pre-eruptive period of May 2008, at Mount Etna (Italy). The proposed methodology is able to track the fast dynamics of the magma migration by inverting the real-time data within seconds. This general method is suitable for integration in any volcano observatory. The method provides first order unsupervised and realistic estimates of the locations of magmatic sources and of potential eruption sites, information that is especially important for civil protection purposes.

  4. Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises

    PubMed Central

    Cannavò, Flavio; Camacho, Antonio G.; González, Pablo J.; Mattia, Mario; Puglisi, Giuseppe; Fernández, José

    2015-01-01

    Volcano observatories provide near real-time information and, ultimately, forecasts about volcano activity. For this reason, multiple physical and chemical parameters are continuously monitored. Here, we present a new method to efficiently estimate the location and evolution of magmatic sources based on a stream of real-time surface deformation data, such as High-Rate GPS, and a free-geometry magmatic source model. The tool allows tracking inflation and deflation sources in time, providing estimates of where a volcano might erupt, which is important in understanding an on-going crisis. We show a successful simulated application to the pre-eruptive period of May 2008, at Mount Etna (Italy). The proposed methodology is able to track the fast dynamics of the magma migration by inverting the real-time data within seconds. This general method is suitable for integration in any volcano observatory. The method provides first order unsupervised and realistic estimates of the locations of magmatic sources and of potential eruption sites, information that is especially important for civil protection purposes. PMID:26055494

  5. Medical costs and quality-adjusted life years associated with smoking: a systematic review.

    PubMed

    Feirman, Shari P; Glasser, Allison M; Teplitskaya, Lyubov; Holtgrave, David R; Abrams, David B; Niaura, Raymond S; Villanti, Andrea C

    2016-07-27

    Estimated medical costs ("T") and QALYs ("Q") associated with smoking are frequently used in cost-utility analyses of tobacco control interventions. The goal of this study was to understand how researchers have addressed the methodological challenges involved in estimating these parameters. Data were collected as part of a systematic review of tobacco modeling studies. We searched five electronic databases on July 1, 2013 with no date restrictions and synthesized studies qualitatively. Studies were eligible for the current analysis if they were U.S.-based, provided an estimate for Q, and used a societal perspective and lifetime analytic horizon to estimate T. We identified common methods and frequently cited sources used to obtain these estimates. Across all 18 studies included in this review, 50 % cited a 1992 source to estimate the medical costs associated with smoking and 56 % cited a 1996 study to derive the estimate for QALYs saved by quitting or preventing smoking. Approaches for estimating T varied dramatically among the studies included in this review. T was valued as a positive number, negative number and $0; five studies did not include estimates for T in their analyses. The most commonly cited source for Q based its estimate on the Health Utilities Index (HUI). Several papers also cited sources that based their estimates for Q on the Quality of Well-Being Scale and the EuroQol five dimensions questionnaire (EQ-5D). Current estimates of the lifetime medical care costs and the QALYs associated with smoking are dated and do not reflect the latest evidence on the health effects of smoking, nor the current costs and benefits of smoking cessation and prevention. Given these limitations, we recommend that researchers conducting economic evaluations of tobacco control interventions perform extensive sensitivity analyses around these parameter estimates.

  6. Evaluation of high temperature superconductive thermal bridges for space borne cryogenic detectors

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.

    1996-01-01

    Infrared sensor satellites are used to monitor the conditions in the earth's upper atmosphere. In these systems, the electronic links connecting the cryogenically cooled infrared detectors to the significantly warmer amplification electronics act as thermal bridges and, consequently, the mission lifetimes of the satellites are limited due to cryogenic evaporation. High-temperature superconductor (HTS) materials have been proposed by researchers at the National Aeronautics and Space Administration's Langley Research Center (NASA-LaRC) as an alternative to the currently used manganin wires for electrical connection. The potential for using HTS films as thermal bridges has provided the motivation for the design and the analysis of a spaceflight experiment to evaluate the performance of this superconductive technology in the space environment. The initial efforts were focused on the preliminary design of the experimental system, which allows for the quantitative comparison of superconductive leads with manganin leads, and on the thermal conduction modeling of the proposed system. Most of the HTS materials were indicated to be potential replacements for the manganin wires. In the continuation of this multi-year research, the objectives of this study were to evaluate the sources of heat transfer on the thermal bridges that had been neglected in the preliminary conductive model and then to develop a methodology for the estimation of the thermal conductivities of the HTS thermal bridges in space. The Joule heating created by the electrical current through the manganin wires was incorporated as a volumetric heat source into the manganin conductive model. The radiative heat source on the HTS thermal bridges was determined by performing a separate radiant interchange analysis within a high-Tc superconductor housing area. Both heat sources indicated no significant contribution to the cryogenic heat load, which validates the results obtained in the preliminary conduction model. A methodology was presented for the estimation of the thermal conductivities of the individual HTS thermal bridge materials and the effective thermal conductivities of the composite HTS thermal bridges as functions of temperature. This methodology included a sensitivity analysis and the demonstration of the estimation procedure using simulated data with added random errors. The thermal conductivities could not be estimated as functions of temperature; thus the effective thermal conductivities of the HTS thermal bridges were analyzed as constants.

  7. Estimating the reliability of glycemic index values and potential sources of methodological and biological variability

    USDA-ARS?s Scientific Manuscript database

    Background: The utility of glycemic index (GI) values for chronic disease risk management remains controversial. While absolute GI value determinations for individual foods have been shown to vary significantly in individuals with diabetes, there is a dearth of data on the reliability of GI value de...

  8. Green Net Regional Product for the San Luis Basin, Colorado: An Economic Measure of Regional Sustainability

    EPA Science Inventory

    This paper presents the data sources and methodology used to estimate Green Net Regional Product (GNRP), a green accounting approach, for the San Luis Basin (SLB). GNRP is equal to aggregate consumption minus the depreciation of man-made and natural capital. We measure the move...

  9. Assessment of Infrared Sounder Radiometric Noise from Analysis of Spectral Residuals

    NASA Astrophysics Data System (ADS)

    Dufour, E.; Klonecki, A.; Standfuss, C.; Tournier, B.; Serio, C.; Masiello, G.; Tjemkes, S.; Stuhlmann, R.

    2016-08-01

    For the preparation and performance monitoring of the future generation of hyperspectral infrared sounders dedicated to the precise vertical profiling of the atmospheric state, such as the Meteosat Third Generation hyperspectral InfraRed Sounder, a reliable assessment of the instrument radiometric error covariance matrix is needed. Ideally, an in-flight estimation of the radiometric noise is recommended, as certain sources of noise can be driven by the spectral signature of the observed Earth/atmosphere radiance. Also, unknown correlated noise sources, generally related to incomplete knowledge of the instrument state, can be present, so a characterisation of the noise spectral correlation is also needed. A methodology relying on the analysis of post-retrieval spectral residuals is designed and implemented to derive the covariance matrix in flight on the basis of Earth scene measurements. This methodology is successfully demonstrated using IASI observations as MTG-IRS proxy data, and made it possible to highlight anticipated correlation structures explained by apodization and micro-vibration effects (ghosts). This analysis is corroborated by a parallel estimation based on an IASI black body measurement dataset and the results of an independent micro-vibration model.

  10. Resource management and nonmarket valuation research

    USGS Publications Warehouse

    Douglas, A.J.; Taylor, J.G.

    1999-01-01

    Survey based nonmarket valuation research is often regarded as economics research. However, resource economists need to be aware of and acknowledge the manifold information sources that they employ in order to enhance the policy credibility of their studies. Communication between resource economists and practitioners of allied disciplines including chemistry, civil engineering, sociology, and anthropology are often neglected. Recent resource allocation policy debates have given rise to an extensive discussion of methodological issues that narrow the scope of the subject. The present paper provides a format for the presentation of nonmarket valuation research results that emphasizes the manifold links between economics studies that employ different methodologies to estimate nonmarket resource values. A more robust emphasis on the interlocking features of the different approaches for estimating nonmarket benefits should foster appreciation of the transdisciplinary aspects of the subject.

  11. An Interactive Computer Package for Use with Simulation Models Which Performs Multidimensional Sensitivity Analysis by Employing the Techniques of Response Surface Methodology.

    DTIC Science & Technology

    1984-12-01

    total sum of squares at the center points minus the correction factor for the mean at the center points (SSpe = Y'Y - n1*Ybar^2), where n1 is the number of ... (SSlof = SSres - SSpe). The sum of squares due to pure error estimates sigma^2 and the sum of squares due to lack-of-fit estimates sigma^2 plus a bias term if ... Response Surface Methodology ANOVA -- Source | d.f. | SS | MS: Regression | n | b'X'Y | b'X'Y/n; Residual | m - n | Y'Y - b'X'Y | (Y'Y - b'X'Y)/(m - n); Pure Error | n1 - 1 | Y'Y - n1*Ybar^2 | SSpe/(n1 ...

  12. Rapid earthquake hazard and loss assessment for Euro-Mediterranean region

    NASA Astrophysics Data System (ADS)

    Erdik, Mustafa; Sesetyan, Karin; Demircioglu, Mine; Hancilar, Ufuk; Zulfikar, Can; Cakti, Eser; Kamer, Yaver; Yenidogan, Cem; Tuzun, Cuneyt; Cagnan, Zehra; Harmandar, Ebru

    2010-10-01

    The almost-real time estimation of ground shaking and losses after a major earthquake in the Euro-Mediterranean region was performed in the framework of the Joint Research Activity 3 (JRA-3) component of the EU FP6 Project entitled "Network of Research Infra-structures for European Seismology, NERIES". This project consists of finding the most likely location of the earthquake source by estimating the fault rupture parameters on the basis of rapid inversion of data from on-line regional broadband stations. It also includes an estimation of the spatial distribution of selected site-specific ground motion parameters at engineering bedrock through region-specific ground motion prediction equations (GMPEs) or physical simulation of ground motion. By using the Earthquake Loss Estimation Routine (ELER) software, the multi-level methodology developed for real time estimation of losses is capable of incorporating regional variability and sources of uncertainty stemming from GMPEs, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships.

  13. Solar Radiation Estimated Through Mesoscale Atmospheric Modeling over Northeast Brazil

    NASA Astrophysics Data System (ADS)

    de Menezes Neto, Otacilio Leandro; Costa, Alexandre Araújo; Ramalho, Fernando Pinto; de Maria, Paulo Henrique Santiago

    2009-03-01

    The use of renewable energy sources such as solar, wind, and biomass has increased rapidly in recent years, and solar radiation is a particularly abundant energy source over Northeast Brazil. Proper quantitative knowledge of the incoming solar radiation is of great importance for energy planning in Brazil, serving as a basis for developing future photovoltaic power plants and solar energy exploitation. This work presents a methodology for mapping the incoming solar radiation at ground level for Northeast Brazil, using a mesoscale atmospheric model (Regional Atmospheric Modeling System, RAMS), calibrated and validated with data from the network of automatic surface stations of the State Foundation for Meteorology and Water Resources of Ceará (Fundação Cearense de Meteorologia e Recursos Hídricos, FUNCEME). The results showed that the model exhibits systematic errors, overestimating surface radiation, but that, after proper statistical corrections using a relationship between the model-predicted cloud fraction, the ground-level observed solar radiation, and the incoming solar radiation estimated at the top of the atmosphere, a correlation of 0.92 with a confidence interval of 13.5 W/m2 is found for monthly data. Using this methodology, we estimate an annual average incoming solar radiation over Ceará of 215 W/m2 (maximum in October: 260 W/m2).
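
    A sketch of the kind of statistical correction described above, assuming a simple linear regression of station observations on the model output, the model-predicted cloud fraction, and the top-of-atmosphere radiation; the synthetic data and the linear form are assumptions, not the fit actually used in the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      # Synthetic monthly values: TOA radiation, cloud fraction, observations, model output (W/m2)
      rad_toa = rng.uniform(350.0, 450.0, 120)
      cloud = rng.uniform(0.1, 0.9, 120)
      rad_obs = rad_toa * (0.75 - 0.4 * cloud) + rng.normal(0.0, 10.0, 120)
      rad_model = 1.15 * rad_obs + rng.normal(0.0, 8.0, 120)   # systematic overestimate

      # Linear correction of the model output using cloud fraction and TOA radiation as predictors
      X = np.column_stack([np.ones_like(rad_model), rad_model, cloud, rad_toa])
      coef, *_ = np.linalg.lstsq(X, rad_obs, rcond=None)
      rad_corrected = X @ coef
      print("correlation after correction:", round(np.corrcoef(rad_corrected, rad_obs)[0, 1], 3))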

  14. A GIS-based time-dependent seismic source modeling of Northern Iran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear (fault) sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous research and reports were studied to compile an earthquake/fault catalog that is as complete as possible. All events were transformed to a uniform magnitude scale; duplicate events and dependent shocks were removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used for time-dependent probabilistic seismic hazard analysis of Northern Iran.

  15. [Methodological considerations in the interpretation of serologic screening for hepatitis B virus among blood donors].

    PubMed

    Martelli, C M; de Andrade, A L; das Dores, D; Cardoso, P; Almeida e Silva, S; Zicker, F

    1991-02-01

    Between October 1988 and February 1989, 1,033 voluntary first-time blood donors were screened for hepatitis B infection in five blood banks in Goiânia, Central Brazil. The survey was part of a major study designed to estimate the seroprevalence of HBsAg and anti-HBs and to discuss methodological issues related to prevalence estimation based on data from blood banks. Donors were interviewed, and blood samples were collected and tested for HBsAg and anti-HBs by ELISA. Prevalences of 1.9% and 10.9% were obtained for HBsAg and anti-HBs, respectively, and no statistical difference was found between the sexes. Prevalence of anti-HBs increased with age (chi-square for trend = 7.9, p = 0.004). The positive predictive value and sensitivity of a history of jaundice or hepatitis reported in the interview for detecting seropositives were 13.6% and 2.2%, respectively. Methodological issues, including the internal and external validity of HBV prevalence estimated among blood donors, are discussed. The potential usefulness of blood banks as a source of morbidity information for hepatitis B virus surveillance is stressed.

  16. An Infrared Camera Simulation for Estimating Spatial Temperature Profiles and Signal-to-Noise Ratios of an Airborne Laser-Illuminated Target

    DTIC Science & Technology

    2007-06-01

    of SNR, she incorporated the effects that an InGaAs photovoltaic detector has in producing the signal, along with the photon, Johnson, and shot noises ... the photovoltaic FPA detector modeled? • What detector noise sources limit the computed signal? 3.1 Modeling Methodology: Two aspects in the IR camera ... Another shot noise source in photovoltaic detectors is dark current. This current represents the current flowing in the detector when no optical radiation

  17. A spatio-temporal model for probabilistic seismic hazard zonation of Tehran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2013-08-01

    A precondition for all disaster management steps, building damage prediction, and construction code development is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site, considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the former incorporate small and medium events, the latter take into account only the large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area, and the PGA variations are mostly controlled by the shear-wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities, especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, the emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
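
    Under the usual Poisson assumption, the three probability levels quoted above correspond to fixed annual exceedance rates; the short computation below is standard hazard-map arithmetic, not code from the study.

      import math

      # Return period T_R implied by exceedance probability P over exposure time T (years),
      # assuming Poissonian occurrence: P = 1 - exp(-T / T_R)  =>  T_R = -T / ln(1 - P)
      T = 50.0
      for P in (0.10, 0.05, 0.02):
          T_R = -T / math.log(1.0 - P)
          print(f"{P:.0%} in {T:.0f} years -> return period of about {T_R:.0f} years")
      # Gives roughly 475, 975 and 2475 years for the three hazard maps.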

  18. Updating the USGS seismic hazard maps for Alaska

    USGS Publications Warehouse

    Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.

    2015-01-01

    The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.

  19. Costs of addressing heroin addiction in Malaysia and 32 comparable countries worldwide.

    PubMed

    Ruger, Jennifer Prah; Chawarski, Marek; Mazlan, Mahmud; Luekens, Craig; Ng, Nora; Schottenfeld, Richard

    2012-04-01

    Develop and apply new costing methodologies to estimate costs of opioid dependence treatment in countries worldwide. Micro-costing methodology developed and data collected during randomized controlled trial (RCT) involving 126 patients (July 2003-May 2005) in Malaysia. Gross-costing methodology developed to estimate costs of treatment replication in 32 countries with data collected from publicly available sources. Fixed, variable, and societal cost components of Malaysian RCT micro-costed and analytical framework created and employed for gross-costing in 32 countries selected by three criteria relative to Malaysia: major heroin problem, geographic proximity, and comparable gross domestic product (GDP) per capita. Medication, and urine and blood testing accounted for the greatest percentage of total costs for both naltrexone (29-53 percent) and buprenorphine (33-72 percent) interventions. In 13 countries, buprenorphine treatment could be provided for under $2,000 per patient. For all countries except United Kingdom and Singapore, incremental costs per person were below $1,000 when comparing buprenorphine to naltrexone. An estimated 100 percent of opiate users in Cambodia and Lao People's Democratic Republic could be treated for $8 and $30 million, respectively. Buprenorphine treatment can be provided at low cost in countries across the world. This study's new costing methodologies provide tools for health systems worldwide to determine the feasibility and cost of similar interventions. © Health Research and Educational Trust.

  20. Precipitable water vapour content from ESR/SKYNET sun-sky radiometers: validation against GNSS/GPS and AERONET over three different sites in Europe

    NASA Astrophysics Data System (ADS)

    Campanelli, Monica; Mascitelli, Alessandra; Sanò, Paolo; Diémoz, Henri; Estellés, Victor; Federico, Stefano; Iannarelli, Anna Maria; Fratarcangeli, Francesca; Mazzoni, Augusto; Realini, Eugenio; Crespi, Mattia; Bock, Olivier; Martínez-Lozano, Jose A.; Dietrich, Stefano

    2018-01-01

    The estimation of precipitable water vapour content (W) with high temporal and spatial resolution is of great interest to both meteorological and climatological studies. Several methodologies based on remote sensing techniques have recently been developed in order to obtain accurate and frequent measurements of this atmospheric parameter. Among them, the relatively low cost and easy deployment of sun-sky radiometers, or sun photometers, operating in several international networks, has allowed the development of automatic estimations of W from these instruments with high temporal resolution. However, the main difficulty of this methodology is the estimation of the sun-photometric calibration parameters. The objective of this paper is to validate a new methodology based on the hypothesis that the calibration parameters characterizing the atmospheric transmittance at 940 nm depend on the vertical profiles of temperature, air pressure and moisture typical of each measurement site. To obtain the calibration parameters, some simultaneous seasonal measurements of W from independent sources, taken over a large range of solar zenith angles and covering a wide range of W, are needed. In this work, yearly GNSS/GPS datasets were used to obtain a table of photometric calibration constants, and the methodology was applied and validated at three European ESR-SKYNET network sites characterized by different atmospheric and climatic conditions: Rome, Valencia and Aosta. Results were validated against the GNSS/GPS and AErosol RObotic NETwork (AERONET) W estimations. In both validations the agreement was very high, with a percentage RMSD of about 6%, 13% and 8% for the GPS intercomparison at Rome, Aosta and Valencia, respectively, and of 8% for the AERONET comparison at Valencia. Analysing the results by W classes, the present methodology was found to clearly improve W estimation at low W content when compared against AERONET in terms of percentage bias, improving the agreement with GPS (considered the reference) from a percentage bias of 5.76 to 0.52.

  1. The Cost of Crime to Society: New Crime-Specific Estimates for Policy and Program Evaluation

    PubMed Central

    French, Michael T.; Fang, Hai

    2010-01-01

    Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than ten years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime. PMID:20071107

  2. The cost of crime to society: new crime-specific estimates for policy and program evaluation.

    PubMed

    McCollister, Kathryn E; French, Michael T; Fang, Hai

    2010-04-01

    Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than 10 years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  3. Verifying the UK N_{2}O emission inventory with tall tower measurements

    NASA Astrophysics Data System (ADS)

    Carnell, Ed; Meneguz, Elena; Skiba, Ute; Misselbrook, Tom; Cardenas, Laura; Arnold, Tim; Manning, Alistair; Dragosits, Ulli

    2016-04-01

    Nitrous oxide (N2O) is a key greenhouse gas (GHG), with a global warming potential ˜300 times greater than that of CO2. N2O is emitted from a variety of sources, predominantly from agriculture. Annual UK emission estimates are reported, to comply with government commitments under the United Nations Framework Convention on Climate Change (UNFCCC). The UK N2O inventory follows internationally agreed protocols and emission estimates are derived by applying emission factors to estimates of (anthropogenic) emission sources. This approach is useful for comparing anthropogenic emissions from different countries, but does not capture regional differences and inter-annual variability associated with environmental factors (such as climate and soils) and agricultural management. In recent years, the UK inventory approach has been refined to include regional information into its emissions estimates (e.g. agricultural management data), in an attempt to reduce uncertainty. This study attempts to assess the difference between current published inventory methodology (default IPCC methodology) and a revised approach, which incorporates the latest thinking, using data from recent work. For 2013, emission estimates made using the revised approach were 30 % lower than those made using default IPCC methodology, due to the use of lower emission factors suggested by recent projects (www.ghgplatform.org.uk, Defra projects: AC0116, AC0213 and MinNO). The 2013 emissions estimates were disaggregated on a monthly basis using agricultural management (e.g. sowing dates), climate data and soil properties. The temporally disaggregated emission maps were used as input to the Met Office atmospheric dispersion model NAME, for comparison with measured N2O concentrations, at three observation stations (Tacolneston, E England; Ridge Hill, W England; Mace Head, W Ireland) in the UK DECC network (Deriving Emissions linked to Climate Change). The Mace Head site, situated on the west coast of Ireland, was used to establish baseline concentrations. The trends in the modelled data were found to fit with the observational data trends, with concentration peaks coinciding with periods of fertiliser application and land spreading of manures. The model run using the 'experimental' approach was found to give a closer agreement with the observed data.

  4. A Resource Paper on the Relative Cost of Special Education.

    ERIC Educational Resources Information Center

    Osher, Trina; And Others

    This resource paper describes two recent studies and one report on special education costs and the methodology used in their analyses. Each study and its data source are summarized with a short discussion on the quality of the data and its usefulness and limitations in generating reliable cost estimates for special education. The paper summarizes…

  5. Model documentation: Electricity Market Module, Electricity Fuel Dispatch Submodule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This report documents the objectives, analytical approach and development of the National Energy Modeling System Electricity Fuel Dispatch Submodule (EFD), a submodule of the Electricity Market Module (EMM). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  6. Black carbon emissions in Russia: A critical review

    DOE PAGES

    Evans, Meredydd; Kholod, Nazar; Kuklinski, Teresa; ...

    2017-05-18

    Here, this study presents a comprehensive review of estimated black carbon (BC) emissions in Russia from a range of studies. Russia has an important role regarding BC emissions given the extent of its territory above the Arctic Circle, where BC emissions have a particularly pronounced effect on the climate. We assess underlying methodologies and data sources for each major emissions source based on their level of detail, accuracy and extent to which they represent current conditions. We then present reference values for each major emissions source. In the case of flaring, the study presents new estimates drawing on data on Russia's associated petroleum gas and the most recent satellite data on flaring. We also present estimates of organic carbon (OC) for each source, either based on the reference studies or from our own calculations. In addition, the study provides uncertainty estimates for each source. Total BC emissions are estimated at 688 Gg in 2014, with an uncertainty range 401 Gg-1453 Gg, while OC emissions are 9224 Gg with uncertainty ranging between 5596 Gg and 14,736 Gg. Wildfires dominated and contributed about 83% of the total BC emissions; however, the effect on radiative forcing is mitigated in part by OC emissions. We also present an adjusted estimate of Arctic forcing from Russia's BC and OC emissions. In recent years, Russia has pursued policies to reduce flaring and limit particulate emissions from on-road transport, both of which appear to significantly contribute to the lower emissions and forcing values found in this study.

  7. Black carbon emissions in Russia: A critical review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Kholod, Nazar; Kuklinski, Teresa

    Here, this study presents a comprehensive review of estimated black carbon (BC) emissions in Russia from a range of studies. Russia has an important role regarding BC emissions given the extent of its territory above the Arctic Circle, where BC emissions have a particularly pronounced effect on the climate. We assess underlying methodologies and data sources for each major emissions source based on their level of detail, accuracy and extent to which they represent current conditions. We then present reference values for each major emissions source. In the case of flaring, the study presents new estimates drawing on data on Russia's associated petroleum gas and the most recent satellite data on flaring. We also present estimates of organic carbon (OC) for each source, either based on the reference studies or from our own calculations. In addition, the study provides uncertainty estimates for each source. Total BC emissions are estimated at 688 Gg in 2014, with an uncertainty range 401 Gg-1453 Gg, while OC emissions are 9224 Gg with uncertainty ranging between 5596 Gg and 14,736 Gg. Wildfires dominated and contributed about 83% of the total BC emissions; however, the effect on radiative forcing is mitigated in part by OC emissions. We also present an adjusted estimate of Arctic forcing from Russia's BC and OC emissions. In recent years, Russia has pursued policies to reduce flaring and limit particulate emissions from on-road transport, both of which appear to significantly contribute to the lower emissions and forcing values found in this study.

  8. Approaches to Children’s Exposure Assessment: Case Study with Diethylhexylphthalate (DEHP)

    PubMed Central

    Ginsberg, Gary; Ginsberg, Justine; Foos, Brenda

    2016-01-01

    Children’s exposure assessment is a key input into epidemiology studies, risk assessment and source apportionment. The goals of this article are to describe a methodology for children’s exposure assessment that can be used for these purposes and to apply the methodology to source apportionment for the case study chemical, diethylhexylphthalate (DEHP). A key feature is the comparison of total (aggregate) exposure calculated via a pathways approach to that derived from a biomonitoring approach. The 4-step methodology and its results for DEHP are: (1) Prioritization of life stages and exposure pathways, with pregnancy, breast-fed infants, and toddlers the focus of the case study and pathways selected that are relevant to these groups; (2) Estimation of pathway-specific exposures by life stage wherein diet was found to be the largest contributor for pregnant women, breast milk and mouthing behavior for the nursing infant and diet, house dust, and mouthing for toddlers; (3) Comparison of aggregate exposure by pathways vs biomonitoring-based approaches wherein good concordance was found for toddlers and pregnant women providing confidence in the exposure assessment; (4) Source apportionment in which DEHP presence in foods, children’s products, consumer products and the built environment are discussed with respect to early life mouthing, house dust and dietary exposure. A potential fifth step of the method involves the calculation of exposure doses for risk assessment which is described but outside the scope for the current case study. In summary, the methodology has been used to synthesize the available information to identify key sources of early life exposure to DEHP. PMID:27376320

  9. Data compilation, synthesis, and calculations used for organic-carbon storage and inventory estimates for mineral soils of the Mississippi River Basin

    USGS Publications Warehouse

    Buell, Gary R.; Markewich, Helaine W.

    2004-01-01

    U.S. Geological Survey investigations of environmental controls on carbon cycling in soils and sediments of the Mississippi River Basin (MRB), an area of 3.3 x 10^6 square kilometers (km^2), have produced an assessment tool for estimating the storage and inventory of soil organic carbon (SOC) by using soil-characterization data from Federal, State, academic, and literature sources. The methodology is based on the linkage of site-specific SOC data (pedon data) to the soil-association map units of the U.S. Department of Agriculture State Soil Geographic (STATSGO) and Soil Survey Geographic (SSURGO) digital soil databases in a geographic information system. The collective pedon database assembled from individual sources presently contains 7,321 pedon records representing 2,581 soil series. SOC storage, in kilograms per square meter (kg/m^2), is calculated for each pedon at standard depth intervals from 0 to 10, 10 to 20, 20 to 50, and 50 to 100 centimeters. The site-specific storage estimates are then regionalized to produce national-scale (STATSGO) and county-scale (SSURGO) maps of SOC to a specified depth. Based on this methodology, the mean SOC storage for the top meter of mineral soil in the MRB is approximately 10 kg/m^2, and the total inventory is approximately 32.3 Pg (1 petagram = 10^9 metric tons).
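
    A minimal sketch of the per-pedon storage arithmetic implied above: bulk density times organic-carbon fraction integrated over the standard depth intervals, with a coarse-fragment correction. The layer values and the correction are illustrative assumptions, not the USGS procedure or data.

      # Each layer: (top_cm, bottom_cm, bulk_density_g_cm3, organic_C_fraction, coarse_frag_fraction)
      layers = [(0, 10, 1.2, 0.020, 0.05),
                (10, 20, 1.3, 0.015, 0.05),
                (20, 50, 1.4, 0.008, 0.10),
                (50, 100, 1.5, 0.004, 0.10)]

      def soc_storage_kg_m2(layers):
          """Sum soil-organic-carbon storage over depth layers, in kg C per square meter."""
          total = 0.0
          for top, bottom, bulk_density, oc_frac, coarse_frac in layers:
              thickness_cm = bottom - top
              g_per_cm2 = thickness_cm * bulk_density * oc_frac * (1.0 - coarse_frac)
              total += g_per_cm2 * 10.0      # 1 g/cm2 equals 10 kg/m2
          return total

      print(round(soc_storage_kg_m2(layers), 1), "kg C/m2 in the top meter")   # ~9.9 here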

  10. Estimating Equivalency of Explosives Through A Thermochemical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maienschein, J L

    2002-07-08

    The Cheetah thermochemical computer code provides an accurate method for estimating the TNT equivalency of any explosive, evaluated either with respect to peak pressure or to the quasi-static pressure at long time in a confined volume. Cheetah calculates the detonation energy and heat of combustion for virtually any explosive (pure or formulation). Comparing the detonation energy of an explosive with that of TNT allows estimation of the TNT equivalency with respect to peak pressure, while comparison of the heat of combustion allows estimation of TNT equivalency with respect to quasi-static pressure. We discuss the methodology, present results for many explosives, and show comparisons with equivalency data from other sources.
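
    The comparison described above reduces to ratios of energies; the sketch below uses rough literature-style values purely as placeholders (they are not Cheetah output) to show the arithmetic.

      # Placeholder energies in MJ/kg, for illustration only (not Cheetah results)
      detonation_energy = {"TNT": 4.2, "RDX": 5.6}
      heat_of_combustion = {"TNT": 15.0, "RDX": 9.5}

      def tnt_equivalent_mass(explosive, mass_kg, basis="peak"):
          """TNT-equivalent mass: the peak-pressure basis compares detonation energies,
          the quasi-static (confined, long-time) basis compares heats of combustion."""
          table = detonation_energy if basis == "peak" else heat_of_combustion
          return mass_kg * table[explosive] / table["TNT"]

      print(round(tnt_equivalent_mass("RDX", 1.0, "peak"), 2))          # ~1.33 kg TNT
      print(round(tnt_equivalent_mass("RDX", 1.0, "quasi-static"), 2))  # ~0.63 kg TNT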

  11. Nitrogen excretion factors of livestock in the European Union: a review.

    PubMed

    Velthof, Gerard L; Hou, Yong; Oenema, Oene

    2015-12-01

    Livestock manures are major sources of nutrients, used for the fertilisation of cropland and grassland. Accurate estimates of the amounts of nutrients in livestock manures are required for nutrient management planning, but also for estimating nitrogen (N) budgets and emissions to the environment. Here we report on N excretion factors for a range of animal categories in policy reports by member states of the European Union (EU). Nitrogen excretion is defined in this paper as the total amount of N excreted by livestock per year as urine and faeces. We discuss the guidelines and methodologies for the estimation of N excretion factors by the EU Nitrates Directive, the OECD/Eurostat gross N balance guidebook, the EMEP/EEA Guidebook and the IPCC Guidelines. Our results show that N excretion factors for dairy cattle, other cattle, pigs, laying hens, broilers, sheep, and goats differ significantly between policy reports and between countries. Part of these differences may be related to differences in animal production (e.g. production of meat, milk and eggs), size/weight of the animals, and feed composition, but partly also to differences in the aggregation of livestock categories and estimation procedures. The methodologies and data used by member states are often not well described. There is a need for a common, harmonised methodology and procedure for the estimation of N excretion factors, to arrive at a common basis for the estimation of the production of manure N and N balances, and emissions of ammonia (NH3) and nitrous oxide (N2O) across the EU. © 2015 Society of Chemical Industry.

  12. Nighttime image dehazing using local atmospheric selection rule and weighted entropy for visible-light systems

    NASA Astrophysics Data System (ADS)

    Park, Dubok; Han, David K.; Ko, Hanseok

    2017-05-01

    Optical imaging systems are often degraded by scattering due to atmospheric particles, such as haze, fog, and mist. Imaging under nighttime haze conditions may suffer especially from the glows near active light sources as well as scattering. We present a methodology for nighttime image dehazing based on an optical imaging model which accounts for varying light sources and their glow. First, glow effects are decomposed using relative smoothness. Atmospheric light is then estimated by assessing global and local atmospheric light using a local atmospheric selection rule. The transmission of light is then estimated by maximizing an objective function designed on the basis of weighted entropy. Finally, haze is removed using two estimated parameters, namely, atmospheric light and transmission. The visual and quantitative comparison of the experimental results with the results of existing state-of-the-art methods demonstrates the significance of the proposed approach.
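
    The final removal step described above is typically an inversion of the standard atmospheric scattering model once atmospheric light and transmission have been estimated; the sketch below shows that generic inversion only and is not the authors' full formulation, which also decomposes glow and handles spatially varying light sources.

      import numpy as np

      def dehaze(hazy, atmospheric_light, transmission, t_min=0.1):
          """Invert I = J * t + A * (1 - t) to recover the scene radiance J.
          hazy: (H, W, 3) image in [0, 1]; atmospheric_light: (3,); transmission: (H, W)."""
          t = np.clip(transmission, t_min, 1.0)[..., None]     # guard against division by ~0
          A = atmospheric_light[None, None, :]
          J = (hazy - A) / t + A
          return np.clip(J, 0.0, 1.0)

      # Toy usage with synthetic inputs
      hazy = np.random.rand(4, 4, 3)
      restored = dehaze(hazy, np.array([0.80, 0.80, 0.85]), np.full((4, 4), 0.6))
      print(restored.shape)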

  13. Single-chip source-free terahertz spectroscope across 0.04-0.99 THz: combining sub-wavelength near-field sensing and regression analysis.

    PubMed

    Wu, Xue; Sengupta, Kaushik

    2018-03-19

    This paper demonstrates a methodology to miniaturize THz spectroscopes into a single silicon chip by eliminating traditional solid-state architectural components such as complex tunable THz and optical sources, nonlinear mixing and amplifiers. The proposed method achieves this by extracting incident THz spectral signatures from the surface of an on-chip antenna itself. The information is sensed through the spectrally-sensitive 2D distribution of the impressed current surface under the THz incident field. By converting the antenna from a single-port to a massively multi-port architecture with integrated electronics and deep subwavelength sensing, THz spectral estimation is converted into a linear estimation problem. We employ rigorous regression techniques and analysis to demonstrate a single silicon chip system operating at room temperature across 0.04-0.99 THz with 10 MHz accuracy in spectrum estimation of THz tones across the entire spectrum.
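
    Casting spectrum recovery as a linear estimation problem, as the abstract describes, amounts to solving a (regularized) least-squares system that maps the multi-port readings back to spectral power; the port-response matrix below is random and purely illustrative, standing in for the chip's measured calibration.

      import numpy as np

      rng = np.random.default_rng(3)
      n_ports, n_bins = 64, 96                     # sensing ports and spectral bins (illustrative)

      S = rng.normal(size=(n_ports, n_bins))       # port response versus frequency (calibration)
      spectrum_true = np.zeros(n_bins)
      spectrum_true[[12, 40, 77]] = [1.0, 0.5, 0.8]            # a few incident THz tones

      y = S @ spectrum_true + rng.normal(0.0, 0.01, n_ports)   # measured port readings

      # Ridge-regularized least squares: solve (S^T S + lam * I) x = S^T y
      lam = 1e-2
      x_hat = np.linalg.solve(S.T @ S + lam * np.eye(n_bins), S.T @ y)
      print("recovered strongest bins:", sorted(np.argsort(x_hat)[-3:].tolist()))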

  14. Detector Position Estimation for PET Scanners.

    PubMed

    Pierce, Larry; Miyaoka, Robert; Lewellen, Tom; Alessio, Adam; Kinahan, Paul

    2012-06-11

    Physical positioning of scintillation crystal detector blocks in Positron Emission Tomography (PET) scanners is not always exact. We test a proof of concept methodology for the determination of the six degrees of freedom for detector block positioning errors by utilizing a rotating point source over stepped axial intervals. To test our method, we created computer simulations of seven Micro Crystal Element Scanner (MiCES) PET systems with randomized positioning errors. The computer simulations show that our positioning algorithm can estimate the positions of the block detectors to an average of one-seventh of the crystal pitch tangentially, and one-third of the crystal pitch axially. Virtual acquisitions of a point source grid and a distributed phantom show that our algorithm improves both the quantitative and qualitative accuracy of the reconstructed objects. We believe this estimation algorithm is a practical and accurate method for determining the spatial positions of scintillation detector blocks.

  15. Air quality assessment of benzo(a)pyrene from asphalt plant operation.

    PubMed

    Gibson, Nigel; Stewart, Robert; Rankin, Erika

    2012-01-01

    A study has been carried out to assess the contribution of Polycyclic Aromatic Hydrocarbons (PAHs) from asphalt plant operation, utilising Benzo(a)pyrene (BaP) as a marker for PAHs, to the background air concentration around asphalt plants in the UK. The purpose behind this assessment was to determine whether the use of published BaP emission factors based on the US Environmental Protection Agency (EPA) methodology is appropriate in the context of the UK, especially as the EPA methodology does not give BaP emission factors for all activities. The study also aimed to improve the overall understanding of BaP emissions from asphalt plants in the UK, and to determine whether site location and operation are likely to influence the contribution of PAHs to ambient air quality. In order to establish whether the use of US EPA emission factors is appropriate, the study compared the BaP emissions measured and the calculated emission rates from two UK sites with those estimated using US EPA emission factors. A dispersion modelling exercise was carried out to show the BaP contribution to ambient air around each site. This study showed that, as the US EPA methodology does not provide factors for all emission sources on asphalt plants, their use may give rise to over- or under-estimation, particularly where sources of BaP are temperature dependent. However, the contribution of both the estimated and measured BaP concentrations to the environmental concentration was low, averaging about 0.05 ng/m3 at the boundary of the sites, which is well below the UK BaP assessment threshold of 0.25 ng/m3. Therefore, BaP concentrations, and hence PAH concentrations, from similar asphalt plant operations are unlikely to contribute negatively to ambient air quality.

  16. Standardised survey method for identifying catchment risks to water quality.

    PubMed

    Baker, D L; Ferguson, C M; Chier, P; Warnecke, M; Watkinson, A

    2016-06-01

    This paper describes the development and application of a systematic methodology to identify and quantify risks in drinking water and recreational catchments. The methodology assesses microbial and chemical contaminants from both diffuse and point sources within a catchment using Escherichia coli, protozoan pathogens and chemicals (including fuel and pesticides) as index contaminants. Hazard source information is gathered by a defined sanitary survey process involving use of a software tool which groups hazards into six types: sewage infrastructure, on-site sewage systems, industrial, stormwater, agriculture and recreational sites. The survey estimates the likelihood of the site affecting catchment water quality, and the potential consequences, enabling the calculation of risk for individual sites. These risks are integrated to calculate a cumulative risk for each sub-catchment and the whole catchment. The cumulative risks process accounts for the proportion of potential input sources surveyed and for transfer of contaminants from upstream to downstream sub-catchments. The output risk matrices show the relative risk sources for each of the index contaminants, highlighting those with the greatest impact on water quality at a sub-catchment and catchment level. Verification of the sanitary survey assessments and prioritisation is achieved by comparison with water quality data and microbial source tracking.
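
    A toy illustration of likelihood-consequence scoring and sub-catchment aggregation in the spirit of the survey tool described above; the 1-5 scales, the multiplicative risk score, and the simple coverage correction are assumptions for illustration, not the published tool's actual rules.

      # Surveyed hazard sites per sub-catchment: (hazard_type, likelihood 1-5, consequence 1-5)
      sites = {
          "subcatchment_A": [("on-site sewage", 4, 3), ("agriculture", 3, 4)],
          "subcatchment_B": [("stormwater", 2, 2), ("industrial", 1, 5)],
      }
      surveyed_fraction = {"subcatchment_A": 0.8, "subcatchment_B": 0.5}

      def subcatchment_risk(records, coverage):
          """Sum site risks (likelihood x consequence), scaled up for unsurveyed sources."""
          raw = sum(likelihood * consequence for _, likelihood, consequence in records)
          return raw / coverage

      for name, records in sites.items():
          print(name, round(subcatchment_risk(records, surveyed_fraction[name]), 1))

      catchment_risk = sum(subcatchment_risk(r, surveyed_fraction[n]) for n, r in sites.items())
      print("catchment total:", round(catchment_risk, 1))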

  17. Non-contact cardiac pulse rate estimation based on web-camera

    NASA Astrophysics Data System (ADS)

    Wang, Yingzhi; Han, Tailin

    2015-12-01

    In this paper, we introduce a new methodology for non-contact cardiac pulse rate estimation based on imaging photoplethysmography (iPPG) and blind source separation. This novel approach can be applied to colour video recordings of the human face and is based on automatic face tracking along with blind source separation of the colour signal into its RGB components. First, the traces extracted from the video are pre-processed, for example by normalization and sphering. The cardiac pulse rate is then estimated by spectrum analysis of the components obtained with Independent Component Analysis (ICA) using the JADE algorithm. Using Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam with a commercial pulse oximetry sensor and achieved high accuracy and correlation. The root mean square error of the estimates is 2.06 bpm, which indicates that the algorithm can realize non-contact measurement of the cardiac pulse rate.
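
    A compact sketch of the ICA-plus-spectrum pipeline outlined above, using FastICA from scikit-learn in place of the JADE algorithm (which would need a separate implementation) and synthetic RGB traces in place of real face-tracking output; the frame rate and pulse frequency are made up.

      import numpy as np
      from sklearn.decomposition import FastICA

      fs = 30.0                                   # video frame rate, Hz
      t = np.arange(0, 30, 1.0 / fs)              # 30 s of synthetic RGB traces
      pulse = np.sin(2 * np.pi * 1.2 * t)         # 1.2 Hz cardiac component (~72 bpm)
      rng = np.random.default_rng(4)
      rgb = np.column_stack([g * pulse + rng.normal(0.0, 0.5, t.size) for g in (0.3, 1.0, 0.6)])

      # Normalize the channel traces, then separate sources (FastICA stands in for JADE)
      rgb = (rgb - rgb.mean(axis=0)) / rgb.std(axis=0)
      sources = FastICA(n_components=3, random_state=0).fit_transform(rgb)

      # Keep the component with the strongest spectral peak in a plausible cardiac band
      freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
      band = (freqs > 0.75) & (freqs < 4.0)       # 45-240 bpm
      best_bpm, best_power = 0.0, -1.0
      for s in sources.T:
          spectrum = np.abs(np.fft.rfft(s * np.hanning(s.size)))
          i = np.argmax(spectrum[band])
          if spectrum[band][i] > best_power:
              best_power, best_bpm = spectrum[band][i], 60.0 * freqs[band][i]
      print(f"estimated pulse rate: about {best_bpm:.1f} bpm")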

  18. Lucid dreaming incidence: A quality effects meta-analysis of 50 years of research.

    PubMed

    Saunders, David T; Roe, Chris A; Smith, Graham; Clegg, Helen

    2016-07-01

    We report a quality effects meta-analysis on studies from the period 1966-2016 measuring either (a) lucid dreaming prevalence (one or more lucid dreams in a lifetime); (b) frequent lucid dreaming (one or more lucid dreams in a month) or both. A quality effects meta-analysis allows for the minimisation of the influence of study methodological quality on overall model estimates. Following sensitivity analysis, a heterogeneous lucid dreaming prevalence data set of 34 studies yielded a mean estimate of 55%, 95% C. I. [49%, 62%] for which moderator analysis showed no systematic bias for suspected sources of variability. A heterogeneous lucid dreaming frequency data set of 25 studies yielded a mean estimate of 23%, 95% C. I. [20%, 25%], moderator analysis revealed no suspected sources of variability. These findings are consistent with earlier estimates of lucid dreaming prevalence and frequent lucid dreaming in the population but are based on more robust evidence. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Substructuring of multibody systems for numerical transfer path analysis in internal combustion engines

    NASA Astrophysics Data System (ADS)

    Acri, Antonio; Offner, Guenter; Nijman, Eugene; Rejlek, Jan

    2016-10-01

    Noise legislation and increasing customer demands determine the Noise, Vibration and Harshness (NVH) development of modern commercial vehicles. In order to meet the stringent legislative requirements for vehicle noise emission, exact knowledge of all vehicle noise sources and their acoustic behavior is required. Transfer path analysis (TPA) is a fairly well established technique for estimating and ranking individual low-frequency noise or vibration contributions via the different transmission paths. Transmission paths from different sources to target points of interest, and their contributions, can be analyzed by applying TPA. This technique is applied to test measurements, which only become available on prototypes at the end of the design process. In order to overcome the limits of TPA, a numerical transfer path analysis methodology based on the substructuring of a multibody system is proposed in this paper. Being based on numerical simulation, this methodology can be applied from the first steps of the design process. The main goal of the proposed methodology is to obtain information on the contributions of noise sources in a dynamic system, allowing for multiple forces acting on the system simultaneously. The contributions of these forces are investigated with particular focus on distributed or moving forces. In this paper, the mathematical basics of the proposed methodology and its advantages in comparison with TPA are discussed. A dynamic system is then investigated with a combination of two methods. Being based on the dynamic substructuring (DS) of the investigated model, the proposed methodology requires the evaluation of the contact forces at the interfaces, which are computed with a flexible multi-body dynamic (FMBD) simulation. The structure-borne noise paths are then computed with the wave based method (WBM). As an example application, a 4-cylinder engine is investigated and the proposed methodology is applied to the engine block. The aim is to obtain accurate and clear relationships between the excitations and responses of the simulated dynamic system, analyzing the noise and vibration sources inside a car engine and showing the main advantages of a numerical methodology.

  20. Accuracy assessment: The statistical approach to performance evaluation in LACIE. [Great Plains corridor, United States

    NASA Technical Reports Server (NTRS)

    Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.

  1. Yield Estimation for Semipalatinsk Underground Nuclear Explosions Using Seismic Surface-wave Observations at Near-regional Distances

    NASA Astrophysics Data System (ADS)

    Adushkin, V. V.

    - A statistical procedure is described for estimating the yields of underground nuclear tests at the former Soviet Semipalatinsk test site using the peak amplitudes of short-period surface waves observed at near-regional distances (Δ < 150 km) from these explosions. This methodology is then applied to data recorded from a large sample of the Semipalatinsk explosions, including the Soviet JVE explosion of September 14, 1988, and it is demonstrated that it provides seismic estimates of explosion yield which are typically within 20% of the yields determined for these same explosions using more accurate, non-seismic techniques based on near-source observations.
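
    Amplitude-based yield estimation of this kind typically rests on a log-linear calibration between amplitude and yield fitted to events of known yield; the sketch below shows that generic regression-and-inversion step with made-up calibration numbers, not the paper's actual data or coefficients.

      import numpy as np

      # Calibration events: known yields (kt) and peak short-period surface-wave amplitudes
      yields_kt = np.array([2.0, 10.0, 25.0, 60.0, 120.0])       # placeholder values
      amplitudes = np.array([0.8, 3.5, 8.0, 17.0, 32.0])         # placeholder values

      # Fit log10(A) = a + b * log10(Y), then invert the relation for an unknown event
      b, a = np.polyfit(np.log10(yields_kt), np.log10(amplitudes), 1)

      def estimate_yield_kt(amplitude):
          return 10.0 ** ((np.log10(amplitude) - a) / b)

      print(f"estimated yield: about {estimate_yield_kt(12.0):.0f} kt")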

  2. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    Earthquake source parameters underpin several aspects of nuclear explosion monitoring. Such aspects are: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by a grid search over strike, dip, rake and depth, and the seismic moment, or equivalently the moment magnitude MW, is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).

  3. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  4. Low-Temperature Hydrothermal Resource Potential

    DOE Data Explorer

    Katherine Young

    2016-06-30

    Compilation of data (spreadsheet and shapefiles) for several low-temperature resource types, including isolated springs and wells, delineated area convection systems, sedimentary basins and coastal plains sedimentary systems. For each system, we include estimates of the accessible resource base, mean extractable resource and beneficial heat. Data compiled from USGS and other sources. The paper (submitted to GRC 2016) describing the methodology and analysis is also included.

  5. The American High School Graduation Rate: Trends and Levels. NBER Working Paper No. 13670

    ERIC Educational Resources Information Center

    Heckman, James J.; LaFontaine, Paul A.

    2007-01-01

    This paper uses multiple data sources and a unified methodology to estimate the trends and levels of the U.S. high school graduation rate. Correcting for important biases that plague previous calculations, we establish that (1) the true high school graduation rate is substantially lower than the official rate issued by the National Center for…

  6. Quantile uncertainty and value-at-risk model risk.

    PubMed

    Alexander, Carol; Sarabia, José María

    2012-08-01

    This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid 1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks. © 2012 Society for Risk Analysis.

  7. Estimation of single-year-of-age counts of live births, fetal losses, abortions, and pregnant women for counties of Texas.

    PubMed

    Singh, Bismark; Meyers, Lauren Ancel

    2017-05-08

    We provide a methodology for estimating counts of single-year-of-age live-births, fetal-losses, abortions, and pregnant women from aggregated age-group counts. As a case study, we estimate counts for the 254 counties of Texas for the year 2010. We use interpolation to estimate counts of live-births, fetal-losses, and abortions by women of each single-year-of-age for all Texas counties. We then use these counts to estimate the numbers of pregnant women for each single-year-of-age, which were previously available only in aggregate. To support public health policy and planning, we provide single-year-of-age estimates of live-births, fetal-losses, abortions, and pregnant women for all Texas counties in the year 2010, as well as the estimation method source code.
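
    One common way to disaggregate age-group counts into single years of age, in the spirit of the interpolation step described, is to interpolate the cumulative count across group boundaries and difference it; linear interpolation (a uniform split within each group) is used below, and the age groups and counts are invented for illustration.

      import numpy as np

      # Aggregated live-birth counts by maternal age group (illustrative numbers only)
      groups = [(15, 19, 500), (20, 24, 1800), (25, 29, 2200), (30, 34, 1500), (35, 39, 600)]

      # Cumulative counts at group edges (upper edge exclusive), then linear interpolation
      edges, cum = [groups[0][0]], [0.0]
      for low, high, count in groups:
          edges.append(high + 1)
          cum.append(cum[-1] + count)

      ages = np.arange(groups[0][0], groups[-1][1] + 1)
      cum_below_next_age = np.interp(ages + 1, edges, cum)
      single_year = np.diff(np.concatenate([[0.0], cum_below_next_age]))
      print(dict(zip(ages.tolist(), single_year.round(1).tolist())))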

  8. A methodology to estimate uncertainty for emission projections through sensitivity analysis.

    PubMed

    Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación

    2015-04-01

    Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gas and air pollutant emissions. Examples of the methodology's application to two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend under a low economic-growth perspective and against official figures for 2010, showing very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provides useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information for deriving nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.

  9. An evaluation of three-dimensional photogrammetric and morphometric techniques for estimating volume and mass in Weddell seals Leptonychotes weddellii

    PubMed Central

    Ruscher-Hill, Brandi; Kirkham, Amy L.; Burns, Jennifer M.

    2018-01-01

    Body mass dynamics of animals can indicate critical associations between extrinsic factors and population vital rates. Photogrammetry can be used to estimate mass of individuals in species whose life histories make it logistically difficult to obtain direct body mass measurements. Such studies typically use equations to relate volume estimates from photogrammetry to mass; however, most fail to identify the sources of error between the estimated and actual mass. Our objective was to identify the sources of error that prevent photogrammetric mass estimation from directly predicting actual mass, and develop a methodology to correct this issue. To do this, we obtained mass, body measurements, and scaled photos for 56 sedated Weddell seals (Leptonychotes weddellii). After creating a three-dimensional silhouette in the image processing program PhotoModeler Pro, we used horizontal scale bars to define the ground plane, then removed the below-ground portion of the animal’s estimated silhouette. We then re-calculated body volume and applied an expected density to estimate animal mass. We compared the body mass estimates derived from this silhouette slice method with estimates derived from two other published methodologies: body mass calculated using photogrammetry coupled with a species-specific correction factor, and estimates using elliptical cones and measured tissue densities. The estimated mass values (mean ± standard deviation 345±71 kg for correction equation, 346±75 kg for silhouette slice, 343±76 kg for cones) were not statistically distinguishable from each other or from actual mass (346±73 kg) (ANOVA with Tukey HSD post-hoc, p>0.05 for all pairwise comparisons). We conclude that volume overestimates from photogrammetry are likely due to the inability of photo modeling software to properly render the ventral surface of the animal where it contacts the ground. Due to logistical differences between the “correction equation”, “silhouette slicing”, and “cones” approaches, researchers may find one technique more useful for certain study programs. In combination or exclusively, these three-dimensional mass estimation techniques have great utility in field studies with repeated measures sampling designs or where logistic constraints preclude weighing animals. PMID:29320573

  10. The Educational Consequences of Teen Childbearing

    PubMed Central

    Kane, Jennifer B.; Morgan, S. Philip; Harris, Kathleen Mullan; Guilkey, David K.

    2013-01-01

    A huge literature shows that teen mothers face a variety of detriments across the life course, including truncated educational attainment. To what extent is this association causal? The estimated effects of teen motherhood on schooling vary widely, ranging from no discernible difference to 2.6 fewer years among teen mothers. The magnitude of educational consequences is therefore uncertain, despite voluminous policy and prevention efforts that rest on the assumption of a negative and presumably causal effect. This study adjudicates between two potential sources of inconsistency in the literature—methodological differences or cohort differences—by using a single, high-quality data source: namely, The National Longitudinal Study of Adolescent Health. We replicate analyses across four different statistical strategies: ordinary least squares regression; propensity score matching; and parametric and semiparametric maximum likelihood estimation. Results demonstrate educational consequences of teen childbearing, with estimated effects between 0.7 and 1.9 fewer years of schooling among teen mothers. We select our preferred estimate (0.7), derived from semiparametric maximum likelihood estimation, on the basis of weighing the strengths and limitations of each approach. Based on the range of estimated effects observed in our study, we speculate that variable statistical methods are the likely source of inconsistency in the past. We conclude by discussing implications for future research and policy, and recommend that future studies employ a similar multimethod approach to evaluate findings. PMID:24078155

  11. Source Data Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven; Ring, Robert

    2016-01-01

    Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system in which it is used. In addition, some qualification of applicability for the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for suggesting epistemic component uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines, while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper will provide one example of assigning environmental-factor uncertainty when translating between operating environments for microelectronic part-type components. The heuristic guidelines will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce model uncertainty.
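
    As a simplified illustration of translating failure estimates between operating environments, one can scale a generic failure rate by the ratio of MIL-HDBK-217F-style environmental factors (pi_E). The sketch below is an assumed reading of that step, not the paper's exact procedure, and the pi_E values are placeholders rather than handbook values.

```python
# Minimal sketch (assumed approach): re-express a failure rate in a different
# operating environment using the ratio of environmental factors.
PI_E = {
    "ground_benign": 0.5,                    # hypothetical placeholder
    "ground_mobile": 4.0,                    # hypothetical placeholder
    "airborne_uninhabited_fighter": 8.0,     # hypothetical placeholder
}

def translate_failure_rate(lambda_source: float, env_source: str, env_target: str) -> float:
    """Scale a generic failure rate by the ratio of environmental factors."""
    return lambda_source * PI_E[env_target] / PI_E[env_source]

# Example: a component characterized in a benign ground environment, re-expressed
# for a harsher airborne environment (all numbers illustrative).
lam_gb = 2.0e-6   # failures per hour
lam_auf = translate_failure_rate(lam_gb, "ground_benign", "airborne_uninhabited_fighter")
print(f"{lam_auf:.2e} failures/hour")
```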

  12. Latency in Distributed Acquisition and Rendering for Telepresence Systems.

    PubMed

    Ohl, Stephan; Willert, Malte; Staadt, Oliver

    2015-12-01

    Telepresence systems use 3D techniques to create more natural, human-centered communication over long distances. This work concentrates on the analysis of latency in telepresence systems where acquisition and rendering are distributed. Keeping latency low is important to immerse users in the virtual environment. To better understand latency problems and to identify the sources of such latency, we focus on the decomposition of system latency into sub-latencies. We contribute a model of latency and show how it can be used to estimate latencies in a complex telepresence dataflow network. To compare the estimates with real latencies in our prototype, we modify two common latency measurement methods. The presented methodology enables the developer to optimize the design, find implementation issues, and gain deeper knowledge about specific sources of latency.

  13. Drought risk assessment under climate change is sensitive to methodological choices for the estimation of evaporative demand.

    PubMed

    Dewes, Candida F; Rangwala, Imtiaz; Barsugli, Joseph J; Hobbins, Michael T; Kumar, Sanjiv

    2017-01-01

    Several studies have projected increases in drought severity, extent and duration in many parts of the world under climate change. We examine sources of uncertainty arising from the methodological choices for the assessment of future drought risk in the continental US (CONUS). One such uncertainty is in the climate models' expression of evaporative demand (E0), which is not a direct climate model output but has been traditionally estimated using several different formulations. Here we analyze daily output from two CMIP5 GCMs to evaluate how differences in E0 formulation, treatment of meteorological driving data, choice of GCM, and standardization of time series influence the estimation of E0. These methodological choices yield different assessments of spatio-temporal variability in E0 and different trends in 21st century drought risk. First, we estimate E0 using three widely used E0 formulations: Penman-Monteith; Hargreaves-Samani; and Priestley-Taylor. Our analysis, which primarily focuses on the May-September warm-season period, shows that E0 climatology and its spatial pattern differ substantially between these three formulations. Overall, we find higher magnitudes of E0 and its interannual variability using Penman-Monteith, in particular for regions like the Great Plains and southwestern US where E0 is strongly influenced by variations in wind and relative humidity. When examining projected changes in E0 during the 21st century, there are also large differences among the three formulations, particularly the Penman-Monteith relative to the other two formulations. The 21st century E0 trends, particularly in percent change and standardized anomalies of E0, are found to be sensitive to the long-term mean value and the amplitude of interannual variability, i.e. if the magnitude of E0 and its interannual variability are relatively low for a particular E0 formulation, then the normalized or standardized 21st century trend based on that formulation is amplified relative to other formulations. This is the case for the use of Hargreaves-Samani and Priestley-Taylor, where future E0 trends are comparatively much larger than for Penman-Monteith. When comparing Penman-Monteith E0 responses between different choices of input variables related to wind speed, surface roughness, and net radiation, we found differences in E0 trends, although these choices had a much smaller influence on E0 trends than did the E0 formulation choices. These methodological choices and specific climate model selection, also have a large influence on the estimation of trends in standardized drought indices used for drought assessment operationally. We find that standardization tends to amplify divergences between the E0 trends calculated using different E0 formulations, because standardization is sensitive to both the climatology and amplitude of interannual variability of E0. For different methodological choices and GCM output considered in estimating E0, we examine potential sources of uncertainty in 21st century trends in the Standardized Precipitation Evapotranspiration Index (SPEI) and Evaporative Demand Drought Index (EDDI) over selected regions of the CONUS to demonstrate the practical implications of these methodological choices for the quantification of drought risk under climate change.
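
    For concreteness, the sketch below implements one of the three evaporative-demand formulations named above, the Hargreaves-Samani equation, which requires only air temperature and extraterrestrial radiation. The coefficient and units follow the commonly published form of the equation; the input values are illustrative, not data from the study.

```python
# Minimal sketch of the Hargreaves-Samani evaporative demand (E0) formulation.
import math

def hargreaves_samani_e0(t_max_c: float, t_min_c: float, ra_mm_per_day: float) -> float:
    """Reference evaporative demand in mm/day.

    t_max_c, t_min_c : daily maximum/minimum air temperature (deg C)
    ra_mm_per_day    : extraterrestrial radiation expressed as equivalent evaporation
    """
    t_mean = 0.5 * (t_max_c + t_min_c)
    return 0.0023 * ra_mm_per_day * (t_mean + 17.8) * math.sqrt(max(t_max_c - t_min_c, 0.0))

# Example: a warm Great Plains summer day (hypothetical values), roughly 6 mm/day.
print(round(hargreaves_samani_e0(t_max_c=33.0, t_min_c=18.0, ra_mm_per_day=16.5), 2))
```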

  14. A method for calculating a land-use change carbon footprint (LUC-CFP) for agricultural commodities - applications to Brazilian beef and soy, Indonesian palm oil.

    PubMed

    Persson, U Martin; Henders, Sabine; Cederberg, Christel

    2014-11-01

    The world's agricultural system has come under increasing scrutiny recently as an important driver of global climate change, creating a demand for indicators that estimate the climatic impacts of agricultural commodities. Such carbon footprints, however, have in most cases excluded emissions from land-use change, and the proposed methodologies for including this significant emissions source suffer from different shortcomings. Here, we propose a new methodology for calculating land-use change carbon footprints for agricultural commodities and illustrate this methodology by applying it to three of the most prominent agricultural commodities driving tropical deforestation: Brazilian beef and soybeans, and Indonesian palm oil. We estimate land-use change carbon footprints in 2010 to be 66 tCO2/t meat (carcass weight) for Brazilian beef, 0.89 tCO2/t for Brazilian soybeans, and 7.5 tCO2/t for Indonesian palm oil, using a 10-year amortization period. The main advantage of the proposed methodology is its flexibility: it can be applied in a tiered approach, using detailed data where available while still allowing for estimation of footprints for a broad set of countries and agricultural commodities; it can be applied at different scales, estimating both national and subnational footprints; and it can be adapted to account for both direct (proximate) and indirect drivers of land-use change. It is argued that with an increasing commercialization and globalization of the drivers of land-use change, the proposed carbon footprint methodology could help leverage the power needed to alter environmentally destructive land-use practices within the global agricultural system by providing a tool for assessing the environmental impacts of production, thereby informing consumers about the impacts of consumption and incentivizing producers to become more environmentally responsible. © 2014 John Wiley & Sons Ltd.
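
    The basic arithmetic of an amortized land-use change footprint can be sketched as follows: carbon losses from land converted to a crop are spread over a fixed amortization period and divided by annual production. This is a simplified illustration under assumed inputs, not the paper's full tiered method or its attribution rules.

```python
# Minimal sketch (simplified): amortized land-use change carbon footprint per tonne of product.
def luc_carbon_footprint(area_converted_ha: float,
                         carbon_loss_t_co2_per_ha: float,
                         annual_production_t: float,
                         amortization_years: int = 10) -> float:
    """Footprint in t CO2 per tonne of product."""
    annualized_emissions = area_converted_ha * carbon_loss_t_co2_per_ha / amortization_years
    return annualized_emissions / annual_production_t

# Hypothetical example: cropland expansion attributed to one commodity.
print(round(luc_carbon_footprint(area_converted_ha=50_000,
                                 carbon_loss_t_co2_per_ha=400.0,
                                 annual_production_t=2_500_000), 2))
```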

  15. Analysis of ISO NE Balancing Requirements: Uncertainty-based Secure Ranges for ISO New England Dynamic Interchange Adjustments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etingov, Pavel V.; Makarov, Yuri V.; Wu, Di

    The document describes a detailed uncertainty quantification (UQ) methodology developed by PNNL to estimate secure ranges of potential dynamic intra-hour interchange adjustments in the ISO-NE system and provides a description of the dynamic interchange adjustment (DINA) tool developed under the same contract. The overall system ramping-up and ramping-down capability, spinning reserve requirements, interchange schedules, load variations, and uncertainties from various sources relevant to the ISO-NE system are incorporated into the methodology and the tool. The DINA tool has been tested by PNNL and ISO-NE staff engineers using ISO-NE data.

  16. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    NASA Astrophysics Data System (ADS)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
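
    One common way to build a cell-level uncertainty surface, sketched below, is to combine independent error contributions (source measurement error, interpolation error, and datum-transformation error) in quadrature. This illustrates the principle under assumed error magnitudes; it is not necessarily the exact NCEI procedure.

```python
# Minimal sketch: per-cell DEM uncertainty from independent error sources combined in quadrature.
import numpy as np

def dem_uncertainty(measurement_sigma: np.ndarray,
                    interpolation_sigma: np.ndarray,
                    datum_sigma: np.ndarray) -> np.ndarray:
    """Per-cell standard uncertainty, in the same units as the DEM (e.g., metres)."""
    return np.sqrt(measurement_sigma**2 + interpolation_sigma**2 + datum_sigma**2)

# Illustrative grid where interpolation error grows with distance from soundings.
meas = np.full((3, 3), 0.3)                 # e.g., lidar/multibeam vertical accuracy
interp = np.array([[0.0, 0.5, 1.0]] * 3)    # larger away from source measurements
datum = np.full((3, 3), 0.1)                # vertical datum transformation error
print(dem_uncertainty(meas, interp, datum).round(2))
```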

  17. Methodological challenges when evaluating potential off-label prescribing of drugs using electronic health care databases: A case study of dabigatran etexilate in Europe.

    PubMed

    Cainzos-Achirica, Miguel; Varas-Lorenzo, Cristina; Pottegård, Anton; Asmar, Joelle; Plana, Estel; Rasmussen, Lotte; Bizouard, Geoffray; Forns, Joan; Hellfritzsch, Maja; Zint, Kristina; Perez-Gutthann, Susana; Pladevall-Vila, Manel

    2018-03-23

    To report and discuss estimated prevalence of potential off-label use and associated methodological challenges using a case study of dabigatran. Observational, cross-sectional study using 3 databases with different types of clinical information available: Cegedim Strategic Data Longitudinal Patient Database (CSD-LPD), France (cardiologist panel, n = 1706; general practitioner panel, n = 2813; primary care data); National Health Databases, Denmark (n = 28 619; hospital episodes and dispensed ambulatory medications); and Clinical Practice Research Datalink (CPRD), UK (linkable to Hospital Episode Statistics [HES], n = 2150; not linkable, n = 1285; primary care data plus hospital data for HES-linkable patients). August 2011 to August 2015. Two definitions were used to estimate potential off-label use: a broad definition of on-label prescribing using codes for disease indication (eg, atrial fibrillation [AF]), and a restrictive definition excluding patients with conditions for which dabigatran is not indicated (eg, valvular AF). Prevalence estimates under the broad definition ranged from 5.7% (CPRD-HES) to 34.0% (CSD-LPD) and, under the restrictive definition, from 17.4% (CPRD-HES) to 44.1% (CSD-LPD). For the majority of potential off-label users, no diagnosis potentially related to anticoagulant use was identified. Key methodological challenges were the limited availability of detailed clinical information, likely leading to overestimation of off-label use, and differences in the information available, which may explain the disparate prevalence estimates across data sources. Estimates of potential off-label use should be interpreted cautiously due to limitations in available information. In this context, CPRD HES-linkable estimates are likely to be the most accurate. Copyright © 2018 John Wiley & Sons, Ltd.

  18. Comparison of flavonoid intake assessment methods.

    PubMed

    Ivey, Kerry L; Croft, Kevin; Prince, Richard L; Hodgson, Jonathan M

    2016-09-14

    Flavonoids are a diverse group of polyphenolic compounds found in high concentrations in many plant foods and beverages. High flavonoid intake has been associated with reduced risk of chronic disease. To date, population based studies have used the United States Department of Agriculture (USDA) food content database to determine habitual flavonoid intake. More recently, a new flavonoid food content database, Phenol-Explorer (PE), has been developed. However, the level of agreement between the two databases is yet to be explored. To compare the methods used to create each database, and to explore the level of agreement between the flavonoid intake estimates derived from USDA and PE data. The study population included 1063 randomly selected women aged over 75 years. Two separate intake estimates were determined using food composition data from the USDA and the PE databases. There were many similarities in methods used to create each database; however, there are several methodological differences that manifest themselves in differences in flavonoid intake estimates between the 2 databases. Despite differences in net estimates, there was a strong level of agreement between total-flavonoid, flavanol, flavanone and anthocyanidin intake estimates derived from each database. Intake estimates for flavanol monomers showed greater agreement than flavanol polymers. The level of agreement between the two databases was the weakest for the flavonol and flavone intake estimates. In this population, the application of USDA and PE source data yielded highly correlated intake estimates for total-flavonoids, flavanols, flavanones and anthocyanidins. For these sub-classes, the USDA and PE databases may be used interchangeably in epidemiological investigations. There was poorer correlation between intake estimates for flavonols and flavones due to differences in USDA and PE methodologies. Individual flavonoid compound groups that comprise flavonoid sub-classes had varying levels of agreement. As such, when determining the appropriate database to calculate flavonoid intake variables, it is important to consider methodologies underpinning database creation and which foods are important contributors to dietary intake in the population of interest.

  19. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    PubMed

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework, applied to a CO2 industrial point source located in Biganos, France. Aircraft-based CO2 emission estimates were obtained by applying the mass balance method, while modelled CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses; and (iii) local in situ observations. Governmental inventory data were used as the reference for all applications. The strengths and weaknesses of the different approaches, and how they affect emission estimation uncertainty, were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, notably when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help highlight methodological best practices that can be used as guidelines for future experiments.
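
    In idealized form, the mass balance method integrates the concentration enhancement times the wind component normal to a downwind crosswind "screen" over the plume cross-section. The sketch below illustrates that integral on a discrete grid with made-up numbers; it is not the flight campaign's actual processing chain.

```python
# Minimal sketch (idealized): mass-balance emission estimate from a downwind screen
# of CO2 enhancements sampled on a crosswind x vertical grid.
import numpy as np

def mass_balance_emission(enhancement_kg_m3: np.ndarray,
                          wind_normal_m_s: np.ndarray,
                          dy_m: float, dz_m: float) -> float:
    """Point-source emission rate in kg/s."""
    return float(np.sum(enhancement_kg_m3 * wind_normal_m_s) * dy_m * dz_m)

# Illustrative 20 x 10 cell screen with 100 m x 50 m cells.
enh = np.zeros((10, 20))
enh[3:7, 8:12] = 2.0e-5          # plume core: ~2e-5 kg CO2 per m^3 above background
wind = np.full((10, 20), 5.0)    # 5 m/s wind normal to the screen
q = mass_balance_emission(enh, wind, dy_m=100.0, dz_m=50.0)
print(f"{q:.1f} kg/s  (~{q * 86400 / 1000:.0f} t/day)")
```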

  20. Prevention of Blast-Related Injuries

    DTIC Science & Technology

    2013-07-01


  1. Federal Research and Development Contract Trends and the Supporting Industrial Base, 2000-2014

    DTIC Science & Technology

    2016-04-30

    Homeland Security, and government-wide services contracting trends; sourcing policy and cost estimation methodologies; and recent U.S. Army modernization ...been fears that the sharp downturn in federal contract obligations would disproportionately impact the R&D contracting portfolios within individual... contracting portfolios, and the industrial base that supports those efforts, within each R&D contracting agency. The main finding of this

  2. The inverse problem in electroencephalography using the bidomain model of electrical activity.

    PubMed

    Lopez Rincon, Alejandro; Shimoda, Shingo

    2016-12-01

    Acquiring information about the distribution of electrical sources in the brain from electroencephalography (EEG) data remains a significant challenge. An accurate solution would provide an understanding of the inner mechanisms of the electrical activity in the brain and information about damaged tissue. In this paper, we present a methodology for reconstructing brain electrical activity from EEG data by using the bidomain formulation. The bidomain model considers continuous active neural tissue coupled with a nonlinear cell model. Using this technique, we aim to find the brain sources that give rise to the scalp potential recorded by EEG measurements taking into account a non-static reconstruction. We simulate electrical sources in the brain volume and compare the reconstruction to the minimum norm estimates (MNEs) and low resolution electrical tomography (LORETA) results. Then, with the EEG dataset from the EEG Motor Movement/Imagery Database of the Physiobank, we identify the reaction to visual stimuli by calculating the time between stimulus presentation and the spike in electrical activity. Finally, we compare the activation in the brain with the registered activation using the LinkRbrain platform. Our methodology shows an improved reconstruction of the electrical activity and source localization in comparison with MNE and LORETA. For the Motor Movement/Imagery Database, the reconstruction is consistent with the expected position and time delay generated by the stimuli. Thus, this methodology is a suitable option for continuously reconstructing brain potentials. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
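
    For context, the minimum norm estimate (MNE) used above as a comparison baseline is a Tikhonov-regularized linear inverse of the lead-field matrix. The sketch below illustrates that baseline on a random lead field and synthetic data; it does not implement the bidomain reconstruction itself, and the dimensions and regularization value are assumptions.

```python
# Minimal sketch of the MNE baseline: s_hat = L^T (L L^T + lam*I)^-1 y.
import numpy as np

def minimum_norm_estimate(leadfield: np.ndarray, eeg: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """Regularized minimum norm source estimate for one time sample."""
    n_channels = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_channels)
    return leadfield.T @ np.linalg.solve(gram, eeg)

# Illustrative example: 32 electrodes, 500 candidate sources, one active source.
rng = np.random.default_rng(0)
L = rng.standard_normal((32, 500))
true_sources = np.zeros(500)
true_sources[123] = 1.0
y = L @ true_sources + 0.01 * rng.standard_normal(32)
s_hat = minimum_norm_estimate(L, y)
print(int(np.argmax(np.abs(s_hat))))  # strongest estimated source (some spatial spread is expected)
```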

  3. The potential of using remote sensing data to estimate air-sea CO2 exchange in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Parard, Gaëlle; Rutgersson, Anna; Parampil, Sindu Raj; Alexandre Charantonis, Anastase

    2017-12-01

    In this article, we present the first climatological map of air-sea CO2 flux over the Baltic Sea based on remote sensing data: estimates of pCO2 derived from satellite imaging using self-organizing map classifications along with class-specific linear regressions (SOMLO methodology) and remotely sensed wind estimates. The estimates have a spatial resolution of 4 km both in latitude and longitude and a monthly temporal resolution from 1998 to 2011. The CO2 fluxes are estimated using two types of wind products, i.e. reanalysis winds and satellite wind products, the higher-resolution wind product generally leading to higher-amplitude flux estimations. Furthermore, the CO2 fluxes were also estimated using two methods: the method of Wanninkhof et al. (2013) and the method of Rutgersson and Smedman (2009). The seasonal variation in fluxes reflects the seasonal variation in pCO2 unvaryingly over the whole Baltic Sea, with high winter CO2 emissions and high pCO2 uptakes. All basins act as a source for the atmosphere, with a higher degree of emission in the southern regions (mean source of 1.6 mmol m-2 d-1 for the South Basin and 0.9 for the Central Basin) than in the northern regions (mean source of 0.1 mmol m-2 d-1) and the coastal areas act as a larger sink (annual uptake of -4.2 mmol m-2 d-1) than does the open sea (-4 mmol m-2 d-1). In its entirety, the Baltic Sea acts as a small source of 1.2 mmol m-2 d-1 on average and this annual uptake has increased from 1998 to 2012.
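
    The flux computation rests on the standard bulk formula, F = k * K0 * (pCO2,water - pCO2,air), where k is the gas transfer velocity from a wind-speed parameterization and K0 is CO2 solubility. The sketch below uses a commonly published quadratic wind-speed parameterization with illustrative inputs; it is not the exact flux schemes compared in the article.

```python
# Minimal sketch of the bulk air-sea CO2 flux formula with a quadratic gas-transfer
# parameterization. Coefficient and inputs are illustrative.
def gas_transfer_velocity_cm_hr(u10_m_s: float, schmidt: float) -> float:
    """Gas transfer velocity k in cm/hr, scaled to the local Schmidt number."""
    return 0.251 * u10_m_s**2 * (schmidt / 660.0) ** -0.5

def co2_flux_mmol_m2_d(u10_m_s: float, schmidt: float,
                       k0_mol_L_atm: float, delta_pco2_uatm: float) -> float:
    """Air-sea CO2 flux; positive values mean outgassing to the atmosphere."""
    k_m_d = gas_transfer_velocity_cm_hr(u10_m_s, schmidt) * 24.0 / 100.0   # cm/hr -> m/day
    k0_mol_m3_uatm = k0_mol_L_atm * 1.0e3 / 1.0e6                          # mol/(L atm) -> mol/(m^3 uatm)
    return k_m_d * k0_mol_m3_uatm * delta_pco2_uatm * 1.0e3                # mol -> mmol

# Example: a windy winter day with supersaturated surface water (hypothetical values).
print(round(co2_flux_mmol_m2_d(u10_m_s=7.0, schmidt=900.0,
                               k0_mol_L_atm=0.045, delta_pco2_uatm=60.0), 1))
```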

  4. Prospective CO2 saline resource estimation methodology: Refinement of existing US-DOE-NETL methods based on data availability

    DOE PAGES

    Goodman, Angela; Sanguinito, Sean; Levine, Jonathan S.

    2016-09-28

    Carbon storage resource estimation in subsurface saline formations plays an important role in establishing the scale of carbon capture and storage activities for governmental policy and commercial project decision-making. Prospective CO2 resource estimation of large regions or subregions, such as a basin, occurs at the initial screening stages of a project using only limited publicly available geophysical data, i.e. prior to project-specific site selection data generation. As the scale of investigation is narrowed and selected areas and formations are identified, prospective CO2 resource estimation can be refined and uncertainty narrowed when site-specific geophysical data are available. Here, we refine the United States Department of Energy – National Energy Technology Laboratory (US-DOE-NETL) methodology as the scale of investigation is narrowed from very large regional assessments down to selected areas and formations that may be developed for commercial storage. In addition, we present a new notation that explicitly identifies differences between data availability and data sources used for geologic parameters and efficiency factors as the scale of investigation is narrowed. This CO2 resource estimation method is available for screening formations in a tool called CO2-SCREEN.
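
    The widely published volumetric form of the DOE-NETL prospective storage estimate multiplies formation area, thickness, porosity, CO2 density, and a storage efficiency factor. The sketch below illustrates that arithmetic with hypothetical parameter values; it is a generic illustration, not the CO2-SCREEN tool.

```python
# Minimal sketch of the volumetric prospective storage estimate G = A * h * phi * rho * E.
def prospective_co2_storage_mt(area_km2: float, thickness_m: float, porosity: float,
                               co2_density_kg_m3: float, efficiency: float) -> float:
    """Prospective storage resource in megatonnes of CO2."""
    pore_volume_m3 = area_km2 * 1.0e6 * thickness_m * porosity
    return pore_volume_m3 * co2_density_kg_m3 * efficiency / 1.0e9  # kg -> Mt

# Hypothetical example: a 2,000 km^2 saline formation, 50 m thick, 15% porosity, 2% efficiency.
print(round(prospective_co2_storage_mt(2000.0, 50.0, 0.15, 650.0, 0.02), 1))
```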

  5. Parameter estimation in a structural acoustic system with fully nonlinear coupling conditions

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Smith, Ralph C.

    1994-01-01

    A methodology for estimating physical parameters in a class of structural acoustic systems is presented. The general model under consideration consists of an interior cavity which is separated from an exterior noise source by an enclosing elastic structure. Piezoceramic patches are bonded to or embedded in the structure; these can be used both as actuators and sensors in applications ranging from the control of interior noise levels to the determination of structural flaws through nondestructive evaluation techniques. The presence and excitation of patches, however, changes the geometry and material properties of the structure and introduces unknown patch parameters, thus necessitating the development of parameter estimation techniques which are applicable in this coupled setting. In developing a framework for approximation, parameter estimation and implementation, strong consideration is given to the fact that the input operator is unbounded due to the discrete nature of the patches. Moreover, the model is weakly nonlinear as a result of the coupling mechanism between the structural vibrations and the interior acoustic dynamics. Within this context, an illustrative model is given, well-posedness and approximation results are discussed, and an applicable parameter estimation methodology is presented. The scheme is then illustrated through several numerical examples with simulations modeling a variety of commonly used structural acoustic techniques for system excitation and data collection.

  6. Prospective CO2 saline resource estimation methodology: Refinement of existing US-DOE-NETL methods based on data availability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodman, Angela; Sanguinito, Sean; Levine, Jonathan S.

    Carbon storage resource estimation in subsurface saline formations plays an important role in establishing the scale of carbon capture and storage activities for governmental policy and commercial project decision-making. Prospective CO2 resource estimation of large regions or subregions, such as a basin, occurs at the initial screening stages of a project using only limited publicly available geophysical data, i.e. prior to project-specific site selection data generation. As the scale of investigation is narrowed and selected areas and formations are identified, prospective CO2 resource estimation can be refined and uncertainty narrowed when site-specific geophysical data are available. Here, we refine the United States Department of Energy – National Energy Technology Laboratory (US-DOE-NETL) methodology as the scale of investigation is narrowed from very large regional assessments down to selected areas and formations that may be developed for commercial storage. In addition, we present a new notation that explicitly identifies differences between data availability and data sources used for geologic parameters and efficiency factors as the scale of investigation is narrowed. This CO2 resource estimation method is available for screening formations in a tool called CO2-SCREEN.

  7. Effects of extreme weather on human health: methodology review

    NASA Astrophysics Data System (ADS)

    Wu, R.; Liss, A.; Naumova, E. N.

    2012-12-01

    This work critically evaluates current methodology applied to estimate the effects of extreme weather events (EWE) on human health. Specifically, we focus on uncertainties associated with: a) the main statistical approaches for estimating the effects of EWE, b) definitions of health outcomes and EWE, and c) possible sources of errors and biases in currently available data sets. EWE, which include heat waves, cold spells, ice storms, floods, droughts, and tornadoes, are known for their massive effects on ecosystems, economies, and infrastructure. In particular, human lives and health are frequently impacted by EWE; however, estimating such effects is complex and lacks a systematic methodology. An accurate and reliable estimate of health impacts is critical for developing preparedness and effective prevention strategies, better allocating scarce resources for mitigating negative impacts of EWE, and detecting vulnerable populations and regions in a timely manner. We reviewed 82 manuscripts published between 1993 and 2011, selected from MedPub and Medline databases using predetermined sets of keywords, such as extreme weather, mortality, morbidity and hospitalization. We classified publications based on their geographical locations, types of included health outcomes, methods for detecting EWE, and the statistical methodology employed to determine the presence and magnitude of EWE-associated health outcomes. We determined that 57% of the reviewed manuscripts applied time-series or association analyses and were conducted in temperate regions of the US, Canada, Korea, Japan, and Europe. About 60% of reviewed studies focused primarily on mortality data, 30% on morbidity outcomes, and 9% studied both mortality and morbidity with respect to direct effects of extreme heat waves and cold spells. A wide range of EWE definitions were employed in those manuscripts, which limited the ability to compare the results to a certain degree. We observed at least three main sources of uncertainty, which may lead to estimation bias: potential misrepresentation and misspecification of the biological causal mechanism in statistical models, completeness and quality of reporting EWE-specific health outcomes, and incomplete accounting for spatial uncertainties in historical environmental records. Finally, we show that some of those systematic biases can be reduced by performing proper adjustments, while others still need further studies and efforts. Reducing bias provides a more accurate representation of disease burden. Better understanding of EWE and their impacts on human health, combined with other preventive strategies, can provide better protection from EWE for vulnerable populations in the future.

  8. Comparison of cost-benefit analysis of nitrogen dioxide control in Tokyo, Japan with those in other countries and cities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voorhees, A.S.; Araki, S.; Sakai, R.

    1999-07-01

    To evaluate the economic effectiveness of past NO2 controls in Tokyo, the authors compared the results of their cost-benefit analysis (CBA) of these controls with other investigations. The authors carried out a CBA of NO2 controls in Tokyo using Freeman's benefit methodology and EPA and Dixon et al. cost methodologies, and they compared their assumptions and results to work done by other researchers for other countries and cities, which were collected from the literature. The authors assumed 2 to 3 days duration per incidence of respiratory illness. Kenkel suggested 4.1 days and Dixon et al. assumed 2 weeks. They estimated avoided incidence per person in adults as 2.6 (upper limit UL 2.7; lower limit LL 2.4) and in children as 0.33 (UL 0.35; LL 0.30). Ostro estimated 0.20 for respiratory symptoms in adults from NO2 exposure, 5.2 for respiratory symptoms and 0.078 for asthma attacks in adults from particulates. The authors estimated work loss days (WLDs) per person for workers as 4.7 (UL 5.0; LL 4.4) and for working mothers as 0.61 (UL 0.66; LL 0.56). Shin et al.'s per-person estimates included 4.5 WLDs in Bangkok, 3.7 in Beijing, 2.3 in Shanghai, and 1.1 in Kuala Lumpur. They estimated the cost effectiveness of NO2 control in Tokyo to be $1,400/ton (UL $1,500; LL $1,300) for motor vehicles, $21,000/ton (UL $23,000; LL $19,000) for all NOx sources, and $91,000/ton (UL $98,000; LL $84,000) for stationary point sources. This compares to $240 to $1,500/ton in West Virginia for all NOx sources, $2,700/ton in northern Virginia from motor vehicles, $5,600/ton from motor vehicles in Virginia, and $17,000 to $26,000/ton from all NOx sources in the Chesapeake River Watershed. Herein, the benefits in Tokyo exceeded the costs by a ratio of approximately 6 to 1 (UL 7:1; LL 5:1).

  9. Constrained Maximum Likelihood Estimation for Model Calibration Using Summary-level Information from External Big Data Sources

    PubMed Central

    Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J.

    2016-01-01

    Information from various public and private data sources of extremely large sample sizes are now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an “internal” study while utilizing summary-level information, such as information on parameters for reduced models, from an “external” big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature. PMID:27570323

  10. Advanced LIGO low-latency searches

    NASA Astrophysics Data System (ADS)

    Kanner, Jonah; LIGO Scientific Collaboration, Virgo Collaboration

    2016-06-01

    Advanced LIGO recently made the first detection of gravitational waves from merging binary black holes. The signal was first identified by a low-latency analysis, which identifies gravitational-wave transients within a few minutes of data collection. More generally, Advanced LIGO transients are sought with a suite of automated tools, which collectively identify events, evaluate statistical significance, estimate source position, and attempt to characterize source properties. This low-latency effort is enabling a broad multi-messenger approach to the science of compact object mergers and other transients. This talk will give an overview of the low-latency methodology and recent results.

  11. The cost of adherence mismeasurement in serious mental illness: a claims-based analysis.

    PubMed

    Shafrin, Jason; Forma, Felicia; Scherer, Ethan; Hatch, Ainslie; Vytlacil, Edward; Lakdawalla, Darius

    2017-05-01

    To quantify how adherence mismeasurement affects the estimated impact of adherence on inpatient costs among patients with serious mental illness (SMI). Proportion of days covered (PDC) is a common claims-based measure of medication adherence. Because PDC does not measure medication ingestion, however, it may inaccurately measure adherence. We derived a formula to correct the bias that occurs in adherence-utilization studies resulting from errors in claims-based measures of adherence. We conducted a literature review to identify the correlation between gold-standard and claims-based adherence measures. We derived a bias-correction methodology to address claims-based medication adherence measurement error. We then applied this methodology to a case study of patients with SMI who initiated atypical antipsychotics in 2 large claims databases. Our literature review identified 6 studies of interest. The 4 most relevant ones measured correlations between 0.38 and 0.91. Our preferred estimate implies that the effect of adherence on inpatient spending estimated from claims data would understate the true effect by a factor of 5.3, if there were no other sources of bias. Although our procedure corrects for measurement error, such error also may amplify or mitigate other potential biases. For instance, if adherent patients are healthier than nonadherent ones, measurement error makes the resulting bias worse. On the other hand, if adherent patients are sicker, measurement error mitigates the other bias. Measurement error due to claims-based adherence measures is worth addressing, alongside other more widely emphasized sources of bias in inference.
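
    The logic of the bias correction can be illustrated with the textbook errors-in-variables result: under classical measurement error, a coefficient estimated with a noisy adherence proxy is attenuated by roughly the squared correlation between the proxy and true adherence, so a simple correction divides by that reliability. The sketch below shows this generic correction with made-up numbers; it is not necessarily the paper's exact derivation.

```python
# Minimal sketch of classical attenuation-bias correction for a noisy exposure proxy.
def correct_attenuation(beta_observed: float, proxy_truth_correlation: float) -> float:
    """Rescale an attenuated coefficient: beta_true ~= beta_observed / rho^2."""
    return beta_observed / proxy_truth_correlation**2

# Hypothetical example: a claims-based estimate of -$500 in inpatient spending per unit
# of adherence, with a proxy-vs-gold-standard correlation of 0.43, implies a corrected
# effect roughly 5.4 times larger in magnitude.
print(round(correct_attenuation(-500.0, 0.43), 1))
```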

  12. Quantitative identification of riverine nitrogen from point, direct runoff and base flow sources.

    PubMed

    Huang, Hong; Zhang, Baifa; Lu, Jun

    2014-01-01

    We present a methodological example for quantifying the contributions of riverine total nitrogen (TN) from point, direct runoff and base flow sources by combining a recursive digital filter technique and statistical methods. First, we separated daily riverine flow into direct runoff and base flow using a recursive digital filter technique; then, a statistical model was established using daily simultaneous data for TN load, direct runoff rate, base flow rate, and temperature; and finally, the TN loading from direct runoff and base flow sources could be inversely estimated. As a case study, this approach was adopted to identify the TN source contributions in Changle River, eastern China. Results showed that, during 2005-2009, the total annual TN input to the river was 1,700.4±250.2 ton, and the contributions of point, direct runoff and base flow sources were 17.8±2.8%, 45.0±3.6%, and 37.2±3.9%, respectively. The innovation of the approach is that the nitrogen from direct runoff and base flow sources could be separately quantified. The approach is simple but detailed enough to take the major factors into account, providing an effective and reliable method for riverine nitrogen loading estimation and source apportionment.
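
    The flow-separation step can be illustrated with a widely used one-parameter recursive digital filter of the Lyne-Hollick type, which splits daily streamflow into quickflow (direct runoff) and base flow. The paper uses a recursive digital filter of this general kind; the single-pass implementation and the example hydrograph below are illustrative assumptions, not its exact configuration.

```python
# Minimal sketch of a Lyne-Hollick-type recursive digital filter for baseflow separation.
def baseflow_separation(flow, alpha=0.925):
    """Return (direct_runoff, base_flow) lists for a daily flow series (m^3/s)."""
    quick = [0.0] * len(flow)
    for t in range(1, len(flow)):
        q_f = alpha * quick[t - 1] + 0.5 * (1.0 + alpha) * (flow[t] - flow[t - 1])
        quick[t] = min(max(q_f, 0.0), flow[t])   # keep quickflow within [0, total flow]
    base = [q - qf for q, qf in zip(flow, quick)]
    return quick, base

# Example: a small storm hydrograph (hypothetical daily flows).
flows = [2.0, 2.1, 6.0, 9.5, 5.0, 3.2, 2.6, 2.3, 2.2, 2.1]
direct, base = baseflow_separation(flows)
print([round(b, 2) for b in base])
```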

  13. The 26 December 2004 tsunami source estimated from satellite radar altimetry and seismic waves

    NASA Technical Reports Server (NTRS)

    Song, Tony Y.; Ji, Chen; Fu, L. -L.; Zlotnicki, Victor; Shum, C. K.; Yi, Yuchan; Hjorleifsdottir, Vala

    2005-01-01

    The 26 December 2004 Indian Ocean tsunami was the first earthquake tsunami of its magnitude to occur since the advent of both digital seismometry and satellite radar altimetry. Both have independently recorded the event from different physical aspects. The seismic data has then been used to estimate the earthquake fault parameters, and a three-dimensional ocean-general-circulation-model (OGCM) coupled with the fault information has been used to simulate the satellite-observed tsunami waves. Here we show that these two datasets consistently provide the tsunami source using independent methodologies of seismic waveform inversion and ocean modeling. Cross-examining the two independent results confirms that the slip function is the most important condition controlling the tsunami strength, while the geometry and the rupture velocity of the tectonic plane determine the spatial patterns of the tsunami.

  14. Source Data Applicability Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven D.; Ring, Robert W.

    2016-01-01

    Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system where it is used. In addition, some qualification of applicability for the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for assigning uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines, while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper will provide a case study example by translating Ground Benign (GB) and Ground Mobile (GM) to the Airborne Uninhabited Fighter (AUF) environment for three electronic components often found in space launch vehicle control systems. The classification method will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce uncertainty.

  15. RAiSE III: 3C radio AGN energetics and composition

    NASA Astrophysics Data System (ADS)

    Turner, Ross J.; Shabala, Stanislav S.; Krause, Martin G. H.

    2018-03-01

    Kinetic jet power estimates based exclusively on observed monochromatic radio luminosities are highly uncertain due to confounding variables and a lack of knowledge about some aspects of the physics of active galactic nuclei (AGNs). We propose a new methodology to calculate the jet powers of the largest, most powerful radio sources based on combinations of their size, lobe luminosity, and shape of their radio spectrum; this approach avoids the uncertainties encountered by previous relationships. The outputs of our model are calibrated using hydrodynamical simulations and tested against independent X-ray inverse-Compton measurements. The jet powers and lobe magnetic field strengths of radio sources are found to be recovered using solely the lobe luminosity and spectral curvature, enabling the intrinsic properties of unresolved high-redshift sources to be inferred. By contrast, the radio source ages cannot be estimated without knowledge of the lobe volumes. The monochromatic lobe luminosity alone is incapable of accurately estimating the jet power or source age without knowledge of the lobe magnetic field strength and size, respectively. We find that, on average, the lobes of the Third Cambridge Catalogue of Radio Sources (3C) have magnetic field strengths approximately a factor three lower than the equipartition value, inconsistent with equal energy in the particles and the fields at the 5σ level. The particle content of 3C radio lobes is discussed in the context of complementary observations; we do not find evidence favouring an energetically dominant proton population.

  16. Are cannabis prevalence estimates comparable across countries and regions? A cross-cultural validation using search engine query data.

    PubMed

    Steppan, Martin; Kraus, Ludwig; Piontek, Daniela; Siciliano, Valeria

    2013-01-01

    Prevalence estimation of cannabis use is usually based on self-report data. Although there is evidence on the reliability of this data source, its cross-cultural validity is still a major concern. External objective criteria are needed for this purpose. In this study, cannabis-related search engine query data are used as an external criterion. Data on cannabis use were taken from the 2007 European School Survey Project on Alcohol and Other Drugs (ESPAD). Provincial data came from three Italian nation-wide studies using the same methodology (2006-2008; ESPAD-Italia). Information on cannabis-related search engine query data was based on Google search volume indices (GSI). (1) Reliability analysis was conducted for GSI. (2) Latent measurement models of "true" cannabis prevalence were tested using perceived availability, web-based cannabis searches and self-reported prevalence as indicators. (3) Structure models were set up to test the influences of response tendencies and geographical position (latitude, longitude). In order to test the stability of the models, analyses were conducted at the country level (Europe, US) and at the provincial level in Italy. Cannabis-related GSI were found to be highly reliable and constant over time. The overall measurement model was highly significant in both data sets. At the country level, no significant effects of response bias indicators and geographical position on perceived availability, web-based cannabis searches and self-reported prevalence were found. At the provincial level, latitude had a significant positive effect on availability, indicating that perceived availability of cannabis in northern Italy was higher than expected from the other indicators. Although GSI showed weaker associations with cannabis use than perceived availability, the findings underline the external validity and usefulness of search engine query data as external criteria. The findings suggest an acceptable relative comparability of national (provincial) prevalence estimates of cannabis use that are based on a common survey methodology. Search engine query data are too weak an indicator to base prevalence estimates on alone, but in combination with other sources (wastewater analysis, sales of cigarette paper) they may provide satisfactory estimates. Copyright © 2012. Published by Elsevier B.V.

  17. Testing earthquake source inversion methodologies

    USGS Publications Warehouse

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  18. Bayesian inference on EMRI signals using low frequency approximations

    NASA Astrophysics Data System (ADS)

    Ali, Asad; Christensen, Nelson; Meyer, Renate; Röver, Christian

    2012-07-01

    Extreme mass ratio inspirals (EMRIs) are thought to be one of the most exciting gravitational wave sources to be detected with LISA. Due to their complicated nature and weak amplitudes the detection and parameter estimation of such sources is a challenging task. In this paper we present a statistical methodology based on Bayesian inference in which the estimation of parameters is carried out by advanced Markov chain Monte Carlo (MCMC) algorithms such as parallel tempering MCMC. We analysed high and medium mass EMRI systems that fall well inside the low frequency range of LISA. In the context of the Mock LISA Data Challenges, our investigation and results are also the first instance in which a fully Markovian algorithm is applied for EMRI searches. Results show that our algorithm worked well in recovering EMRI signals from different (simulated) LISA data sets having single and multiple EMRI sources and holds great promise for posterior computation under more realistic conditions. The search and estimation methods presented in this paper are general in their nature, and can be applied in any other scenario such as AdLIGO, AdVIRGO and Einstein Telescope with their respective response functions.

  19. HETEROGENEITY IN TREATMENT EFFECT AND COMPARATIVE EFFECTIVENESS RESEARCH.

    PubMed

    Luo, Zhehui

    2011-10-01

    The ultimate goal of comparative effectiveness research (CER) is to develop and disseminate evidence-based information about which interventions are most effective for which patients under what circumstances. To achieve this goal it is crucial that researchers in methodology development find appropriate methods for detecting the presence and sources of heterogeneity in treatment effect (HTE). Compared with the typically reported average treatment effect (ATE) in randomized controlled trials and non-experimental (i.e., observational) studies, identifying and reporting HTE better reflects the nature and purposes of CER. Methodologies of CER include meta-analysis, systematic review, design of experiments that encompasses HTE, and statistical correction of various types of estimation bias, which is the focus of this review.

  20. Drought risk assessment under climate change is sensitive to methodological choices for the estimation of evaporative demand

    PubMed Central

    Barsugli, Joseph J.; Hobbins, Michael T.; Kumar, Sanjiv

    2017-01-01

    Several studies have projected increases in drought severity, extent and duration in many parts of the world under climate change. We examine sources of uncertainty arising from the methodological choices for the assessment of future drought risk in the continental US (CONUS). One such uncertainty is in the climate models’ expression of evaporative demand (E0), which is not a direct climate model output but has been traditionally estimated using several different formulations. Here we analyze daily output from two CMIP5 GCMs to evaluate how differences in E0 formulation, treatment of meteorological driving data, choice of GCM, and standardization of time series influence the estimation of E0. These methodological choices yield different assessments of spatio-temporal variability in E0 and different trends in 21st century drought risk. First, we estimate E0 using three widely used E0 formulations: Penman-Monteith; Hargreaves-Samani; and Priestley-Taylor. Our analysis, which primarily focuses on the May-September warm-season period, shows that E0 climatology and its spatial pattern differ substantially between these three formulations. Overall, we find higher magnitudes of E0 and its interannual variability using Penman-Monteith, in particular for regions like the Great Plains and southwestern US where E0 is strongly influenced by variations in wind and relative humidity. When examining projected changes in E0 during the 21st century, there are also large differences among the three formulations, particularly the Penman-Monteith relative to the other two formulations. The 21st century E0 trends, particularly in percent change and standardized anomalies of E0, are found to be sensitive to the long-term mean value and the amplitude of interannual variability, i.e. if the magnitude of E0 and its interannual variability are relatively low for a particular E0 formulation, then the normalized or standardized 21st century trend based on that formulation is amplified relative to other formulations. This is the case for the use of Hargreaves-Samani and Priestley-Taylor, where future E0 trends are comparatively much larger than for Penman-Monteith. When comparing Penman-Monteith E0 responses between different choices of input variables related to wind speed, surface roughness, and net radiation, we found differences in E0 trends, although these choices had a much smaller influence on E0 trends than did the E0 formulation choices. These methodological choices and specific climate model selection, also have a large influence on the estimation of trends in standardized drought indices used for drought assessment operationally. We find that standardization tends to amplify divergences between the E0 trends calculated using different E0 formulations, because standardization is sensitive to both the climatology and amplitude of interannual variability of E0. For different methodological choices and GCM output considered in estimating E0, we examine potential sources of uncertainty in 21st century trends in the Standardized Precipitation Evapotranspiration Index (SPEI) and Evaporative Demand Drought Index (EDDI) over selected regions of the CONUS to demonstrate the practical implications of these methodological choices for the quantification of drought risk under climate change. PMID:28301603

  1. Soil Bulk Density by Soil Type, Land Use and Data Source: Putting the Error in SOC Estimates

    NASA Astrophysics Data System (ADS)

    Wills, S. A.; Rossi, A.; Loecke, T.; Ramcharan, A. M.; Roecker, S.; Mishra, U.; Waltman, S.; Nave, L. E.; Williams, C. O.; Beaudette, D.; Libohova, Z.; Vasilas, L.

    2017-12-01

    An important part of SOC stock and pool assessment is the measurement, estimation, and application of bulk density. The concept of bulk density is relatively simple (the mass of soil in a given volume), but in practice bulk density can be difficult to measure due to logistical and methodological constraints. While many estimates of SOC pools use legacy data, few concerted efforts have been made to assess the process used to convert laboratory carbon concentration measurements and bulk density collections into volumetrically based SOC estimates. The methodologies used are particularly sensitive in wetlands and organic soils with high amounts of carbon and very low bulk densities. We will present an analysis across four datasets: NCSS - the National Cooperative Soil Survey Characterization dataset, RaCA - the Rapid Carbon Assessment sample dataset, NWCA - the National Wetland Condition Assessment, and ISCN - the International Soil Carbon Network. The relationship between bulk density and soil organic carbon will be evaluated by dataset and by land use/land cover information. Prediction methods (both regression and machine learning) will be compared and contrasted across datasets and available input information. The assessment and application of bulk density, including modeling, aggregation, and error propagation, will be evaluated. Finally, recommendations will be made about both the use of new data in soil survey products (such as SSURGO) and the use of that information as legacy data in SOC pool estimates.
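
    The sensitivity of SOC stocks to bulk density follows directly from the standard volumetric conversion: stock per layer equals carbon concentration times bulk density times layer thickness, reduced for coarse fragments. The sketch below uses illustrative values to show how a small bulk-density difference propagates directly into the stock estimate; it is a generic illustration, not code from the datasets named above.

```python
# Minimal sketch of the volumetric SOC stock conversion for one soil layer.
def soc_stock_mg_per_ha(oc_percent: float, bulk_density_g_cm3: float,
                        depth_cm: float, coarse_frag_fraction: float = 0.0) -> float:
    """Organic carbon stock of one layer in Mg C per hectare."""
    return oc_percent * bulk_density_g_cm3 * depth_cm * (1.0 - coarse_frag_fraction)

# Example: the same 30 cm layer with 2% OC under two plausible bulk densities.
for bd in (1.1, 1.3):
    print(bd, round(soc_stock_mg_per_ha(2.0, bd, 30.0, 0.05), 1))
```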

  2. A revised ground-motion and intensity interpolation scheme for shakemap

    USGS Publications Warehouse

    Worden, C.B.; Wald, D.J.; Allen, T.I.; Lin, K.; Garcia, D.; Cua, G.

    2010-01-01

    We describe a weighted-average approach for incorporating various types of data (observed peak ground motions and intensities and estimates from ground-motion prediction equations) into the ShakeMap ground motion and intensity mapping framework. This approach represents a fundamental revision of our existing ShakeMap methodology. In addition, the increased availability of near-real-time macroseismic intensity data, the development of new relationships between intensity and peak ground motions, and new relationships to directly predict intensity from earthquake source information have facilitated the inclusion of intensity measurements directly into ShakeMap computations. Our approach allows for the combination of (1) direct observations (ground-motion measurements or reported intensities), (2) observations converted from intensity to ground motion (or vice versa), and (3) estimated ground motions and intensities from prediction equations or numerical models. Critically, each of the aforementioned data types must include an estimate of its uncertainties, including those caused by scaling the influence of observations to surrounding grid points and those associated with estimates given an unknown fault geometry. The ShakeMap ground-motion and intensity estimates are an uncertainty-weighted combination of these various data and estimates. A natural by-product of this interpolation process is an estimate of total uncertainty at each point on the map, which can be vital for comprehensive inventory loss calculations. We perform a number of tests to validate this new methodology and find that it produces a substantial improvement in the accuracy of ground-motion predictions over empirical prediction equations alone.
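
    A minimal sketch of the general principle of an uncertainty-weighted combination of estimates at a single grid point (not the full ShakeMap algorithm); the PGA values and uncertainties are hypothetical.

```python
import numpy as np

def combine(values, sigmas):
    """Inverse-variance weighted combination of estimates, with a total uncertainty.

    values : ground-motion (or intensity) estimates at one grid point, e.g. from
             observations, converted observations, and prediction equations
    sigmas : one-standard-deviation uncertainty attached to each estimate
    """
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    combined = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    total_sigma = np.sqrt(1.0 / np.sum(w))   # uncertainty of the combined estimate
    return combined, total_sigma

# Hypothetical PGA estimates (in g) at one grid point
print(combine(values=[0.21, 0.30, 0.26], sigmas=[0.05, 0.15, 0.10]))
```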

  3. Coal resources available for development; a methodology and pilot study

    USGS Publications Warehouse

    Eggleston, Jane R.; Carter, M. Devereux; Cobb, James C.

    1990-01-01

    Coal accounts for a major portion of our Nation's energy supply in projections for the future. A demonstrated reserve base of more than 475 billion short tons, as the Department of Energy currently estimates, indicates that, on the basis of today's rate of consumption, the United States has enough coal to meet projected energy needs for almost 200 years. However, the traditional procedures used for estimating the demonstrated reserve base do not account for many environmental and technological restrictions placed on coal mining. A new methodology has been developed to determine the quantity of coal that might actually be available for mining under current and foreseeable conditions. This methodology is unique in its approach, because it applies restrictions to the coal resource before it is mined. Previous methodologies incorporated restrictions into the recovery factor (a percentage), which was then globally applied to the reserve (minable coal) tonnage to derive a recoverable coal tonnage. None of the previous methodologies define the restrictions and their area and amount of impact specifically. Because these restrictions and their impacts are defined in this new methodology, it is possible to achieve more accurate and specific assessments of available resources. This methodology has been tested in a cooperative project between the U.S. Geological Survey and the Kentucky Geological Survey on the Matewan 7.5-minute quadrangle in eastern Kentucky. Pertinent geologic, mining, land-use, and technological data were collected, assimilated, and plotted. The National Coal Resources Data System was used as the repository for data, and its geographic information system software was applied to these data to eliminate restricted coal and quantify that which is available for mining. This methodology does not consider recovery factors or the economic factors that would be considered by a company before mining. Results of the pilot study indicate that, of the estimated original 986.5 million short tons of coal resources in Kentucky's Matewan quadrangle, 13 percent has been mined, 2 percent is restricted by land-use considerations, and 23 percent is restricted by technological considerations. This leaves an estimated 62 percent of the original resource, or approximately 612 million short tons available for mining. However, only 44 percent of this available coal (266 million short tons) will meet current Environmental Protection Agency new-source performance standards for sulfur emissions from electric generating plants in the United States. In addition, coal tonnage lost during mining and cleaning would further reduce the amount of coal actually arriving at the market.
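
    The quoted percentages can be checked with a short calculation; the small difference from the report's 266 million short tons reflects rounding in the cited figures.

```python
original = 986.5                        # million short tons, Matewan quadrangle
mined, land_use, technology = 0.13, 0.02, 0.23

available_frac = 1.0 - (mined + land_use + technology)
available = available_frac * original
compliant = 0.44 * available            # meets new-source performance standards

print(round(available_frac * 100), round(available), round(compliant))
# -> 62 612 269  (the report cites ~612 and ~266 million short tons)
```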

  4. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    NASA Astrophysics Data System (ADS)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends, and we rigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
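
    A minimal sketch of the BMC mixing idea for a two-end-member system: end-member uncertainty is propagated by Monte Carlo sampling and the source fraction is constrained by the observed mixture. All isotope values are hypothetical, and this is a simplification of the authors' multicomponent model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical delta-18O end-member signatures (per mil): mean and 1-sigma uncertainty
snow_mu, snow_sd = -22.0, 1.0
ice_mu, ice_sd = -18.0, 1.0
d18o_obs, d18o_err = -19.5, 0.3            # hypothetical bulk meltwater sample

n = 200_000
f = rng.uniform(0.0, 1.0, n)               # prior on the snow-melt fraction
em_snow = rng.normal(snow_mu, snow_sd, n)  # propagate end-member uncertainty
em_ice = rng.normal(ice_mu, ice_sd, n)
predicted = f * em_snow + (1.0 - f) * em_ice   # two-component mass balance

# Weight each draw by how well it reproduces the observation, then resample
like = np.exp(-0.5 * ((predicted - d18o_obs) / d18o_err) ** 2)
posterior_f = rng.choice(f, size=10_000, p=like / like.sum())
print(posterior_f.mean(), np.percentile(posterior_f, [2.5, 97.5]))
```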

  5. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    Seismic moment tensor is one of the most important source parameters defining the earthquake dimension and the style of the activated fault. Moment tensor catalogues are ordinarily used by geoscientists; however, few attempts have been made to assess the possible impacts of moment magnitude uncertainties upon their own analysis. The 2012 May 20 Emilia mainshock is a representative event since it is defined in the literature with a moment magnitude value (Mw) spanning between 5.63 and 6.12. An uncertainty of ~0.5 units in magnitude leads to a controversial knowledge of the real size of the event. The possible uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution for seismic hazard assessment, Coulomb stress transfer determination and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, the epicentral distance and the azimuth of the stations used. We emphasize that the estimate of seismic moment from moment tensor solutions, as well as the estimate of the other kinematic source parameters, cannot be considered an absolute value and must be reported with the related uncertainties, in a reproducible framework characterized by disclosed assumptions and explicit processing workflows.
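
    For context, the quoted Mw 5.63-6.12 spread can be translated into scalar seismic moment with the standard moment magnitude relation; the sketch below uses the Hanks-Kanamori convention.

```python
from math import log10

def mw_from_m0(m0_newton_meters):
    """Moment magnitude from scalar seismic moment (Hanks-Kanamori convention, M0 in N*m)."""
    return (2.0 / 3.0) * (log10(m0_newton_meters) - 9.1)

def m0_from_mw(mw):
    return 10 ** (1.5 * mw + 9.1)

# The Mw 5.63-6.12 spread quoted for the Emilia mainshock corresponds to a
# factor of roughly 5 in scalar moment, which is why the solution variability matters.
print(m0_from_mw(6.12) / m0_from_mw(5.63))
```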

  6. Female genital mutilation/cutting in Italy: an enhanced estimation for first generation migrant women based on 2016 survey data.

    PubMed

    Ortensi, Livia Elisa; Farina, Patrizia; Leye, Els

    2018-01-12

    Migration flows of women from Female Genital Mutilation/Cutting practicing countries have generated a need for data on women potentially affected by Female Genital Mutilation/Cutting. This paper presents enhanced estimates for foreign-born women and asylum seekers in Italy in 2016, with the aim of supporting resource planning and policy making, and advancing the methodological debate on estimation methods. The estimates build on the most recent methodological developments in direct and indirect estimation of Female Genital Mutilation/Cutting for non-practicing countries. Direct estimation of prevalence was performed for 9 communities using the results of the FGM-Prev survey, held in Italy in 2016. Prevalence for communities not involved in the FGM-Prev survey was estimated using the 'extrapolation of FGM/C countries prevalence data' method, with corrections according to the selection hypothesis. It is estimated that 60 to 80 thousand foreign-born women aged 15 and over with Female Genital Mutilation/Cutting were present in Italy in 2016. We also estimated the presence of around 11 to 13 thousand cut women aged 15 and over among asylum seekers to Italy in 2014-2016. Due to the long-established presence of female migrants from some practicing communities, Female Genital Mutilation/Cutting is also emerging as an issue among women aged 60 and over from selected communities. Female Genital Mutilation/Cutting is an additional source of concern for slightly more than 60% of women seeking asylum. Reliable estimates on Female Genital Mutilation/Cutting at country level are important for evidence-based policy making and service planning. This study suggests that indirect estimations cannot fully replace direct estimations, even if corrections for migrant socioeconomic selection can be implemented to reduce the bias.
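
    A minimal sketch of the extrapolation-of-country-prevalence logic: counts of resident women by country of origin are multiplied by origin-country prevalence, with an optional correction for migrant selection. All counts, prevalences, and correction factors below are hypothetical placeholders.

```python
# Hypothetical counts of foreign-born women aged 15+ by country of origin, and
# FGM/C prevalence in the country of origin; direct survey estimates (as in the
# FGM-Prev communities) would replace these where available.
women_by_origin = {"A": 12_000, "B": 8_500, "C": 3_000}
prevalence      = {"A": 0.75,   "B": 0.40,  "C": 0.10}
selection_correction = {"A": 0.9, "B": 0.9, "C": 0.9}   # migrants may differ from origin population

estimated_cases = sum(
    women_by_origin[c] * prevalence[c] * selection_correction[c] for c in women_by_origin
)
print(round(estimated_cases))
```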

  7. A Life-Cycle Cost Estimating Methodology for NASA-Developed Air Traffic Control Decision Support Tools

    NASA Technical Reports Server (NTRS)

    Wang, Jianzhong Jay; Datta, Koushik; Landis, Michael R. (Technical Monitor)

    2002-01-01

    This paper describes the development of a life-cycle cost (LCC) estimating methodology for air traffic control Decision Support Tools (DSTs) under development by the National Aeronautics and Space Administration (NASA), using a combination of parametric, analogy, and expert opinion methods. There is no one standard methodology and technique that is used by NASA or by the Federal Aviation Administration (FAA) for LCC estimation of prospective Decision Support Tools. Some of the frequently used methodologies include bottom-up, analogy, top-down, parametric, expert judgement, and Parkinson's Law. The developed LCC estimating methodology can be visualized as a three-dimensional matrix where the three axes represent coverage, estimation, and timing. This paper focuses on the three characteristics of this methodology that correspond to the three axes.

  8. Assessing methane emission estimation methods based on atmospheric measurements from oil and gas production using LES simulations

    NASA Astrophysics Data System (ADS)

    Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.

    2017-12-01

    There are a wide variety of methods that have been proposed and used to estimate methane emissions from oil and gas production by using air composition and meteorology observations in conjunction with dispersion models. Although there has been some verification of these methodologies using controlled releases and concurrent atmospheric measurements, it is difficult to assess the accuracy of these methods for more realistic scenarios considering factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use a large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods; this is also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations of the Weather Research & Forecasting (WRF) model at 10 m horizontal grid-spacing covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are set up in the domain following a realistic distribution and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition and realistic operational conditions. The system is set up to allow assessments under different scenarios such as normal operations, during liquids unloading events, or during other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model. We will also show source estimation results from advanced methods such as variational inverse modeling, and Bayesian inference and stochastic sampling techniques. Future directions including other types of observations, other hydrocarbons being considered, and assessment of additional emission estimation methods will be discussed.
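
    A minimal sketch of the observation step of such an OSSE: the synthetic "truth" from the simulation is sampled at sensor locations and perturbed with assumed instrument noise before being handed to the estimation methods. The enhancement values and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical LES methane field sampled along a downwind transect (ppm above background)
true_enhancement = np.array([0.02, 0.08, 0.35, 0.60, 0.33, 0.07, 0.01])

instrument_sigma = 0.02     # ppm, assumed 1-sigma analyzer noise
synthetic_obs = true_enhancement + rng.normal(0.0, instrument_sigma, true_enhancement.size)

# These noisy "observations" are what the candidate source-term estimation
# methods see, so their errors can be assessed against the known emissions.
print(np.round(synthetic_obs, 3))
```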

  9. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    USGS Publications Warehouse

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
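
    As a reminder of the kind of rate model that underlies such source characterizations, a Gutenberg-Richter relation gives cumulative annual rates from a- and b-values; the values below are hypothetical, not NSHM parameters.

```python
def gr_annual_rate(m, a=4.0, b=1.0):
    """Cumulative annual rate of earthquakes with magnitude >= m
    from a Gutenberg-Richter relation log10 N = a - b*m (hypothetical a, b)."""
    return 10 ** (a - b * m)

for m in (5.0, 6.0, 7.0):
    print(m, gr_annual_rate(m))   # rate drops tenfold per magnitude unit when b = 1
```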

  10. Can weekly noise levels of urban road traffic, as predominant noise source, estimate annual ones?

    PubMed

    Prieto Gajardo, Carlos; Barrigón Morillas, Juan Miguel; Rey Gozalo, Guillermo; Vílchez-Gómez, Rosendo

    2016-11-01

    The effects of noise pollution on human quality of life and health were recognised by the World Health Organisation a long time ago. There is a crucial dilemma in the study of urban noise when one is looking for proven methodologies that can, on the one hand, increase the quality of predictions and, on the other hand, save resources in the spatial and temporal sampling. The temporal structure of urban noise is studied in this work from a different point of view. This methodology, based on Fourier analysis, is applied to several week-long measurements of urban noise, mainly from road traffic, carried out in two cities located on different continents and with different sociological life styles (Cáceres, Spain and Talca, Chile). Its capacity to predict annual noise levels from weekly measurements is studied. The relation between this methodology and the categorisation method is also analysed.
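
    A minimal sketch of two ingredients of this kind of analysis: energy-averaging of sound levels (so a week can be compared with a year) and a Fourier view of the weekly record's periodic structure. The synthetic level series below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hourly LAeq series for one year with daily and weekly structure (dB)
hours = np.arange(365 * 24)
annual = (65 + 5 * np.sin(2 * np.pi * hours / 24)
          + 2 * np.sin(2 * np.pi * hours / (24 * 7))
          + rng.normal(0, 1.5, hours.size))

def energy_average(levels_db):
    """Equivalent (energy-averaged) level of a set of dB values."""
    return 10 * np.log10(np.mean(10 ** (levels_db / 10)))

week = annual[: 7 * 24]
print(energy_average(week), energy_average(annual))   # how well one week approximates the year

# Fourier view: dominant periodicity of the weekly record
spectrum = np.abs(np.fft.rfft(week - week.mean()))
freqs = np.fft.rfftfreq(week.size, d=1.0)              # cycles per hour
print(1 / freqs[spectrum.argmax()])                     # ~24 h period dominates
```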

  11. A methodology to quantify the release of spent nuclear fuel from dry casks during security-related scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durbin, Samuel G.; Luna, Robert Earl

    Assessing the risk to the public and the environment from a release of radioactive material produced by accidental or purposeful forces/environments is an important aspect of the regulatory process in many facets of the nuclear industry. In particular, the transport and storage of radioactive materials is of special concern to the public, especially with regard to potential sabotage acts that might be undertaken by terror groups to cause injuries, panic, and/or economic consequences to a nation. For many such postulated attacks, no breach in the robust cask or storage module containment is expected to occur. However, there exists evidence that some hypothetical attack modes can penetrate and cause a release of radioactive material. This report is intended as an unclassified overview of the methodology for release estimation as well as a guide to useful resource data from unclassified sources and relevant analysis methods for the estimation process.

  12. The Leeb Hardness Test for Rock: An Updated Methodology and UCS Correlation

    NASA Astrophysics Data System (ADS)

    Corkum, A. G.; Asiri, Y.; El Naggar, H.; Kinakin, D.

    2018-03-01

    The Leeb hardness test (LHT, with test value L_D) is a rebound hardness test, originally developed for metals, that has been correlated with the Unconfined Compressive Strength (test value σ_c) of rock by several authors. The tests can be carried out rapidly, conveniently and nondestructively on core and block samples or on rock outcrops. This makes the relatively small LHT device convenient for field tests. The present study compiles test data from literature sources and presents new laboratory testing carried out by the authors to develop a substantially expanded database with wide-ranging rock types. In addition, the number of impacts that should be averaged to comprise a "test result" was revisited along with the issue of test specimen size. Correlation for L_D and σ_c for various rock types is provided along with recommended testing methodology. The accuracy of correlated σ_c estimates was assessed and reasonable correlations were observed between L_D and σ_c. The study findings show that LHT can be useful particularly for field estimation of σ_c and offers a significant improvement over the conventional field estimation methods outlined by the ISRM (e.g., hammer blows). This test is rapid and simple, with relatively low equipment costs, and provides a reasonably accurate estimate of σ_c.
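
    A minimal sketch of deriving an L_D-σ_c correlation by fitting a power law in log-log space; the paired values below are hypothetical, not the study's database.

```python
import numpy as np

# Hypothetical paired measurements: mean Leeb hardness L_D and laboratory UCS (MPa)
ld = np.array([300.0, 400.0, 500.0, 600.0, 700.0, 800.0])
ucs = np.array([12.0, 30.0, 55.0, 90.0, 140.0, 200.0])

# Fit ucs = a * ld**b by linear regression in log-log space
b, log_a = np.polyfit(np.log(ld), np.log(ucs), 1)
a = np.exp(log_a)

# Predicted UCS for a hypothetical field reading of L_D = 650
print(a, b, a * 650.0 ** b)
```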

  13. Effect of Broadband Nature of Marine Mammal Echolocation Clicks on Click-Based Population Density Estimates

    DTIC Science & Technology

    2014-09-30

    research will focus initially on beaked whales (Blainville’s or Cuvier’s), for which high quality click recordings of clicks are available from DTAG...The same methodology will be applied also to other species such as sperm whale (Physeter macrocephalus) (whose high source level assures long range...Thomas, University of St. Andrews). REFERENCES Gillespie, D. and Leaper, R. (1996). Detection of sperm whale Physeter macrocephalus clicks and

  14. Jobs and Economic Development Impact (JEDI) Model: Offshore Wind User Reference Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lantz, E.; Goldberg, M.; Keyser, D.

    2013-06-01

    The Offshore Wind Jobs and Economic Development Impact (JEDI) model, developed by NREL and MRG & Associates, is a spreadsheet-based input-output tool. JEDI is meant to be a user-friendly and transparent tool to estimate potential economic impacts supported by the development and operation of offshore wind projects. This guide describes how to use the model as well as technical information such as methodology, limitations, and data sources.

  15. Estimation of bathymetric depth and slope from data assimilation of swath altimetry into a hydrodynamic model

    NASA Astrophysics Data System (ADS)

    Durand, Michael; Andreadis, Konstantinos M.; Alsdorf, Douglas E.; Lettenmaier, Dennis P.; Moller, Delwyn; Wilson, Matthew

    2008-10-01

    The proposed Surface Water and Ocean Topography (SWOT) mission would provide measurements of water surface elevation (WSE) for characterization of storage change and discharge. River channel bathymetry is a significant source of uncertainty in estimating discharge from WSE measurements, however. In this paper, we demonstrate an ensemble-based data assimilation (DA) methodology for estimating bathymetric depth and slope from WSE measurements and the LISFLOOD-FP hydrodynamic model. We performed two proof-of-concept experiments using synthetically generated SWOT measurements. The experiments demonstrated that bathymetric depth and slope can be estimated to within 50 cm and 3.0 microradians, respectively, using SWOT WSE measurements, within the context of our DA and modeling framework. We found that channel bathymetry estimation accuracy is relatively insensitive to SWOT measurement error, because uncertainty in LISFLOOD-FP inputs (such as channel roughness and upstream boundary conditions) is likely to be of greater magnitude than measurement error.
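
    A minimal sketch of the ensemble Kalman-type update at the core of such a DA scheme, with a toy linear stand-in for the hydrodynamic model; all numbers (prior depth spread, WSE relation, observation error) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Prior ensemble of the uncertain state (channel depth, m) and the WSE each member
# predicts through a toy linear stand-in for the hydrodynamic model.
n_ens = 100
depth_ens = rng.normal(6.0, 1.5, n_ens)
wse_ens = 102.0 - 0.4 * depth_ens + rng.normal(0, 0.05, n_ens)

wse_obs, obs_err = 99.8, 0.10        # synthetic SWOT WSE observation and its error (m)

# Ensemble Kalman-type update: gain built from ensemble covariances
cov_xy = np.cov(depth_ens, wse_ens)[0, 1]
var_y = np.var(wse_ens, ddof=1) + obs_err ** 2
gain = cov_xy / var_y

perturbed_obs = wse_obs + rng.normal(0, obs_err, n_ens)
depth_post = depth_ens + gain * (perturbed_obs - wse_ens)
print(depth_ens.mean(), depth_post.mean(), depth_post.std())
```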

  16. Estimating Traveler Populations at Airport and Cruise Terminals for Population Distribution and Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jochem, Warren C; Sims, Kelly M; Bright, Eddie A

    In recent years, uses of high-resolution population distribution databases have increased steadily for environmental, socioeconomic, public health, and disaster-related research and operations. With the development of daytime population distribution, the temporal resolution of such databases has been improved. However, the lack of incorporation of transitional population, namely business and leisure travelers, leaves a significant population unaccounted for within critical infrastructure networks, such as at transportation hubs. This paper presents two general methodologies for estimating passenger populations in airport and cruise port terminals at a high temporal resolution which can be incorporated into existing population distribution models. The methodologies are geographically scalable and are based on publicly available databases, including novel near-real-time sources of flight activity from the Internet; they demonstrate how two different transportation hubs with disparate temporal population dynamics can be modeled. The airport population estimation model shows great potential for rapid implementation for a large collection of airports on a national scale, and the results suggest reasonable accuracy in the estimated passenger traffic. By incorporating population dynamics at high temporal resolutions into population distribution models, we hope to improve the estimates of populations exposed to or at risk from disasters, thereby improving emergency planning and response, and leading to more informed policy decisions.

  17. Combining optimization methods with response spectra curve-fitting toward improved damping ratio estimation

    NASA Astrophysics Data System (ADS)

    Brewick, Patrick T.; Smyth, Andrew W.

    2016-12-01

    The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
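
    A simplified illustration of estimating a damping ratio by least-squares fitting a single-degree-of-freedom PSD shape to a modal spectrum (the staged pattern-search and clustering scheme described above is more elaborate); the spectrum below is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_sdof_psd(f, fn, zeta, log_scale):
    """log10 of the PSD shape of a single-degree-of-freedom modal coordinate."""
    r = f / fn
    return log_scale - np.log10((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)

# Synthetic modal PSD around a 2.0 Hz mode with 1.5% damping, plus noise
rng = np.random.default_rng(5)
f = np.linspace(1.5, 2.5, 200)
noisy = log_sdof_psd(f, 2.0, 0.015, 0.0) + rng.normal(0, 0.05, f.size)

(fn, zeta, log_scale), _ = curve_fit(log_sdof_psd, f, noisy, p0=(1.9, 0.05, 0.0))
print(fn, zeta)    # recovered natural frequency and damping ratio
```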

  18. The chlorine budget of the present-day atmosphere - A modeling study

    NASA Technical Reports Server (NTRS)

    Weisenstein, Debra K.; Ko, Malcolm K. W.; Sze, Nien-Dak

    1992-01-01

    The contribution of source gases to the total amount of inorganic chlorine (ClY) is examined analytically with a time-dependent model employing 11 source gases. The source-gas emission data are described, and the modeling methodology is set forth with attention given to the data interpretation. The abundances and distributions are obtained for all 11 source gases with corresponding ClY production rates and mixing ratios. It is shown that the ClY production rate and the ClY mixing ratio for each source gas are spatially dependent, and the change in the relative contributions from 1950 to 1990 is given. Ozone changes in the past decade are characterized by losses in the polar and midlatitude lower stratosphere. The values for CFC-11, CCl4, and CH3CCl3 suggest that they are more evident in the lower stratosphere than is suggested by steady-state estimates based on surface concentrations.

  19. Electrophysiological Source Imaging: A Noninvasive Window to Brain Dynamics.

    PubMed

    He, Bin; Sohrabpour, Abbas; Brown, Emery; Liu, Zhongming

    2018-06-04

    Brain activity and connectivity are distributed in the three-dimensional space and evolve in time. It is important to image brain dynamics with high spatial and temporal resolution. Electroencephalography (EEG) and magnetoencephalography (MEG) are noninvasive measurements associated with complex neural activations and interactions that encode brain functions. Electrophysiological source imaging estimates the underlying brain electrical sources from EEG and MEG measurements. It offers increasingly improved spatial resolution and intrinsically high temporal resolution for imaging large-scale brain activity and connectivity on a wide range of timescales. Integration of electrophysiological source imaging and functional magnetic resonance imaging could further enhance spatiotemporal resolution and specificity to an extent that is not attainable with either technique alone. We review methodological developments in electrophysiological source imaging over the past three decades and envision its future advancement into a powerful functional neuroimaging technology for basic and clinical neuroscience applications.
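
    A minimal sketch of one classical source imaging estimator, the regularized minimum-norm solution, applied to a toy lead field; the matrix, active source index, and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

n_sensors, n_sources = 32, 200
L = rng.normal(size=(n_sensors, n_sources))      # hypothetical lead-field (gain) matrix
x_true = np.zeros(n_sources)
x_true[123] = 5.0                                 # one active source
y = L @ x_true + rng.normal(0, 0.1, n_sensors)    # simulated EEG/MEG measurement

# Minimum-norm estimate: x_hat = L^T (L L^T + lambda I)^-1 y
lam = 50.0
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
print(int(np.argmax(np.abs(x_hat))))              # should point at source index 123
```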

  20. Estimation of Solar Radiation on Building Roofs in Mountainous Areas

    NASA Astrophysics Data System (ADS)

    Agugiaro, G.; Remondino, F.; Stevanato, G.; De Filippi, R.; Furlanello, C.

    2011-04-01

    The aim of this study is estimating solar radiation on building roofs in complex mountain landscape areas. A multi-scale solar radiation estimation methodology is proposed that combines 3D data ranging from regional scale to the architectural one. Both the terrain and the nearby building shadowing effects are considered. The approach is modular and several alternative roof models, obtained by surveying and modelling techniques at varying level of detail, can be embedded in a DTM, e.g. that of an Alpine valley surrounded by mountains. The solar radiation maps obtained from raster models at different resolutions are compared and evaluated in order to obtain information regarding the benefits and disadvantages tied to each roof modelling approach. The solar radiation estimation is performed within the open-source GRASS GIS environment using r.sun and its ancillary modules.

  1. Adaptively Reevaluated Bayesian Localization (ARBL). A Novel Technique for Radiological Source Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.

    2015-01-19

    Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement over earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
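
    A minimal sketch of the grid-based Bayesian localization idea: Poisson log-likelihoods of the measured counts are computed for candidate source cells assuming an inverse-square detector response and, for simplicity, a known source strength. Geometry, strength, and background are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 1 km x 1 km search area on a 50 m grid
grid = np.linspace(0.0, 1000.0, 21)
gx, gy = np.meshgrid(grid, grid)

src_x, src_y = 620.0, 310.0        # hypothetical source position (m)
strength = 5.0e5                   # source term (counts * m^2 / s), assumed known here
background = 30.0                  # counts/s
alt = 80.0                         # flight altitude (m)

# Two flight legs; one count measurement per sample (inverse-square response + Poisson noise)
track_x = np.concatenate([np.linspace(0, 1000, 40), np.full(40, 500.0)])
track_y = np.concatenate([np.full(40, 400.0), np.linspace(0, 1000, 40)])
d2 = (track_x - src_x) ** 2 + (track_y - src_y) ** 2 + alt ** 2
counts = rng.poisson(background + strength / d2)

# Poisson log-likelihood of each candidate grid cell hosting the source
loglike = np.zeros_like(gx)
for i in range(gx.shape[0]):
    for j in range(gx.shape[1]):
        d2c = (track_x - gx[i, j]) ** 2 + (track_y - gy[i, j]) ** 2 + alt ** 2
        rate = background + strength / d2c
        loglike[i, j] = np.sum(counts * np.log(rate) - rate)

best = np.unravel_index(np.argmax(loglike), loglike.shape)
print(gx[best], gy[best])          # most likely grid cell (true source near x=620, y=310)
```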

  2. HIV, HCV, HBV, and syphilis among transgender women from Brazil

    PubMed Central

    Bastos, Francisco I.; Bastos, Leonardo Soares; Coutinho, Carolina; Toledo, Lidiane; Mota, Jurema Corrêa; Velasco-de-Castro, Carlos Augusto; Sperandei, Sandro; Brignol, Sandra; Travassos, Tamiris Severino; dos Santos, Camila Mattos; Malta, Monica Siqueira

    2018-01-01

    Abstract Different sampling strategies, analytic alternatives, and estimators have been proposed to better assess the characteristics of different hard-to-reach populations and their respective infection rates (as well as their sociodemographic characteristics, associated harms, and needs) in the context of studies based on respondent-driven sampling (RDS). Despite several methodological advances and hundreds of empirical studies implemented worldwide, some inchoate findings and methodological challenges remain. The in-depth assessment of the local structure of networks and the performance of the available estimators are particularly relevant when the target populations are sparse and highly stigmatized. In such populations, bottlenecks as well as other sources of biases (for instance, due to homophily and/or too sparse or fragmented groups of individuals) may be frequent, affecting the estimates. In the present study, data were derived from a cross-sectional, multicity RDS study, carried out in 12 Brazilian cities with transgender women (TGW). Overall, infection rates for HIV and syphilis were very high, with some variation between different cities. Notwithstanding, findings are of great concern, considering the fact that female TGW are not only very hard-to-reach but also face deeply-entrenched prejudice and have been out of the reach of most therapeutic and preventive programs and projects. We cross-compared findings adjusted using 2 estimators (the classic estimator usually known as estimator II, originally proposed by Volz and Heckathorn) and a brand new strategy to adjust data generated by RDS, partially based on Bayesian statistics, called for the sake of this paper, the RDS-B estimator. Adjusted prevalence was cross-compared with estimates generated by non-weighted analyses, using what has been called by us a naïve estimator or rough estimates. PMID:29794601

  3. HIV, HCV, HBV, and syphilis among transgender women from Brazil: Assessing different methods to adjust infection rates of a hard-to-reach, sparse population.

    PubMed

    Bastos, Francisco I; Bastos, Leonardo Soares; Coutinho, Carolina; Toledo, Lidiane; Mota, Jurema Corrêa; Velasco-de-Castro, Carlos Augusto; Sperandei, Sandro; Brignol, Sandra; Travassos, Tamiris Severino; Dos Santos, Camila Mattos; Malta, Monica Siqueira

    2018-05-01

    Different sampling strategies, analytic alternatives, and estimators have been proposed to better assess the characteristics of different hard-to-reach populations and their respective infection rates (as well as their sociodemographic characteristics, associated harms, and needs) in the context of studies based on respondent-driven sampling (RDS). Despite several methodological advances and hundreds of empirical studies implemented worldwide, some inchoate findings and methodological challenges remain. The in-depth assessment of the local structure of networks and the performance of the available estimators are particularly relevant when the target populations are sparse and highly stigmatized. In such populations, bottlenecks as well as other sources of biases (for instance, due to homophily and/or too sparse or fragmented groups of individuals) may be frequent, affecting the estimates. In the present study, data were derived from a cross-sectional, multicity RDS study, carried out in 12 Brazilian cities with transgender women (TGW). Overall, infection rates for HIV and syphilis were very high, with some variation between different cities. Notwithstanding, findings are of great concern, considering the fact that female TGW are not only very hard-to-reach but also face deeply-entrenched prejudice and have been out of the reach of most therapeutic and preventive programs and projects. We cross-compared findings adjusted using 2 estimators (the classic estimator usually known as estimator II, originally proposed by Volz and Heckathorn) and a brand new strategy to adjust data generated by RDS, partially based on Bayesian statistics, called for the sake of this paper, the RDS-B estimator. Adjusted prevalence was cross-compared with estimates generated by non-weighted analyses, using what has been called by us a naïve estimator or rough estimates.
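
    A minimal sketch of the classic RDS-II (Volz-Heckathorn) weighting referred to above, compared with a naive unweighted prevalence; degrees and infection statuses are hypothetical. The RDS-B Bayesian adjustment described in the paper is not reproduced here.

```python
# RDS-II style estimator: respondents are weighted by the inverse of their reported
# network size (degree), since high-degree people are more likely to be recruited.
degrees  = [10, 25, 5, 40, 15, 8, 30, 12]    # reported personal network sizes (hypothetical)
positive = [1,  0,  1, 0,  1,  1, 0,  1]     # infection status (1 = positive, hypothetical)

weights = [1.0 / d for d in degrees]
prevalence_rds2 = sum(w * y for w, y in zip(weights, positive)) / sum(weights)
prevalence_naive = sum(positive) / len(positive)
print(prevalence_rds2, prevalence_naive)
```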

  4. Challenges to inferring causality from viral information dispersion in dynamic social networks

    NASA Astrophysics Data System (ADS)

    Ternovski, John

    2014-06-01

    Understanding the mechanism behind large-scale information dispersion through complex networks has important implications for a variety of industries ranging from cyber-security to public health. With the unprecedented availability of public data from online social networks (OSNs) and the low cost nature of most OSN outreach, randomized controlled experiments, the "gold standard" of causal inference methodologies, have been used with increasing regularity to study viral information dispersion. And while these studies have dramatically furthered our understanding of how information disseminates through social networks by isolating causal mechanisms, there are still major methodological concerns that need to be addressed in future research. This paper delineates why modern OSNs are markedly different from traditional sociological social networks and why these differences present unique challenges to experimentalists and data scientists. The dynamic nature of OSNs is particularly troublesome for researchers implementing experimental designs, so this paper identifies major sources of bias arising from network mutability and suggests strategies to circumvent and adjust for these biases. This paper also discusses the practical considerations of data quality and collection, which may adversely impact the efficiency of the estimator. The major experimental methodologies used in the current literature on virality are assessed at length, and their strengths and limits identified. Other, as-yet-unsolved threats to the efficiency and unbiasedness of causal estimators--such as missing data--are also discussed. This paper integrates methodologies and learnings from a variety of fields under an experimental and data science framework in order to systematically consolidate and identify current methodological limitations of randomized controlled experiments conducted in OSNs.

  5. Different approaches to assess the environmental performance of a cow manure biogas plant

    NASA Astrophysics Data System (ADS)

    Torrellas, Marta; Burgos, Laura; Tey, Laura; Noguerol, Joan; Riau, Victor; Palatsi, Jordi; Antón, Assumpció; Flotats, Xavier; Bonmatí, August

    2018-03-01

    In intensive livestock production areas, farmers must apply manure management systems to comply with governmental regulations. Biogas plants, as a source of renewable energy, have the potential to reduce environmental impacts compared with other manure management practices. Nevertheless, manure processing at biogas plants also produces undesired gas emissions that should be considered. At present, available emission calculation methods only partially cover the emissions produced at a biogas plant, which complicates the preparation of life cycle inventories. The objective of this study is to characterise gaseous emissions: ammonia (NH3-N), methane (CH4), nitrous oxide (N2O, direct and indirect) and hydrogen sulphide (H2S) from the anaerobic co-digestion of cow manure, by using different approaches for preparing gaseous emission inventories, and to compare the different methodologies used. The chosen scenario for the study is a biogas plant located next to a dairy farm in the north of Catalonia, Spain. Emissions were calculated by two methods: field measurements and estimation following international guidelines. Intergovernmental Panel on Climate Change (IPCC) guidelines were adapted to estimate emissions for the specific situation according to Tier 1, Tier 2 and Tier 3 approaches. Total air emissions at the biogas plant were calculated from the emissions produced at the three main manure storage facilities on the plant: influent storage, liquid fraction storage, and the solid fraction storage of the digestate. Results showed that most of the emissions were produced in the liquid fraction storage. Comparing measured emissions with estimated emissions, total emission results for NH3, CH4, indirect N2O and H2S were of the same order of magnitude for both methodologies, while total measured emissions of direct N2O were one order of magnitude higher than the estimates. A Monte Carlo analysis was carried out to examine the uncertainties of emissions determined from experimental data, providing probability distribution functions. Four emission inventories were developed with the different methodologies used. Estimation methods proved to be a useful tool to determine emissions when field sampling is not possible. Nevertheless, it was not possible to establish which methodology is more reliable. Therefore, more measurements at different biogas plants should be evaluated to validate the methodologies more precisely.
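
    A minimal sketch of the Tier-1-style estimation logic (emission = activity data x emission factor, converted to CO2-equivalents); the activity level and emission factor are hypothetical placeholders, and only the GWP value is a standard AR5 figure.

```python
# Minimal Tier-1-style sketch: emission = activity data x emission factor.
# Values below are hypothetical placeholders, not the plant's measured data.
manure_treated_t = 12_000          # tonnes of cow manure digested per year
ef_ch4_kg_per_t = 1.5              # assumed CH4 emission factor for liquid-fraction storage
gwp_ch4 = 28                       # 100-year global warming potential of CH4 (IPCC AR5)

ch4_kg = manure_treated_t * ef_ch4_kg_per_t
co2e_t = ch4_kg * gwp_ch4 / 1000.0
print(ch4_kg, co2e_t)
```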

  6. Using the Delphi technique in economic evaluation: time to revisit the oracle?

    PubMed

    Simoens, S

    2006-12-01

    Although the Delphi technique has been commonly used as a data source in medical and health services research, its application in economic evaluation of medicines has been more limited. The aim of this study was to describe the methodology of the Delphi technique, to present a case for using the technique in economic evaluation, and to provide recommendations to improve such use. The literature was accessed through MEDLINE focusing on studies discussing the methodology of the Delphi technique and economic evaluations of medicines using the Delphi technique. The Delphi technique can be used to provide estimates of health care resources required and to modify such estimates when making inter-country comparisons. The Delphi technique can also contribute to mapping the treatment process under investigation, to identifying the appropriate comparator to be used, and to ensuring that the economic evaluation estimates cost-effectiveness rather than cost-efficacy. Ideally, economic evaluations of medicines should be based on real-patient data. In the absence of such data, evaluations need to incorporate the best evidence available by employing approaches such as the Delphi technique. Evaluations based on this approach should state the limitations, and explore the impact of the associated uncertainty in the results.

  7. Greenhouse gas emissions from reservoir water surfaces: A ...

    EPA Pesticide Factsheets

    Collectively, reservoirs created by dams are thought to be an important source of greenhouse gases (GHGs) to the atmosphere. So far, efforts to quantify, model, and manage these emissions have been limited by data availability and inconsistencies in methodological approach. Here we synthesize worldwide reservoir methane, carbon dioxide, and nitrous oxide emission data with three main objectives: (1) to generate a global estimate of GHG emissions from reservoirs, (2) to identify the best predictors of these emissions, and (3) to consider the effect of methodology on emission estimates. We estimate that GHG emissions from reservoir water surfaces account for 0.8 (0.5-1.2) Pg CO2-equivalents per year, equal to ~1.3% of all anthropogenic GHG emissions, with the majority (79%) of this forcing due to methane. We also discuss the potential for several alternative pathways such as dam degassing and downstream emissions to contribute significantly to overall GHG emissions. Although prior studies have linked reservoir GHG emissions to system age and latitude, we find that factors related to reservoir productivity are better predictors of emission. Finally, as methane contributed the most to total reservoir GHG emissions, it is important that future monitoring campaigns incorporate methane emission pathways, especially ebullition.

  8. Utilising social media contents for flood inundation mapping

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Dransch, Doris; Fohringer, Joachim; Kreibich, Heidi

    2016-04-01

    Data about the hazard and its consequences are scarce and not readily available during and shortly after a disaster. An information source which should be explored in a more efficient way is eyewitness accounts via social media. This research presents a methodology that leverages social media content to support rapid inundation mapping, including inundation extent and water depth in the case of floods. It uses quantitative data that are estimated from photos extracted from social media posts, integrated with established data. Due to the rapid availability of these posts compared to traditional data sources such as remote sensing data, areas affected by a flood, for example, can be determined quickly. Key challenges are to filter the large number of posts to a manageable amount of potentially useful inundation-related information, and to interpret and integrate the posts into mapping procedures in a timely manner. We present a methodology and a tool ("PostDistiller") to filter geo-located posts from social media services which include links to photos, and to further explore this spatially distributed, contextualized in situ information for inundation mapping. The June 2013 flood in Dresden is used as an application case study in which we evaluate the utilization of this approach and compare the resulting spatial flood patterns and inundation depths to 'traditional' data sources and mapping approaches like water level observations and remote sensing flood masks. The outcomes of the application case are encouraging. Strengths of the proposed procedure are that information for the estimation of inundation depth is rapidly available, particularly in urban areas where it is of high interest and of great value because alternative information sources like remote sensing data analysis do not perform very well. The uncertainty of derived inundation depth data and the uncontrollable availability of the information sources are major threats to the utility of the approach.

  9. Monitoring the size and protagonists of the drug market: combining supply and demand data sources and estimates.

    PubMed

    Rossi, Carla

    2013-06-01

    The size of the illicit drug market is an important indicator to assess the impact on society of an important part of the illegal economy and to evaluate drug policy and law enforcement interventions. The extent of illicit drug use and of the drug market can essentially only be estimated by indirect methods, based on indirect measures and on data from various sources such as administrative data sets and surveys. The combined use of several methodologies and data sets makes it possible to reduce the biases and inaccuracies of estimates obtained on the basis of each of them separately. This approach has been applied to Italian data. The estimation methods applied are capture-recapture methods with latent heterogeneity and multiplier methods. Several data sets have been used, both administrative and survey data sets. First, the retail dealer prevalence has been estimated on the basis of administrative data, then the user prevalence by multiplier methods. Using information about the behaviour of dealers and consumers from survey data, the average amount of a substance used or sold and the average unit cost have been estimated, which allows the size of the drug market to be estimated. The estimates have been obtained using a supply-side approach and a demand-side approach and have been compared. These results are in turn used for estimating the interception rate for the different substances, in terms of the value of the substance seized with respect to the total value of the substance to be sold at retail prices.
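
    A minimal sketch of the demand-side multiplier calculation: estimated users, average quantity per user, and retail unit price combine into a market value that can be cross-compared with a supply-side figure. All numbers are hypothetical.

```python
# Demand-side multiplier sketch for the retail market size of one substance.
# All numbers are hypothetical placeholders.
estimated_users = 300_000             # user prevalence (e.g., from capture-recapture/multiplier methods)
mean_amount_g_per_user_year = 60.0    # average quantity used per user per year (survey-based)
retail_price_per_g = 40.0             # average retail unit price

demand_side_value = estimated_users * mean_amount_g_per_user_year * retail_price_per_g
print(f"{demand_side_value / 1e6:.0f} million (currency units) per year")

# A supply-side figure (dealers x average amount sold x price) can be computed the
# same way and cross-compared, as done in the study.
```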

  10. Effect of body composition methodology on heritability estimation of body fatness

    USDA-ARS?s Scientific Manuscript database

    Heritability estimates of human body fatness vary widely and the contribution of body composition methodology to this variability is unknown. The effect of body composition methodology on estimations of genetic and environmental contributions to body fatness variation was examined in 78 adult male ...

  11. Diver Down: Remote Sensing of Carbon Climate Feedbacks

    NASA Astrophysics Data System (ADS)

    Schimel, D.; Chatterjee, A.; Baker, D. F.; Basu, S.; Denning, A. S.; Schuh, A. E.; Crowell, S.; Jacobson, A. R.; Bowman, K. W.; Liu, J.; O'Dell, C.

    2016-12-01

    What controls the rate of increase of CO2 and CH4 in the atmosphere? It may seem self-evident but actually remains mysterious. The increases of CO2 and CH4 result from a combination of forcing from anthropogenic emissions and Earth System feedbacks that dampen or amplify the effects of those emissions on atmospheric concentrations. The fraction of anthropogenic CO2 remaining in the atmosphere has remained remarkably constant over the last 59 years but has shown recent dynamics; if it changes in the future, it will affect the climate impact of any given fossil fuel regime. While greenhouse gases affect the global atmosphere, their sources and sinks are remarkably heterogeneous in time and space, and traditional in situ observing systems do not provide the coverage and resolution to quantify carbon-climate feedbacks or reduce the uncertainty of model predictions. Here we describe a methodology for estimating critical carbon-climate feedback effects from current spaceborne XCO2 measurements, developed by the OCO-2 Flux Group, and applied to OCO-2 and GOSAT data. The methodology allows integration of the space-based carbon budgets with other global data sets, and exposes the impact of residual bias error on the estimated fluxes, allowing the uncertainty of the estimated feedbacks to be quantified. The approach is limited by the short time series currently available, but suggests dramatic changes to the carbon cycle over the recent past. We present the methodology, early results and implications for a future, sustained carbon observing system.

  12. Net anthropogenic nitrogen inputs and nitrogen fluxes from Indian watersheds: An initial assessment

    NASA Astrophysics Data System (ADS)

    Swaney, D. P.; Hong, B.; Paneer Selvam, A.; Howarth, R. W.; Ramesh, R.; Purvaja, R.

    2015-01-01

    In this paper, we apply an established methodology for estimating Net Anthropogenic Nitrogen Inputs (NANI) to India and its major watersheds. Our primary goal here is to provide initial estimates of major nitrogen inputs of NANI for India, at the country level and for major Indian watersheds, including data sources and parameter estimates, making some assumptions as needed in areas of limited data availability. Despite data limitations, we believe that it is clear that the main anthropogenic N source is agricultural fertilizer, which is being produced and applied at a growing rate, followed by N fixation associated with rice, leguminous crops, and sugar cane. While India appears to be a net exporter of N in food/feed as reported elsewhere (Lassaletta et al., 2013b), the balance of N associated with exports and imports of protein in food and feedstuffs is sensitive to protein content and somewhat uncertain. While correlating watershed N inputs with riverine N fluxes is problematic due in part to limited available riverine data, we have assembled some data for comparative purposes. We also suggest possible improvements in methods for future studies, and the potential for estimating riverine N fluxes to coastal waters.
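
    A minimal sketch of the NANI accounting: the budget is the sum of fertilizer, agricultural N fixation, atmospheric deposition, and net food/feed imports (negative for a net exporter). The component values are hypothetical placeholders.

```python
# NANI sketch: net anthropogenic nitrogen inputs to a watershed (kg N / km^2 / yr)
# as the sum of its major components. All values are hypothetical placeholders.
fertilizer = 4500.0
agricultural_fixation = 1200.0      # rice, legumes, sugar cane
atmospheric_deposition = 900.0
net_food_feed_import = -400.0       # negative when the watershed is a net exporter

nani = fertilizer + agricultural_fixation + atmospheric_deposition + net_food_feed_import
print(nani)
```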

  13. Innovative methodology for intercomparison of radionuclide calibrators using short half-life in situ prepared radioactive sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, P. A.; Santos, J. A. M., E-mail: joao.santos@ipoporto.min-saude.pt; Serviço de Física Médica do Instituto Português de Oncologia do Porto Francisco Gentil, EPE, Porto

    2014-07-15

    Purpose: An original radionuclide calibrator method for activity determination is presented. The method could be used for intercomparison surveys for short half-life radioactive sources used in Nuclear Medicine, such as 99mTc or most positron emission tomography radiopharmaceuticals. Methods: By evaluation of the resulting net optical density (netOD) using a standardized scanning method of irradiated Gafchromic XRQA2 film, a comparison of the netOD measurement with a previously determined calibration curve can be made and the difference between the tested radionuclide calibrator and a radionuclide calibrator used as reference device can be calculated. To estimate the total expected measurement uncertainties, a careful analysis of the methodology, for the case of 99mTc, was performed: reproducibility determination, scanning conditions, and possible fadeout effects. Since every factor of the activity measurement procedure can influence the final result, the method also evaluates correct syringe positioning inside the radionuclide calibrator. Results: As an alternative to using a calibrated source sent to the surveyed site, which requires a relatively long half-life of the nuclide, or sending a portable calibrated radionuclide calibrator, the proposed method uses a source prepared in situ. An indirect activity determination is achieved by the irradiation of a radiochromic film using 99mTc under strictly controlled conditions, and cumulated activity calculation from the initial activity and total irradiation time. The irradiated Gafchromic film and the irradiator, without the source, can then be sent to a National Metrology Institute for evaluation of the results. Conclusions: The methodology described in this paper was shown to have good potential for accurate (3%) radionuclide calibrator intercomparison studies for 99mTc between Nuclear Medicine centers without source transfer and can easily be adapted to other short half-life radionuclides.
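
    The cumulated activity referred to in the methods follows from integrating the decaying activity over the irradiation time; a minimal sketch with a hypothetical 99mTc activity is shown below.

```python
import math

def cumulated_activity_mbq_s(a0_mbq, half_life_s, irradiation_s):
    """Cumulated activity (MBq*s) of a decaying source irradiating a film for time T:
    integral of A0*exp(-lambda*t) dt from 0 to T = A0*(1 - exp(-lambda*T)) / lambda."""
    lam = math.log(2) / half_life_s
    return a0_mbq * (1.0 - math.exp(-lam * irradiation_s)) / lam

# Hypothetical example: 500 MBq of 99mTc (half-life ~6.01 h) irradiating for 2 h
print(cumulated_activity_mbq_s(500.0, 6.01 * 3600, 2 * 3600))
```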

  14. THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au

    Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramer-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments-the angular power spectrum and two-dimensional power spectrum-using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.

  15. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

    The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. Sensitive volume is estimated, by using Monte-Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single detector sensitivity, the method is accurate within statistical errors, comes at no added cost and requires minimal computational resources.
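
    A minimal sketch of the reweighting idea: a single set of generic injections is reused for several population models by importance-weighting each injection with the ratio of the target population density to the injection density. The 1-D parameter, detection model, and volumes are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(8)

# Generic injections drawn from a broad reference distribution over one toy parameter,
# with a detection flag from a mock search.
n = 100_000
theta = rng.uniform(5.0, 80.0, n)                      # injection distribution: uniform
p_inj = np.full(n, 1.0 / 75.0)                         # its probability density
found = rng.random(n) < np.clip(theta / 80.0, 0, 1)    # mock detection probability
v_astro = 10.0                                         # toy volume spanned by the injections

def sensitive_volume(p_pop):
    """Reweight the same injection set to a target population model."""
    w = p_pop(theta) / p_inj                           # importance weights
    return v_astro * np.sum(w * found) / np.sum(w)

# Two hypothetical population models over the same parameter range
flat = lambda t: np.full_like(t, 1.0 / 75.0)
steep = lambda t: (2.0 / (80.0 ** 2 - 5.0 ** 2)) * t   # density increasing with theta
print(sensitive_volume(flat), sensitive_volume(steep))
```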

  16. Airborne Quantification of Methane Emissions in the San Francisco Bay Area of California

    NASA Astrophysics Data System (ADS)

    Guha, A.; Newman, S.; Martien, P. T.; Young, A.; Hilken, H.; Faloona, I. C.; Conley, S.

    2017-12-01

    The Bay Area Air Quality Management District, the San Francisco Bay Area's air quality regulatory agency, has set a goal to reduce the region's greenhouse gas (GHG) emissions 80% below 1990 levels by 2050, consistent with the State of California's climate protection goal. The Air District maintains a regional GHG emissions inventory that includes emissions estimates and projections which influence the agency's programs and regulatory activities. The Air District is currently working to better characterize methane emissions in the GHG inventory through source-specific measurements, to resolve differences between top-down regional estimates (Fairley and Fischer, 2015; Jeong et al., 2016) and the bottom-up inventory. The Air District funded and participated in a study in Fall 2016 to quantify methane emissions from a variety of sources from an instrumented Mooney aircraft. This study included 40 hours of cylindrical vertical profile flights that combined methane and wind measurements to derive mass emission rates. Simultaneous measurements of ethane provided source-apportionment between fossil-based and biological methane sources. The facilities sampled included all five refineries in the region, five landfills, two dairy farms and three wastewater treatment plants. The calculated mass emission rates were compared to bottom-up rates generated by the Air District and to those from facility reports to the US EPA as part of the mandatory GHG reporting program. Carbon dioxide emission rates from refineries are found to be similar to bottom-up estimates for all sources, supporting the efficacy of the airborne measurement methodology. However, methane emission estimates from the airborne method showed significant differences for some source categories. For example, methane emission estimates based on airborne measurements were up to an order of magnitude higher for refineries, and up to five times higher for landfills compared to bottom-up methods, suggesting significant underestimation in the inventories and self-reported estimates. Future measurements over the same facilities will reveal whether there are seasonal and process-dependent trends in emissions. This will provide a basis for rule making and for designing mitigation and control actions.

  17. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of the area source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of such an area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed with the delta rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the square difference between the measured and modelled pollutant concentrations, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. The second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
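    The regularized formulation can be illustrated with a small synthetic example: given a source-receptor transition matrix M and measured concentrations c, the source strengths q minimize a misfit plus a regularization term. The sketch below uses a random transition matrix, a first-order Tikhonov penalty in place of the entropy regularizer described above, and a fixed regularization parameter rather than an L-curve choice; all values are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Regularized least-squares inversion for area-source strengths q from c ~= M q.
n_receptors, n_sources = 6, 20
M = rng.uniform(0.0, 1.0, (n_receptors, n_sources))        # transition matrix (toy)
q_true = np.abs(np.sin(np.linspace(0, np.pi, n_sources)))   # "true" source strengths
c_obs = M @ q_true + rng.normal(0.0, 0.01, n_receptors)     # synthetic observations

lam = 1e-2                                                  # regularization parameter (assumed)
D = np.diff(np.eye(n_sources), axis=0)                      # first-difference (smoothness) operator

def objective(q):
    misfit = c_obs - M @ q
    return misfit @ misfit + lam * np.sum((D @ q) ** 2)

res = minimize(objective, x0=np.ones(n_sources), method="L-BFGS-B",
               bounds=[(0.0, None)] * n_sources)            # non-negative strengths
print("estimated strengths:", np.round(res.x, 3))
```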

  18. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
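    For readers unfamiliar with greedy pursuit methods, the sketch below implements the generic subspace pursuit algorithm of Dai and Milenkovic (2009) on a toy sparse-recovery problem. It is the textbook algorithm, not the SPIGH hierarchical variant developed in the paper, and the random sensing matrix has none of the spatial correlation structure of real MEG lead fields.

```python
import numpy as np

rng = np.random.default_rng(29)

def subspace_pursuit(A, y, k, max_iter=20):
    """Recover a k-sparse x from y ~= A x (textbook subspace pursuit)."""
    def ls_on(support):
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = sol
        return x

    support = np.argsort(-np.abs(A.T @ y))[:k]      # initial support: k largest correlations
    x = ls_on(support)
    resid = y - A @ x
    for _ in range(max_iter):
        # Merge the current support with the k best-correlated new atoms, then prune back to k.
        candidates = np.union1d(support, np.argsort(-np.abs(A.T @ resid))[:k])
        x_tmp = ls_on(candidates)
        new_support = candidates[np.argsort(-np.abs(x_tmp[candidates]))[:k]]
        x_new = ls_on(new_support)
        new_resid = y - A @ x_new
        if np.linalg.norm(new_resid) >= np.linalg.norm(resid):
            break                                    # stop when the residual no longer shrinks
        support, x, resid = new_support, x_new, new_resid
    return x

m, n, k = 60, 200, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 2, k)
y = A @ x_true + rng.normal(0, 0.01, m)

x_hat = subspace_pursuit(A, y, k)
print("recovered support:", np.sort(np.nonzero(x_hat)[0]))
print("true support     :", np.sort(np.nonzero(x_true)[0]))
```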

  19. Improved Bayesian Infrasonic Source Localization for regional infrasound

    DOE PAGES

    Blom, Philip S.; Marcillo, Omar; Arrowsmith, Stephen J.

    2015-10-20

    The Bayesian Infrasonic Source Localization (BISL) methodology is examined and simplified, providing a generalized method of estimating the source location and time for an infrasonic event, and the mathematical framework used therein is presented. The likelihood function describing an infrasonic detection used in BISL has been redefined to include the von Mises distribution developed in directional statistics and propagation-based, physically derived celerity-range and azimuth deviation models. Frameworks for constructing propagation-based celerity-range and azimuth deviation statistics are presented to demonstrate how stochastic propagation modelling methods can be used to improve the precision and accuracy of the posterior probability density function describing the source localization. Infrasonic signals recorded at a number of arrays in the western United States produced by rocket motor detonations at the Utah Test and Training Range are used to demonstrate the application of the new mathematical framework and to quantify the improvement obtained by using the stochastic propagation modelling methods. Moreover, using propagation-based priors, the spatial and temporal confidence bounds of the source decreased by more than 40 per cent in all cases and by as much as 80 per cent in one case. Further, the accuracy of the estimates remained high, keeping the ground truth within the 99 per cent confidence bounds for all cases.
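    The structure of such a posterior can be sketched on a grid: each detecting array contributes a von Mises likelihood on the back-azimuth and a Gaussian likelihood on the celerity implied by a candidate source location. The array positions, detections, concentration parameter, celerity statistics, and the assumption of a known origin time below are all invented; this is not the BISL implementation itself.

```python
import numpy as np
from scipy.stats import vonmises, norm

# Toy BISL-style grid posterior from azimuth and celerity likelihoods.
arrays = np.array([[0.0, 0.0], [300.0, 50.0], [100.0, 250.0]])    # array positions, km
obs_azimuth = np.radians([45.0, 170.0, 260.0])                    # detected back-azimuths
obs_time = np.array([700.0, 420.0, 520.0])                        # arrival times, s
kappa = 50.0                                                      # von Mises concentration (assumed)
celerity_mean, celerity_std = 0.30, 0.04                          # km/s (assumed)
t0 = 0.0                                                          # origin time, assumed known here

x = np.linspace(-100.0, 400.0, 201)
y = np.linspace(-100.0, 400.0, 201)
X, Y = np.meshgrid(x, y)

log_post = np.zeros_like(X)
for (ax, ay), az, t in zip(arrays, obs_azimuth, obs_time):
    dx, dy = X - ax, Y - ay
    rng_km = np.hypot(dx, dy)
    pred_az = np.arctan2(dx, dy)                   # azimuth from array to candidate source
    pred_cel = rng_km / np.maximum(t - t0, 1e-6)   # celerity implied by candidate location
    log_post += vonmises.logpdf(az - pred_az, kappa)
    log_post += norm.logpdf(pred_cel, celerity_mean, celerity_std)

best = np.unravel_index(np.argmax(log_post), log_post.shape)
print("MAP source location (km):", X[best], Y[best])
```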

  20. Methodology for estimating soil carbon for the forest carbon budget model of the United States, 2001

    Treesearch

    L. S. Heath; R. A. Birdsey; D. W. Williams

    2002-01-01

    The largest carbon (C) pool in United States forests is the soil C pool. We present methodology and soil C pool estimates used in the FORCARB model, which estimates and projects forest carbon budgets for the United States. The methodology balances knowledge, uncertainties, and ease of use. The estimates are calculated using the USDA Natural Resources Conservation...

  1. Methodology Development of Computationally-Efficient Full Vehicle Simulations for the Entire Blast Event

    DTIC Science & Technology

    2015-08-06

    Philip Kosarek (Altair Product Design, Inc., Troy, MI), Julien Santini (Altair Product Design, Inc., Troy, MI), and Ravi Thyagarajan (US Army TARDEC, Warren, MI). This is a reprint...

  2. U.S. Army Research Laboratory (ARL) XPairIt Simulator for Peptide Docking and Analysis

    DTIC Science & Technology

    2014-07-01

    ...results from a case study, docking a short peptide to a small protein. For this test we choose the 1RXZ system from the Protein Data Bank, which... ...core of XPairIt, which additionally contains many data management and organization options, analysis tools, and custom simulation methodology. Two...

  3. [Methodological approaches of a social budget of disability].

    PubMed

    Fardeau, M

    1994-01-01

    By gathering data from different sources, it may be possible to estimate the French social budget of disability. In 1990, approximately 126.9 million FF were devoted by the nation to its disabled population. One quarter of the amount is "in kind", for financing training centers, nursing homes for the disabled... The three remaining quarters are composed of "cash benefits" (disability allowances, work accident annuities,...). This approach makes it possible to assess disability in economic terms.

  4. Aerodynamics and Control of Quadrotors

    NASA Astrophysics Data System (ADS)

    Bangura, Moses

    Quadrotors are aerial vehicles with a four motor-rotor assembly for generating lift and controllability. Their light weight, ease of design and simple dynamics have increased their use in aerial robotics research. There are many quadrotors that are commercially available or under development. Commercial off-the-shelf quadrotors usually lack the ability to be reprogrammed and are unsuitable for use as research platforms. The open-source code developed in this thesis differs from other open-source systems by focusing on the key performance road blocks in implementing high performance experimental quadrotor platforms for research: motor-rotor control for thrust regulation, velocity and attitude estimation, and control for position regulation and trajectory tracking. In all three of these fundamental subsystems, code sub modules for implementation on commonly available hardware are provided. In addition, the thesis provides guidance on scoping and commissioning open-source hardware components to build a custom quadrotor. A key contribution of the thesis is then a design methodology for the development of experimental quadrotor platforms from open-source or commercial off-the-shelf software and hardware components that have active community support. Quadrotors built following the methodology allow the user access to the operation of the subsystems and, in particular, the user can tune the gains of the observers and controllers in order to push the overall system to its performance limits. This enables the quadrotor framework to be used for a variety of applications such as heavy lifting and high performance aggressive manoeuvres by both the hobby and academic communities. To address the question of thrust control, momentum and blade element theories are used to develop aerodynamic models for rotor blades specific to quadrotors. With the aerodynamic models, a novel thrust estimation and control scheme that improves on existing RPM (revolutions per minute) control of rotors is proposed. The approach taken uses the measured electrical power into the rotors, compensating for electrical losses, to estimate changing aerodynamic conditions around a rotor as well as the aerodynamic thrust force. The resulting control algorithms are implemented in real-time on the embedded electronic speed controller (ESC) hardware. Using the estimates of the aerodynamic conditions around the rotor at this level improves the dynamic response to gusts as the low-level thrust control is the fastest dynamic level on the vehicle. The aerodynamic estimation scheme enables the vehicle to react almost instantaneously to aerodynamic changes in the environment without affecting the overall dynamic performance of the vehicle. (Abstract shortened by ProQuest.)
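    The power-based thrust idea can be illustrated with plain momentum theory: the ideal induced power of a hovering rotor is P = T^(3/2) / sqrt(2*rho*A), so a rough thrust estimate follows from the measured electrical power and an assumed efficiency. The efficiency, rotor size, and power below are invented, and the thesis's scheme estimates changing aerodynamic conditions online rather than using this static relation.

```python
import numpy as np

# Hover-thrust estimate from measured electrical power via momentum theory.
rho = 1.225                      # air density, kg/m^3
radius = 0.12                    # rotor radius, m (assumed)
area = np.pi * radius ** 2
eta = 0.55                       # combined electrical + aerodynamic efficiency (assumed)
p_elec = 60.0                    # measured electrical power into one motor, W (assumed)

# Invert P = T**1.5 / sqrt(2*rho*A)  ->  T = (eta * P * sqrt(2*rho*A)) ** (2/3)
thrust = (eta * p_elec * np.sqrt(2.0 * rho * area)) ** (2.0 / 3.0)
print(f"estimated hover thrust per rotor: {thrust:.2f} N")
```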

  5. Quantifying methane emissions from natural gas production in north-eastern Pennsylvania

    NASA Astrophysics Data System (ADS)

    Barkley, Zachary R.; Lauvaux, Thomas; Davis, Kenneth J.; Deng, Aijun; Miles, Natasha L.; Richardson, Scott J.; Cao, Yanni; Sweeney, Colm; Karion, Anna; Smith, MacKenzie; Kort, Eric A.; Schwietzke, Stefan; Murphy, Thomas; Cervone, Guido; Martins, Douglas; Maasakkers, Joannes D.

    2017-11-01

    Natural gas infrastructure releases methane (CH4), a potent greenhouse gas, into the atmosphere. The estimated emission rate associated with the production and transportation of natural gas is uncertain, hindering our understanding of its greenhouse footprint. This study presents a new application of inverse methodology for estimating regional emission rates from natural gas production and gathering facilities in north-eastern Pennsylvania. An inventory of CH4 emissions was compiled for major sources in Pennsylvania. This inventory served as input emission data for the Weather Research and Forecasting model with chemistry enabled (WRF-Chem), and atmospheric CH4 mole fraction fields were generated at 3 km resolution. Simulated atmospheric CH4 enhancements from WRF-Chem were compared to observations obtained from a 3-week flight campaign in May 2015. Modelled enhancements from sources not associated with upstream natural gas processes were assumed constant and known and therefore removed from the optimization procedure, creating a set of observed enhancements from natural gas only. Simulated emission rates from unconventional production were then adjusted to minimize the mismatch between aircraft observations and model-simulated mole fractions for 10 flights. To evaluate the method, an aircraft mass balance calculation was performed for four flights where conditions permitted its use. Using the model optimization approach, the weighted mean emission rate from unconventional natural gas production and gathering facilities in north-eastern Pennsylvania is found to be 0.36% of total gas production, with a 2σ confidence interval between 0.27 and 0.45% of production. Similarly, the mean emission estimates using the aircraft mass balance approach are calculated to be 0.40% of regional natural gas production, with a 2σ confidence interval between 0.08 and 0.72% of production. These emission rates as a percent of production are lower than rates found in any other basin using a top-down methodology, and may be indicative of some characteristics of the basin that make sources from the north-eastern Marcellus region unique.
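    The optimization step can be reduced to a very small example: after subtracting the assumed-known non-gas enhancements, a single scaling factor on the inventory's gas emissions is chosen to minimize the mismatch with observations, which for a linear model has a closed-form least-squares solution. All series below are synthetic and the single-factor treatment is a simplification of the flight-by-flight adjustment described above.

```python
import numpy as np

rng = np.random.default_rng(31)

# Least-squares scaling of modelled natural-gas enhancements against observations.
model_gas = rng.uniform(5.0, 40.0, 200)          # modelled enhancement from gas sources, ppb
model_other = rng.uniform(0.0, 10.0, 200)        # modelled enhancement from other sources, ppb
alpha_true = 1.4                                 # "true" atmosphere emits 40% more than the inventory
obs = alpha_true * model_gas + model_other + rng.normal(0.0, 3.0, 200)

resid = obs - model_other                        # remove non-gas contribution (assumed known)
alpha_hat = np.sum(resid * model_gas) / np.sum(model_gas ** 2)
print(f"optimised scaling on inventory gas emissions: {alpha_hat:.2f}")
```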

  6. The HIV care cascade: a systematic review of data sources, methodology and comparability.

    PubMed

    Medland, Nicholas A; McMahon, James H; Chow, Eric P F; Elliott, Julian H; Hoy, Jennifer F; Fairley, Christopher K

    2015-01-01

    The cascade of HIV diagnosis, care and treatment (HIV care cascade) is increasingly used to direct and evaluate interventions to increase population antiretroviral therapy (ART) coverage, a key component of treatment as prevention. The ability to compare cascades over time, sub-population, jurisdiction or country is important. However, differences in data sources and methodology used to construct the HIV care cascade might limit its comparability and ultimately its utility. Our aim was to review systematically the different methods used to estimate and report the HIV care cascade and their comparability. A search of published and unpublished literature through March 2015 was conducted. Cascades that reported the continuum of care from diagnosis to virological suppression in a demographically definable population were included. Data sources and methods of measurement or estimation were extracted. We defined the most comparable cascade elements as those that directly measured diagnosis or care from a population-based data set. Thirteen reports were included after screening 1631 records. The undiagnosed HIV-infected population was reported in seven cascades, each of which used different data sets and methods and could not be considered to be comparable. All 13 used mandatory HIV diagnosis notification systems to measure the diagnosed population. Population-based data sets, derived from clinical data or mandatory reporting of CD4 cell counts and viral load tests from all individuals, were used in 6 of 12 cascades reporting linkage, 6 of 13 reporting retention, 3 of 11 reporting ART and 6 of 13 cascades reporting virological suppression. Cascades with access to population-based data sets were able to directly measure cascade elements and are therefore comparable over time, place and sub-population. Other data sources and methods are less comparable. To ensure comparability, countries wishing to accurately measure the cascade should utilize complete population-based data sets from clinical data from elements of a centralized healthcare setting, where available, or mandatory CD4 cell count and viral load test result reporting. Additionally, virological suppression should be presented both as percentage of diagnosed and percentage of estimated total HIV-infected population, until methods to calculate the latter have been standardized.

  7. 2-D or not 2-D, that is the question: A Northern California test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayeda, K; Malagnini, L; Phillips, W S

    2005-06-06

    Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. The complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal 'apples-to-apples' test of 1-D and 2-D path assumptions on direct waves and their coda. Using the same station and event distribution, we compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) Applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz; however, for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only 1-D coda correction is available it is still preferable over 2-D direct wave-based measures.

  8. A 3-D model analysis of the slowdown and interannual variability in the methane growth rate from 1988 to 1997

    NASA Astrophysics Data System (ADS)

    Wang, James S.; Logan, Jennifer A.; McElroy, Michael B.; Duncan, Bryan N.; Megretskaia, Inna A.; Yantosca, Robert M.

    2004-09-01

    Methane has exhibited significant interannual variability with a slowdown in its growth rate beginning in the 1980s. We use a 3-D chemical transport model accounting for interannually varying emissions, transport, and sinks to analyze trends in CH4 from 1988 to 1997. Variations in CH4 sources were based on meteorological and country-level socioeconomic data. An inverse method was used to optimize the strengths of sources and sinks for a base year, 1994. We present a best-guess budget along with sensitivity tests. The analysis suggests that the sum of emissions from animals, fossil fuels, landfills, and wastewater estimated using Intergovernmental Panel on Climate Change default methodology is too high. Recent bottom-up estimates of the source from rice paddies appear to be too low. Previous top-down estimates of emissions from wetlands may be a factor of 2 higher than bottom-up estimates because of possible overestimates of OH. The model captures the general decrease in the CH4 growth rate observed from 1988 to 1997 and the anomalously low growth rates during 1992-1993. The slowdown in the growth rate is attributed to a combination of slower growth of sources and increases in OH. The economic downturn in the former Soviet Union and Eastern Europe made a significant contribution to the decrease in the growth rate of emissions. The 1992-1993 anomaly can be explained by fluctuations in wetland emissions and OH after the eruption of Mount Pinatubo. The results suggest that the recent slowdown of CH4 may be temporary.

  9. A decline in the prevalence of injecting drug users in Estonia, 2005–2009

    PubMed Central

    Uusküla, A; Rajaleid, K; Talu, A; Abel-Ollo, K; Des Jarlais, DC

    2013-01-01

    Aims and setting Descriptions of behavioural epidemics have received little attention compared with infectious disease epidemics in Eastern Europe. Here we report a study aimed at estimating trends in the prevalence of injection drug use between 2005 and 2009 in Estonia. Design and methods The number of injection drug users (IDUs) aged 15–44 each year between 2005 and 2009 was estimated using capture-recapture methodology based on 4 data sources (2 treatment databases: drug abuse and non-fatal overdose treatment; criminal justice (drug-related offences); and mortality (injection drug use related deaths) data). Poisson log-linear regression models were applied to the matched data, with interactions between data sources fitted to replicate the dependencies between the data sources. Linear regression was used to estimate average change over time. Findings There were 24,305, 12,292, 238 and 545 records, and 8,100, 1,655, 155 and 545 individual IDUs, identified in the four capture sources (police, drug treatment, overdose, and death registry, respectively) over the period 2005–2009. The estimated prevalence of IDUs among the population aged 15–44 declined from 2.7% (1.8–7.9%) in 2005 to 2.0% (1.4–5.0%) in 2008, and 0.9% (0.7–1.7%) in 2009. Regression analysis indicated an average reduction of over 1700 injectors per year. Conclusion While the capture-recapture method has known limitations, the results are consistent with other data from Estonia. Identifying the drivers of change in the prevalence of injection drug use warrants further research. PMID:23290632
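    As a pared-down illustration of the capture-recapture idea, the sketch below applies Chapman's variant of the two-source Lincoln-Petersen estimator; the study itself combined four sources with Poisson log-linear models to handle dependencies between lists. The overlap count used here is invented, and the two-source estimator assumes independence between the lists.

```python
import numpy as np

# Chapman's nearly unbiased two-source estimator:
#   N_hat = (n1 + 1)(n2 + 1)/(m + 1) - 1, with m the overlap between the lists.
n1 = 8100      # individuals in source 1 (e.g., police records)
n2 = 1655      # individuals in source 2 (e.g., drug treatment)
m = 310        # individuals appearing in both lists (invented overlap)

n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
ci = n_hat + np.array([-1.96, 1.96]) * np.sqrt(var)

print(f"estimated population size: {n_hat:,.0f}  (95% CI {ci[0]:,.0f} - {ci[1]:,.0f})")
```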

  10. Accurate Influenza Monitoring and Forecasting Using Novel Internet Data Streams: A Case Study in the Boston Metropolis

    PubMed Central

    Lu, Fred Sun; Hou, Suqin; Baltrusaitis, Kristin; Shah, Manan; Leskovec, Jure; Sosic, Rok; Hawkins, Jared; Brownstein, John; Conidi, Giuseppe; Gunn, Julia; Gray, Josh; Zink, Anna

    2018-01-01

    Background Influenza outbreaks pose major challenges to public health around the world, leading to thousands of deaths a year in the United States alone. Accurate systems that track influenza activity at the city level are necessary to provide actionable information that can be used for clinical, hospital, and community outbreak preparation. Objective Although Internet-based real-time data sources such as Google searches and tweets have been successfully used to produce influenza activity estimates ahead of traditional health care–based systems at national and state levels, influenza tracking and forecasting at finer spatial resolutions, such as the city level, remain an open question. Our study aimed to present a precise, near real-time methodology capable of producing influenza estimates ahead of those collected and published by the Boston Public Health Commission (BPHC) for the Boston metropolitan area. This approach has great potential to be extended to other cities with access to similar data sources. Methods We first tested the ability of Google searches, Twitter posts, electronic health records, and a crowd-sourced influenza reporting system to detect influenza activity in the Boston metropolis separately. We then adapted a multivariate dynamic regression method named ARGO (autoregression with general online information), designed for tracking influenza at the national level, and showed that it effectively uses the above data sources to monitor and forecast influenza at the city level 1 week ahead of the current date. Finally, we presented an ensemble-based approach capable of combining information from models based on multiple data sources to more robustly nowcast as well as forecast influenza activity in the Boston metropolitan area. The performances of our models were evaluated in an out-of-sample fashion over 4 influenza seasons within 2012-2016, as well as a holdout validation period from 2016 to 2017. Results Our ensemble-based methods incorporating information from diverse models based on multiple data sources, including ARGO, produced the most robust and accurate results. The observed Pearson correlations between our out-of-sample flu activity estimates and those historically reported by the BPHC were 0.98 in nowcasting influenza and 0.94 in forecasting influenza 1 week ahead of the current date. Conclusions We show that information from Internet-based data sources, when combined using an informed, robust methodology, can be effectively used as early indicators of influenza activity at fine geographic resolutions. PMID:29317382
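    The ARGO idea of combining autoregressive lags of reported activity with exogenous digital signals in a regularized regression can be sketched as below. All series are synthetic, the feature set is tiny, and the real model is refit on a sliding window with many more terms; this is only an illustration of the regression structure, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)

# Synthetic weekly flu activity and correlated "search volume" signals.
n_weeks, n_terms, n_lags = 200, 10, 4
flu = np.abs(np.sin(np.arange(n_weeks) * 2 * np.pi / 52)) + rng.normal(0, 0.05, n_weeks)
search = flu[:, None] * rng.uniform(0.5, 1.5, n_terms) + rng.normal(0, 0.1, (n_weeks, n_terms))

rows, targets = [], []
for t in range(n_lags, n_weeks):
    rows.append(np.concatenate([flu[t - n_lags:t], search[t]]))   # lags + current search terms
    targets.append(flu[t])
X, y = np.array(rows), np.array(targets)

split = 150
model = Lasso(alpha=1e-3, max_iter=10000).fit(X[:split], y[:split])   # L1-regularized fit
pred = model.predict(X[split:])
corr = np.corrcoef(pred, y[split:])[0, 1]
print(f"out-of-sample Pearson correlation: {corr:.3f}")
```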

  11. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie

    2008-06-01

    Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
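    The simplest of the estimators compared above, the plug-in method, can be written in a few lines: count k-blocks, form the empirical distribution, and report the block entropy per symbol. The i.i.d. test sequence and block lengths below are chosen only to show the undersampling bias that the abstract attributes to long word lengths.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(7)

def plugin_entropy_rate(bits, k):
    """Plug-in (empirical) block-entropy estimate, in bits per symbol."""
    blocks = ["".join(map(str, bits[i:i + k])) for i in range(len(bits) - k + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    h_block = -np.sum(p * np.log2(p))      # block entropy in bits
    return h_block / k

x = rng.integers(0, 2, 100_000)             # i.i.d. fair bits: true rate = 1 bit/symbol
for k in (1, 4, 8, 16):
    # For large k the empirical distribution is undersampled and the estimate drops below 1.
    print(f"k={k:2d}  plug-in rate = {plugin_entropy_rate(x, k):.3f} bits/symbol")
```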

  12. Impact of road traffic emissions on ambient air quality in an industrialized area.

    PubMed

    Garcia, Sílvia M; Domingues, Gonçalo; Gomes, Carla; Silva, Alexandra V; Almeida, S Marta

    2013-01-01

    Several epidemiological studies showed a correlation between airborne particulate matter (PM) and the incidence of several diseases in exposed populations. Consequently, the European Commission reinforced the need and obligation of member-states to monitor exposure levels of PM and adopt measures to reduce this exposure. However, in order to plan appropriate actions, it is necessary to understand the main sources of air pollution and their relative contributions to the formation of the ambient aerosol. The aim of this study was to develop a methodology to assess the contribution of vehicles to the atmospheric aerosol, which may constitute a useful tool to assess the effectiveness of planned mitigation actions. This methodology is based on three main steps: (1) estimation of traffic emissions from vehicle exhaust and resuspension; (2) use of the dispersion model TAPM ("The Air Pollution Model") to estimate the contribution of traffic to the atmospheric aerosol; and (3) use of geographic information system (GIS) tools to map the PM10 concentrations from traffic in the surroundings of a target area. The methodology was applied to an industrial area, and results showed that the highest contribution of traffic to the PM10 concentrations resulted from dust resuspension and that heavy vehicles were the type that most contributed to the PM10 concentration.

  13. Efficient methods for joint estimation of multiple fundamental frequencies in music signals

    NASA Astrophysics Data System (ADS)

    Pertusa, Antonio; Iñesta, José M.

    2012-12-01

    This study presents efficient techniques for multiple fundamental frequency estimation in music signals. The proposed methodology can infer harmonic patterns from a mixture considering interactions with other sources and evaluate them in a joint estimation scheme. For this purpose, a set of fundamental frequency candidates is first selected at each frame, and several hypothetical combinations of them are generated. Combinations are independently evaluated, and the most likely one is selected taking into account the intensity and spectral smoothness of its inferred patterns. The method is extended considering adjacent frames in order to smooth the detection in time, and a pitch tracking stage is finally performed to increase the temporal coherence. The proposed algorithms were evaluated in MIREX contests yielding state-of-the-art results with a very low computational burden.

  14. Improving risk estimates of runoff producing areas: formulating variable source areas as a bivariate process.

    PubMed

    Cheng, Xiaoya; Shaw, Stephen B; Marjerison, Rebecca D; Yearick, Christopher D; DeGloria, Stephen D; Walter, M Todd

    2014-05-01

    Predicting runoff producing areas and their corresponding risks of generating storm runoff is important for developing watershed management strategies to mitigate non-point source pollution. However, few methods for making these predictions have been proposed, especially operational approaches that would be useful in areas where variable source area (VSA) hydrology dominates storm runoff. The objective of this study is to develop a simple approach to estimate spatially-distributed risks of runoff production. By considering the development of overland flow as a bivariate process, we incorporated both rainfall and antecedent soil moisture conditions into a method for predicting VSAs based on the Natural Resource Conservation Service-Curve Number equation. We used base-flow immediately preceding storm events as an index of antecedent soil wetness status. Using nine sub-basins of the Upper Susquehanna River Basin, we demonstrated that our estimated runoff volumes and extent of VSAs agreed with observations. We further demonstrated a method for mapping these areas in a Geographic Information System using a Soil Topographic Index. The proposed methodology provides a new tool for watershed planners for quantifying runoff risks across watersheds, which can be used to target water quality protection strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.
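    The core relation the method builds on is the NRCS Curve Number equation, shown below as a short worked example. The bivariate treatment of rainfall and base-flow-indexed antecedent wetness is not reproduced; the curve numbers and rainfall depth are illustrative only.

```python
# NRCS Curve Number direct-runoff relation: Q = (P - Ia)^2 / (P - Ia + S),
# with S = 1000/CN - 10 and Ia = 0.2*S (conventional initial abstraction).

def scs_runoff(rain_in, cn):
    """Direct runoff depth (inches) from rainfall depth (inches) and curve number."""
    s = 1000.0 / cn - 10.0          # potential maximum retention, inches
    ia = 0.2 * s                    # initial abstraction
    if rain_in <= ia:
        return 0.0
    return (rain_in - ia) ** 2 / (rain_in - ia + s)

for cn in (60, 75, 90):             # less to more runoff-prone conditions
    print(f"CN={cn}: runoff from 2.5 in of rain = {scs_runoff(2.5, cn):.2f} in")
```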

  15. Evaluation of Earthquake-Induced Effects on Neighbouring Faults and Volcanoes: Application to the 2016 Pedernales Earthquake

    NASA Astrophysics Data System (ADS)

    Bejar, M.; Alvarez Gomez, J. A.; Staller, A.; Luna, M. P.; Perez Lopez, R.; Monserrat, O.; Chunga, K.; Herrera, G.; Jordá, L.; Lima, A.; Martínez-Díaz, J. J.

    2017-12-01

    It has long been recognized that earthquakes change the stress in the upper crust around the fault rupture and can influence the short-term behaviour of neighbouring faults and volcanoes. Rapid estimates of these stress changes can provide the authorities managing the post-disaster situation with a useful tool to identify and monitor potential threats and to update the estimates of seismic and volcanic hazard in a region. Space geodesy is now routinely used following an earthquake to image the displacement of the ground and estimate the rupture geometry and the distribution of slip. Using the obtained source model, it is possible to evaluate the remaining moment deficit and to infer the stress changes on nearby faults and volcanoes produced by the earthquake, which can be used to identify which faults and volcanoes are brought closer to failure or activation. Although these procedures are commonly used today, transferring these results to the authorities managing the post-disaster situation is not straightforward and thus their usefulness is reduced in practice. Here we propose a methodology to evaluate the potential influence of an earthquake on nearby faults and volcanoes and create easy-to-understand maps for decision-making support after an earthquake. We apply this methodology to the Mw 7.8, 2016 Ecuador earthquake. Using Sentinel-1 SAR and continuous GPS data, we measure the coseismic ground deformation and estimate the distribution of slip. Then we use this model to evaluate the moment deficit on the subduction interface and changes of stress on the surrounding faults and volcanoes. The results are compared with the seismic and volcanic events that have occurred after the earthquake. We discuss the potential and limits of the methodology and the lessons learnt from discussion with local authorities.

  16. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
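    The second step of the procedure, comparing recalibrations with different parameter subsets via the Akaike information criterion, can be illustrated on a toy model: only a chosen subset of parameters is re-estimated, the rest stay at nominal values, and AIC = n*ln(RSS/n) + 2k ranks the candidate subsets. The model, data, and nominal values below are invented, and the sensitivity-analysis ranking that would pre-select candidates is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares
from itertools import combinations

rng = np.random.default_rng(5)

t = np.linspace(0.0, 10.0, 60)
def model(theta, t):
    a, b, c = theta
    return a * np.exp(-b * t) + c * t

theta_true = np.array([2.0, 0.6, 0.15])
y = model(theta_true, t) + rng.normal(0.0, 0.05, t.size)
nominal = np.array([1.5, 0.5, 0.10])            # nominal values from a previous calibration

def fit_subset(free_idx):
    """Re-estimate only the parameters in free_idx and return the resulting AIC."""
    def residual(free):
        theta = nominal.copy()
        theta[list(free_idx)] = free
        return model(theta, t) - y
    res = least_squares(residual, nominal[list(free_idx)])
    rss = np.sum(res.fun ** 2)
    return t.size * np.log(rss / t.size) + 2 * len(free_idx)

subsets = [s for r in (1, 2, 3) for s in combinations(range(3), r)]
best = min(subsets, key=fit_subset)
print("parameter subset with lowest AIC:", best)
```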

  17. A high-resolution, empirical approach to climate impact assessment for regulatory analysis

    NASA Astrophysics Data System (ADS)

    Delgado, M.; Simcock, J. G.; Greenstone, M.; Hsiang, S. M.; Kopp, R. E.; Carleton, T.; Hultgren, A.; Jina, A.; Rising, J. A.; Nath, I.; Yuan, J.; Rode, A.; Chong, T.; Dobbels, G.; Hussain, A.; Wang, J.; Song, Y.; Mohan, S.; Larsen, K.; Houser, T.

    2017-12-01

    Recent breakthroughs in computing, data availability, and methodology have precipitated significant advances in the understanding of the relationship between climate and socioeconomic outcomes [1]. While the use of estimates of the global marginal costs of greenhouse gas emissions (e.g. the SCC) is a mandatory component of regulatory policy in many jurisdictions, existing SCC-IAMs have lagged behind advances in impact assessment and valuation [2]. Recent work shows that incorporating high spatial and temporal resolution can significantly affect the observed relationships of economic outcomes to climate and socioeconomic factors [3] and that maintaining this granularity is critical to understanding the sensitivity of aggregate measures of valuation to inequality and risk adjustment methodologies [4]. We propose a novel framework that decomposes uncertainty in the SCC along multiple sources, including aggregate climate response parameters, the translation of global climate into local weather, the effect of weather on physical and economic systems, human and macro-economic responses, and impact valuation methodologies. This work extends Hsiang et al. (2017) [4] to directly estimate local response functions for multiple sectors in each of 24,378 global regions and to estimate impacts at this resolution daily, incorporating endogenous, empirically-estimated adaptation and costs. The goal of this work is to provide insight into the heterogeneity of climate impacts and to work with other modeling teams to enhance the empirical grounding of integrated climate impact assessment in more complex energy-environment-economics models. [1] T. Carleton and S. Hsiang (2016), DOI: 10.1126/science.aad9837. [2] National Academies of Sciences, Engineering, and Medicine (2017), DOI: 10.17226/24651. [3] Burke, M., S. Hsiang, and E. Miguel (2015), DOI: 10.1038/nature15725. [4] S. Hsiang et al. (2017), DOI: 10.1126/science.aal4369.

  18. Combining controlled-source seismology and receiver function information to derive 3-D Moho topography for Italy

    NASA Astrophysics Data System (ADS)

    Spada, M.; Bianchi, I.; Kissling, E.; Agostinetti, N. Piana; Wiemer, S.

    2013-08-01

    The accurate definition of 3-D crustal structures and, first and foremost, of the Moho depth is the most important requirement for seismological, geophysical and geodynamic modelling in complex tectonic regions. In such areas, like the Mediterranean region, various active and passive seismic experiments have been performed, locally revealing information on Moho depth, average and gradient crustal Vp velocity and average Vp/Vs velocity ratios. Until now, the most reliable information on crustal structures stems from controlled-source seismology experiments. In most parts of the Alpine region, a relatively large amount of controlled-source seismology information is available, though the overall coverage in the central Mediterranean area is still sparse due to the high cost of such experiments. Thus, results from other seismic methodologies, such as local earthquake tomography, receiver functions and ambient noise tomography, can be used to complement the controlled-source seismology information to increase coverage and thus the quality of 3-D crustal models. In this paper, we introduce a methodology to directly combine controlled-source seismology and receiver function information, relying on the strengths of each method and on quantitative uncertainty estimates for all data, to derive a well resolved Moho map for Italy. To obtain a homogeneous elaboration of controlled-source seismology and receiver function results, we introduce a new classification/weighting scheme based on uncertainty assessment for receiver function data. In order to tune the receiver function information quality, we compare local receiver function Moho depths and uncertainties with a recently derived, well-resolved local earthquake tomography Moho map and with controlled-source seismology information. We find an excellent correlation in the Moho information obtained by these three methodologies in Italy. In the final step, we interpolate the controlled-source seismology and receiver function information to derive the map of Moho topography in Italy and surrounding regions. Our results show high-frequency undulation in the Moho topography of three different Moho interfaces, the European, the Adriatic-Ionian, and the Liguria-Corsica-Sardinia-Tyrrhenia, reflecting the complexity of the geodynamical evolution.

  19. K-Means Subject Matter Expert Refined Topic Model Methodology

    DTIC Science & Technology

    2017-01-01

    K-means Subject Matter Expert Refined Topic Model Methodology: Topic Model Estimation via K-Means. U.S. Army TRADOC Analysis Center-Monterey, 700 Dyer Road...January 2017. Theodore T. Allen, Ph.D.; Zhenhuan... Contract number W9124N-15-P-0022.

  20. Grid-search Moment Tensor Estimation: Implementation and CTBT-related Application

    NASA Astrophysics Data System (ADS)

    Stachnik, J. C.; Baker, B. I.; Rozhkov, M.; Friberg, P. A.; Leifer, J. M.

    2017-12-01

    This abstract reviews work related to moment tensor estimation for Expert Technical Analysis at the Comprehensive Test Ban Treaty Organization. In this context of event characterization, estimation of key source parameters provides important insights into the nature of failure in the earth. For example, if the recovered source parameters are indicative of a shallow source with a large isotropic component, then one conclusion is that it is a human-triggered explosive event. However, an important follow-up question in this application is: does an alternative hypothesis, like a deeper source with a large double-couple component, explain the data approximately as well as the best solution? Here we address the issue of both finding a most likely source and assessing its uncertainty. Using the uniform moment tensor discretization of Tape and Tape (2015) we exhaustively interrogate and tabulate the source eigenvalue distribution (i.e., the source characterization), tensor orientation, magnitude, and source depth. The benefit of the grid-search is that we can quantitatively assess the extent to which model parameters are resolved. This provides a valuable opportunity during the assessment phase to focus interpretation on source parameters that are well resolved. Another benefit of the grid-search is that it proves to be a flexible framework where different pieces of information can be easily incorporated. To this end, this work is particularly interested in fitting teleseismic body waves and regional surface waves as well as incorporating teleseismic first motions when available. Since the moment tensor search methodology is well established, we focus primarily on the implementation and application. We present a highly scalable strategy for systematically inspecting the entire model parameter space. We then focus on application to regional and teleseismic data recorded during a handful of natural and anthropogenic events, report on the grid-search optimum, and discuss the resolution of interesting and/or important recovered source properties.

  1. Evaluation of beam divergence of a negative hydrogen ion beam using Doppler shift spectroscopy diagnostics

    NASA Astrophysics Data System (ADS)

    Deka, A. J.; Bharathi, P.; Pandya, K.; Bandyopadhyay, M.; Bhuyan, M.; Yadav, R. K.; Tyagi, H.; Gahlaut, A.; Chakraborty, A.

    2018-01-01

    The Doppler Shift Spectroscopy (DSS) diagnostic is in the conceptual stage to estimate beam divergence, stripping losses, and beam uniformity of the 100 keV hydrogen Diagnostic Neutral Beam of the International Thermonuclear Experimental Reactor. This DSS diagnostic is used to measure the above-mentioned parameters with an error of less than 10%. To aid the design calculations and to establish a methodology for estimation of the beam divergence, DSS measurements were carried out on the existing prototype ion source RF Operated Beam Source in India for Negative ion Research. Emissions of the fast-excited neutrals that are generated from the extracted negative ions were collected in the target tank, and the line broadening of these emissions was used for estimating beam divergence. The observed broadening is a convolution of broadenings due to beam divergence, collection optics, voltage ripple, beam focusing, and instrumental broadening. Hence, for estimating the beam divergence from the observed line broadening, a systematic line profile analysis was performed. To minimize the error in the divergence measurements, a study on error propagation in the beam divergence measurements was carried out and the error was estimated. The measurements of beam divergence were done at a constant RF power of 50 kW and a source pressure of 0.6 Pa by varying the extraction voltage from 4 kV to 10 kV and the acceleration voltage from 10 kV to 15 kV. These measurements were then compared with the calorimetric divergence, and the results agreed within 10%. A minimum beam divergence of ~3° was obtained when the source was operated at an extraction voltage of ~5 kV and at a ~10 kV acceleration voltage, i.e., at a total applied voltage of 15 kV. This is in agreement with the values reported in experiments carried out on similar sources elsewhere.
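    If each broadening contribution is treated as approximately Gaussian, the widths combine in quadrature and the divergence-related width can be recovered by subtraction, as sketched below. All widths are invented, and the actual analysis described above fits the full line profile rather than using this shortcut.

```python
import numpy as np

# Quadrature subtraction of known broadening contributions from the observed width.
observed = 1.20            # FWHM of the Doppler-shifted peak (nm, assumed)
instrumental = 0.35
collection_optics = 0.40
voltage_ripple = 0.20
focusing = 0.15

divergence_width = np.sqrt(observed**2 - instrumental**2 - collection_optics**2
                           - voltage_ripple**2 - focusing**2)
print(f"width attributable to beam divergence: {divergence_width:.2f} nm")
```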

  2. Dose rate prediction methodology for remote handled transuranic waste workers at the waste isolation pilot plant.

    PubMed

    Hayes, Robert

    2002-10-01

    An approach is described for estimating future dose rates to Waste Isolation Pilot Plant workers processing remote handled transuranic waste. The waste streams will come from the entire U.S. Department of Energy complex and can take virtually any form resulting from the processing sequences for defense-related production, radiochemistry, activation and related work. For this reason, the average waste matrix from all generator sites is used to estimate the average radiation fields over the facility lifetime. Innovative new techniques were applied to estimate expected radiation fields. Non-linear curve fitting techniques were used to predict exposure rate profiles from cylindrical sources using closed-form equations for lines and disks. This information becomes the basis for Safety Analysis Report dose rate estimates and for present and future ALARA design reviews when attempts are made to reduce worker doses.
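    The curve-fitting step can be sketched with the standard unshielded line-source profile: the dose rate at perpendicular distance r from the midpoint of a line source of length L scales as (k/r) * 2*arctan(L/(2r)), and the parameters are fit to measured points by nonlinear least squares. The synthetic measurements, source-strength factor, and length below are illustrative only and are not the facility's actual data or equations.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(11)

def line_source(r, k, length):
    """Dose rate at distance r from the midpoint of a finite line source."""
    return (k / r) * 2.0 * np.arctan(length / (2.0 * r))

r = np.linspace(0.5, 5.0, 20)                               # metres
true = line_source(r, k=12.0, length=3.0)
measured = true * (1.0 + rng.normal(0.0, 0.03, r.size))     # 3% measurement scatter

popt, pcov = curve_fit(line_source, r, measured, p0=[5.0, 1.0])
print("fitted source-strength factor and length:", np.round(popt, 2))
```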

  3. [The effect of tobacco prices on consumption: a time series data analysis for Mexico].

    PubMed

    Olivera-Chávez, Rosa Itandehui; Cermeño-Bazán, Rodolfo; de Miera-Juárez, Belén Sáenz; Jiménez-Ruiz, Jorge Alberto; Reynales-Shigematsu, Luz Myriam

    2010-01-01

    To estimate the price elasticity of the demand for cigarettes in Mexico based on data sources and a methodology different from the ones used in previous studies on the topic. Quarterly time series of consumption, income and price for the time period 1994 to 2005 were used. A long-run demand model was estimated using Ordinary Least Squares (OLS) and the existence of a cointegration relationship was investigated. Also, a model using Dynamic Ordinary Least Squares (DOLS) was estimated to correct for potential endogeneity of independent variables and autocorrelation of the residuals. DOLS estimates showed that a 10% increase in cigarette prices could reduce consumption by 2.5% (p<0.05) and increase government revenue by 16.11%. The results confirmed the effectiveness of taxes as an instrument for tobacco control in Mexico. An increase in taxes can be used to increase cigarette prices and therefore to reduce consumption and increase government revenue.
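    The elasticity itself comes from a log-log long-run demand regression, where the coefficient on log price is the price elasticity. The sketch below uses plain OLS on synthetic quarterly data with a "true" elasticity of -0.25; the DOLS estimator used in the study additionally includes leads and lags of the differenced regressors, which are omitted here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)

# Synthetic quarterly series with a built-in price elasticity of -0.25.
n = 48
log_price = np.log(np.linspace(10.0, 25.0, n)) + rng.normal(0, 0.02, n)
log_income = np.log(np.linspace(100.0, 140.0, n)) + rng.normal(0, 0.02, n)
log_cons = 5.0 - 0.25 * log_price + 0.4 * log_income + rng.normal(0, 0.01, n)

X = sm.add_constant(np.column_stack([log_price, log_income]))
fit = sm.OLS(log_cons, X).fit()
print("estimated price elasticity:", round(fit.params[1], 3))   # coefficient on log price
```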

  4. Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, D.; Brunett, A.; Passerini, S.

    Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), and funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. The mechanistic source term assessment attempted to provide a sequence specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.

  5. Variation in center of mass estimates for extant sauropsids and its importance for reconstructing inertial properties of extinct archosaurs.

    PubMed

    Allen, Vivian; Paxton, Heather; Hutchinson, John R

    2009-09-01

    Inertial properties of animal bodies and segments are critical input parameters for biomechanical analysis of standing and moving, and thus are important for paleobiological inquiries into the broader behaviors, ecology and evolution of extinct taxa such as dinosaurs. But how accurately can these be estimated? Computational modeling was used to estimate the inertial properties including mass, density, and center of mass (COM) for extant crocodiles (adult and juvenile Crocodylus johnstoni) and birds (Gallus gallus; junglefowl and broiler chickens), to identify the chief sources of variation and methodological errors, and their significance. High-resolution computed tomography scans were segmented into 3D objects and imported into inertial property estimation software that allowed for the examination of variable body segment densities (e.g., air spaces such as lungs, and deformable body outlines). Considerable biological variation of inertial properties was found within groups due to ontogenetic changes as well as evolutionary changes between chicken groups. COM positions shift in variable directions during ontogeny in different groups. Our method was repeatable and the resolution was sufficient for accurate estimations of mass and density in particular. However, we also found considerable potential methodological errors for COM related to (1) assumed body segment orientation, (2) what frames of reference are used to normalize COM for size-independent comparisons among animals, and (3) assumptions about tail shape. Methods and assumptions are suggested to minimize these errors in the future and thereby improve estimation of inertial properties for extant and extinct animals. In the best cases, 10%-15% errors in these estimates are unavoidable, but particularly for extinct taxa errors closer to 50% should be expected, and therefore, cautiously investigated. Nonetheless in the best cases these methods allow rigorous estimation of inertial properties. (c) 2009 Wiley-Liss, Inc.
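    The arithmetic at the core of such voxel-based estimates is simple: mass is the sum of voxel densities times voxel volume, and the centre of mass is the density-weighted mean voxel position. The density grid below is a synthetic block with an internal low-density cavity standing in for an air space, not a segmented CT scan.

```python
import numpy as np

# Voxel-based mass and centre-of-mass estimate from a density volume.
voxel = 0.005                                            # voxel edge length, m (assumed)
density = np.full((40, 20, 20), 1050.0)                  # soft-tissue-like density, kg/m^3
density[10:20, 5:15, 5:15] = 1.2                         # internal air space (toy "lung")

v_vox = voxel ** 3
mass = density.sum() * v_vox                             # integral of density over the volume

idx = (np.indices(density.shape).reshape(3, -1).T + 0.5) * voxel   # voxel centre coordinates, m
com = (idx * density.reshape(-1, 1)).sum(axis=0) / density.sum()   # density-weighted mean position

print(f"mass = {mass:.2f} kg, centre of mass (m) = {np.round(com, 4)}")
```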

  6. A methodology for estimating dog noise in an animal housing facility

    NASA Technical Reports Server (NTRS)

    Sierens, S. E.; Ingley, H. A.; Besch, E. L.

    1977-01-01

    A rectangular reverberation chamber was designed, constructed and calibrated for the experimental measurement of the sound power level (acoustic power) of a dog. Calibration of the chamber consisted of comparing the acoustic power measured for a random noise source in the chamber with that for the identical source in a free field environment. Data from dogs indicate that barking noise can be modeled as a square wave pattern with short duration and peak sound power levels in the 500 Hz octave band. A-weighted sound pressure levels of up to 114.7 dBA were observed, indicating a potential concern for both animals and man chronically exposed to such environments.

  7. A new methodology for estimating nuclear casualties as a function of time.

    PubMed

    Zirkle, Robert A; Walsh, Terri J; Disraelly, Deena S; Curling, Carl A

    2011-09-01

    The Human Response Injury Profile (HRIP) nuclear methodology provides an estimate of casualties occurring as a consequence of nuclear attacks against military targets for planning purposes. The approach develops user-defined, time-based casualty and fatality estimates based on progressions of underlying symptoms and their severity changes over time. This paper provides a description of the HRIP nuclear methodology and its development, including inputs, human response and the casualty estimation process.

  8. Is case-chaos methodology an appropriate alternative to conventional case-control studies for investigating outbreaks?

    PubMed

    Edelstein, Michael; Wallensten, Anders; Kühlmann-Berenzon, Sharon

    2014-08-15

    Case-chaos methodology is a proposed alternative to case-control studies that simulates controls by randomly reshuffling the exposures of cases. We evaluated the method using data on outbreaks in Sweden. We identified 5 case-control studies from foodborne illness outbreaks that occurred between 2005 and 2012. Using case-chaos methodology, we calculated odds ratios 1,000 times for each exposure. We used the median as the point estimate and the 2.5th and 97.5th percentiles as the confidence interval. We compared case-chaos matched odds ratios with their respective case-control odds ratios in terms of statistical significance. Using Spearman's correlation, we estimated the correlation between matched odds ratios and the proportion of cases exposed to each exposure and quantified the relationship between the 2 using a normal linear mixed model. Each case-control study identified an outbreak vehicle (odds ratios = 4.9-45). Case-chaos methodology identified the outbreak vehicle 3 out of 5 times. It identified significant associations in 22 of 113 exposures that were not associated with outcome and 5 of 18 exposures that were significantly associated with outcome. Log matched odds ratios correlated with their respective proportion of cases exposed (Spearman ρ = 0.91) and increased significantly with the proportion of cases exposed (b = 0.054). Case-chaos methodology missed the outbreak source 2 of 5 times and identified spurious associations between a number of exposures and outcome. Measures of association correlated with the proportion of cases exposed. We recommended against using case-chaos analysis during outbreak investigations. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
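    The resampling loop evaluated above can be sketched for a single exposure: pseudo-controls are created by reshuffling the cases' own exposure status, a matched odds ratio is computed from the discordant case/pseudo-control pairs, the procedure is repeated 1000 times, and the median and 2.5th/97.5th percentiles summarize the distribution. The exposure data are synthetic, and the discordant-pair odds ratio here stands in for whatever matched analysis the study actually used.

```python
import numpy as np

rng = np.random.default_rng(17)

n_cases = 120
exposed = rng.random(n_cases) < 0.7          # 70% of cases report the exposure (synthetic)

ors = []
for _ in range(1000):
    pseudo = rng.permutation(exposed)        # reshuffled exposures act as "controls"
    b = np.sum(exposed & ~pseudo)            # case exposed, pseudo-control not
    c = np.sum(~exposed & pseudo)            # case not exposed, pseudo-control exposed
    ors.append((b + 0.5) / (c + 0.5))        # matched OR with continuity correction

ors = np.array(ors)
print(f"matched OR: {np.median(ors):.2f} "
      f"(2.5th-97.5th pct: {np.percentile(ors, 2.5):.2f} - {np.percentile(ors, 97.5):.2f})")
```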

  9. Cost benefits of advanced software: A review of methodology used at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla N.

    1993-01-01

    To assist rational investments in advanced software, a formal, explicit, and multi-perspective cost-benefit analysis methodology is proposed. The methodology can be implemented through a six-stage process which is described and explained. The current practice of cost-benefit analysis at KSC is reviewed in the light of this methodology. The review finds that there is a vicious circle operating. Unsound methods lead to unreliable cost-benefit estimates. Unreliable estimates convince management that cost-benefit studies should not be taken seriously. Then, given external demands for cost-benefit estimates, management encourages software engineers to somehow come up with the numbers for their projects. Lacking the expertise needed to do a proper study, courageous software engineers with vested interests use ad hoc and unsound methods to generate some estimates. In turn, these estimates are unreliable, and the cycle continues. The proposed methodology should help KSC to break out of this cycle.

  10. Validation of GOES-9 Satellite-Derived Cloud Properties over the Tropical Western Pacific Region

    NASA Technical Reports Server (NTRS)

    Khaiyer, Mandana M.; Nordeen, Michele L.; Doeling, David R.; Chakrapani, Venkatasan; Minnis, Patrick; Smith, William L., Jr.

    2004-01-01

    Real-time processing of hourly GOES-9 images in the ARM TWP region began operationally in October 2003 and is continuing. The ARM sites provide an excellent source for validating this new satellite-derived cloud and radiation property dataset. Derived cloud amounts, heights, and broadband shortwave fluxes are compared with similar quantities derived from ground-based instrumentation. The results will provide guidance for estimating uncertainties in the GOES-9 products and for developing improvements in the retrieval methodologies and inputs.

  11. Impact of the 1980 BEIR-III report on low-level radiation risk assessment, radiation protection guides, and public health policy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabrikant, J.I.

    1981-06-01

    The author deals with the scientific basis for establishing appropriate radiation protection guides, and its effect on the evaluation of societal activities concerned with health effects in human populations exposed to low-level radiation. Methodology is discussed for estimating risks of radiation-induced cancer and genetically related ill-health in man, the sources of data, the dose-response models used, and the precision ascribed to the process. (PSB)

  12. Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents

    NASA Astrophysics Data System (ADS)

    Madankan, Reza

    All across the world, toxic material clouds emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions can contain chemical, biological or radiological material. With the growing fear of natural, accidental or deliberate release of toxic agents, there is tremendous interest in precise source characterization and in generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components which are combined to perform the task of source characterization and forecasting: Uncertainty Quantification, Optimal Information Collection, and Data Assimilation. Precise approximation of prior statistics is crucial to ensure performance of the source characterization process. In this work, an efficient quadrature-based method has been utilized for quantification of uncertainty in plume dispersion models that are subject to uncertain source parameters. In addition, a fast and accurate approach is utilized for the approximation of probabilistic hazard maps, based on a combination of polynomial chaos theory and the method of quadrature points. Besides precise quantification of uncertainty, having useful measurement data is also highly important to guarantee accurate source parameter estimation. The performance of source characterization is strongly affected by the placement of the sensors used for data collection. Hence, a general framework has been developed for the optimal allocation of data observation sensors to improve the performance of the source characterization process. The key goal of this framework is to optimally locate a set of mobile sensors such that measurement of better data is guaranteed. This is achieved by maximizing the mutual information between model predictions and observed data, given a set of kinetic constraints on the mobile sensors. A dynamic programming method has been utilized to solve the resulting optimal control problem. To complete the loop of the source characterization process, two different estimation techniques, a minimum variance estimation framework and a Bayesian inference method, have been developed to fuse model forecasts with measurement data. Incomplete information regarding the distribution of the noise in measurement data is another major challenge in the source characterization of plume dispersion incidents. This frequently happens when assimilating atmospheric data derived from satellite imagery, because such imagery can be corrupted by noise depending on weather conditions, clouds, humidity, etc. Unfortunately, there is no accurate procedure to quantify the error in recorded satellite data, so using classical data assimilation methods in this situation is not straightforward. In this dissertation, the basic idea of a novel approach has been proposed to tackle these types of real-world problems with more accuracy and robustness. A simple example demonstrating a real-world scenario is presented to validate the developed methodology.
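
    To illustrate the quadrature-based uncertainty-quantification step in a minimal way (this is not the dissertation's implementation), the sketch below propagates an uncertain source strength through a toy Gaussian-plume ground-level concentration model using Gauss-Hermite quadrature. The plume parameters, receptor location, and source-strength statistics are invented for illustration.

```python
import numpy as np

def plume_concentration(q, y, u=4.0, sy=25.0, sz=12.0, h=20.0):
    """Toy Gaussian-plume ground-level concentration for source strength q [g/s]
    at crosswind offset y [m]; u, sy, sz, h are assumed plume parameters."""
    return (q / (np.pi * u * sy * sz)) * np.exp(-0.5 * (y / sy) ** 2) \
           * np.exp(-0.5 * (h / sz) ** 2)

# Uncertain source strength: q ~ Normal(mu_q, sigma_q**2)  (assumed statistics).
mu_q, sigma_q = 10.0, 2.0

# Gauss-Hermite nodes/weights: E[f(q)] = (1/sqrt(pi)) * sum_i w_i f(mu + sqrt(2)*sigma*x_i).
nodes, weights = np.polynomial.hermite.hermgauss(7)
q_nodes = mu_q + np.sqrt(2.0) * sigma_q * nodes

c_vals = plume_concentration(q_nodes, y=30.0)          # concentrations at the nodes
mean_c = np.sum(weights * c_vals) / np.sqrt(np.pi)
var_c = np.sum(weights * (c_vals - mean_c) ** 2) / np.sqrt(np.pi)
print(f"mean concentration = {mean_c:.3e}, std = {np.sqrt(var_c):.3e}")
```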

  13. Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-04-01

    The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), measurement angular positions, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. Further angular measurements only spread the total dose across the measurements without improving or worsening the CRLB, but the added measurements may improve parametric images by reducing estimation bias. Next, using CRLB we evaluate the multiple-image radiography, diffraction enhanced imaging and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.
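
    The following sketch shows the kind of Cramér-Rao computation involved, though it is not the authors' code: the angular intensity profile is modeled here as a simple Gaussian rocking curve parameterized by an amplitude (absorption), a centre shift (refraction) and a width (scattering), the Fisher information matrix is built by numerical differentiation assuming additive Gaussian measurement noise, and its inverse gives variance lower bounds. The profile model, noise level, parameter values, and angular positions are all assumptions.

```python
import numpy as np

def aip_model(theta, params):
    """Toy angular intensity profile: amplitude (absorption), centre shift
    (refraction) and width (scattering)."""
    amp, shift, width = params
    return amp * np.exp(-0.5 * ((theta - shift) / width) ** 2)

def crlb(theta, params, sigma_noise):
    """CRLB for unbiased estimates of params from noisy AIP samples at theta."""
    eps = 1e-9
    jac = np.empty((theta.size, len(params)))
    for k in range(len(params)):
        dp = np.zeros(len(params)); dp[k] = eps
        jac[:, k] = (aip_model(theta, params + dp) -
                     aip_model(theta, params - dp)) / (2 * eps)
    fisher = jac.T @ jac / sigma_noise ** 2   # Fisher information, Gaussian noise
    return np.diag(np.linalg.inv(fisher))     # variance lower bounds per parameter

params = np.array([1.0, 0.5e-6, 2.0e-6])      # amplitude, shift [rad], width [rad]
uniform_11 = np.linspace(-5e-6, 5e-6, 11)     # 11 uniform analyzer positions
print(crlb(uniform_11, params, sigma_noise=0.01))
```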

  14. Noise and Analyzer-Crystal Angular Position Analysis for Analyzer-Based Phase-Contrast Imaging

    PubMed Central

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-01-01

    The analyzer-based phase-contrast X-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile (AIP) of the X-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), measurement angular positions, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this manuscript is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. Further angular measurements only spread the total dose across the measurements without improving or worsening the CRLB, but the added measurements may improve parametric images by reducing estimation bias. Next, using CRLB we evaluate the Multiple-Image Radiography (MIR), Diffraction Enhanced Imaging (DEI) and Scatter Diffraction Enhanced Imaging (S-DEI) estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique. PMID:24651402

  15. Shallow seismic source parameter determination using intermediate-period surface wave amplitude spectra

    NASA Astrophysics Data System (ADS)

    Fox, Benjamin D.; Selby, Neil D.; Heyburn, Ross; Woodhouse, John H.

    2012-09-01

    Estimating reliable depths for shallow seismic sources is important in both seismo-tectonic studies and in seismic discrimination studies. Surface wave excitation is sensitive to source depth, especially at intermediate and short periods, owing to the approximate exponential decay of surface wave displacements with depth. A new method is presented here to retrieve earthquake source parameters from regional and teleseismic intermediate-period (100-15 s) fundamental-mode surface wave recordings. This method makes use of advances in mapping global dispersion to allow higher frequency surface wave recordings at regional and teleseismic distances to be used with more confidence than in previous studies and hence improve the resolution of depth estimates. Synthetic amplitude spectra are generated using surface wave theory combined with a great circle path approximation, and a grid of double-couple sources is compared with the data. Source parameters producing the best-fitting amplitude spectra are identified by minimizing the least-squares misfit in logarithmic amplitude space. The F-test is used to search the solution space for statistically acceptable parameters and the ranges of these variables are used to place constraints on the best-fitting source. Estimates of focal mechanism, depth and scalar seismic moment are determined for 20 small to moderate sized (4.3 ≤ Mw ≤ 6.4) earthquakes. These earthquakes are situated across a wide range of geographic and tectonic locations and describe a range of faulting styles over the depth range 4-29 km. For the larger earthquakes, comparisons with other studies are favourable; however, existing source determination procedures, such as the CMT technique, cannot be performed for the smaller events. By reducing the magnitude threshold at which robust source parameters can be determined, the accuracy, especially at shallow depths, of seismo-tectonic studies, seismic hazard assessments, and seismic discrimination investigations can be improved by the application of this methodology.
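
    The grid-search-over-double-couples idea can be sketched as below. The forward model here is an explicitly labeled placeholder (real implementations use surface-wave excitation theory and path-specific dispersion), and the grid spacing, frequency band, and "observed" spectrum are invented; an F-test over the misfit surface would then delimit the statistically acceptable parameter ranges.

```python
import numpy as np
from itertools import product

def synthetic_log_spectrum(strike, dip, rake, depth, freqs):
    """Placeholder forward model (NOT real excitation theory): amplitude decays
    roughly exponentially with depth and varies smoothly with orientation."""
    radiation = (1.0 + 0.5 * np.sin(np.radians(dip)) * np.cos(np.radians(rake))
                 + 0.3 * np.cos(2 * np.radians(strike)))
    return np.log(np.abs(radiation) + 0.05) - depth * freqs

def grid_search(observed_log_spec, freqs):
    """Least-squares misfit in log-amplitude space over a coarse source grid."""
    best = (np.inf, None)
    for strike, dip, rake, depth in product(range(0, 360, 10),
                                            range(10, 91, 10),
                                            range(-90, 91, 10),
                                            np.arange(4.0, 30.0, 1.0)):
        model = synthetic_log_spectrum(strike, dip, rake, depth, freqs)
        misfit = np.sum((observed_log_spec - model) ** 2)
        if misfit < best[0]:
            best = (misfit, (strike, dip, rake, depth))
    return best

freqs = np.linspace(1 / 100.0, 1 / 15.0, 25)             # 100-15 s band, in Hz
observed = synthetic_log_spectrum(40, 60, -90, 12.0, freqs)  # pretend observation
print(grid_search(observed, freqs))
```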

  16. Mapping Surface Heat Fluxes by Assimilating SMAP Soil Moisture and GOES Land Surface Temperature Data

    NASA Astrophysics Data System (ADS)

    Lu, Yang; Steele-Dunne, Susan C.; Farhadi, Leila; van de Giesen, Nick

    2017-12-01

    Surface heat fluxes play a crucial role in the surface energy and water balance. In situ measurements are costly and difficult, and large-scale flux mapping is hindered by surface heterogeneity. Previous studies have demonstrated that surface heat fluxes can be estimated by assimilating land surface temperature (LST) and soil moisture to determine two key parameters: a neutral bulk heat transfer coefficient (CHN) and an evaporative fraction (EF). Here a methodology is proposed to estimate surface heat fluxes by assimilating Soil Moisture Active Passive (SMAP) soil moisture data and Geostationary Operational Environmental Satellite (GOES) LST data into a dual-source (DS) model using a hybrid particle assimilation strategy. SMAP soil moisture data are assimilated using a particle filter (PF), and GOES LST data are assimilated using an adaptive particle batch smoother (APBS) to account for the large gap in the spatial and temporal resolution. The methodology is implemented in an area in the U.S. Southern Great Plains. Assessment against in situ observations suggests that soil moisture and LST estimates are in better agreement with observations after assimilation. The RMSD for 30 min (daytime) flux estimates is reduced by 6.3% (8.7%) and 31.6% (37%) for H and LE on average. Comparison against an LST-only and a soil-moisture-only assimilation case suggests that despite the coarse resolution, assimilating SMAP soil moisture data is not only beneficial but also crucial for successful and robust flux estimation, particularly when the uncertainties in the model estimates are large.
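
    A minimal particle-filter update of the kind used for the SMAP assimilation step might look like the sketch below. The toy water-balance model, noise levels, and observation value are invented, and the adaptive particle batch smoother used for the LST stream is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_soil_moisture_model(sm, forcing_noise):
    """Toy water-balance update: exponential dry-down plus random forcing."""
    return np.clip(0.95 * sm + forcing_noise, 0.0, 0.5)

n_particles = 500
particles = rng.uniform(0.1, 0.4, n_particles)    # prior soil moisture ensemble
obs, obs_sigma = 0.27, 0.04                       # SMAP-like observation and error

# Propagate, weight by the Gaussian observation likelihood, then resample (PF).
particles = toy_soil_moisture_model(particles, rng.normal(0, 0.02, n_particles))
weights = np.exp(-0.5 * ((obs - particles) / obs_sigma) ** 2)
weights /= weights.sum()
idx = rng.choice(n_particles, size=n_particles, p=weights)
particles = particles[idx]

print("posterior mean soil moisture:", particles.mean())
```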

  17. Urban Earthquake Shaking and Loss Assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters, and, if and when possible, by the estimation of fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimation of the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and using shear wave velocity distributions (Shake Mapping). 4. Incorporation of strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 5. Estimation of the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of inventory of the human-built environment (Loss Mapping). Level 2 analysis of the ELER software (similar to HAZUS and SELENA) is essentially intended for earthquake risk assessment (building damage, consequential human casualties and macroeconomic loss quantifiers) in urban areas. The basic Shake Mapping is similar to the Level 0 and Level 1 analysis; however, options are available for more sophisticated treatment of site response through externally entered data and improvement of the shake map through incorporation of accelerometric and other macroseismic data (similar to the USGS ShakeMap System). The building inventory data for the Level 2 analysis will consist of grid (geo-cell) based urban building and demographic inventories. For building grouping the European building typology developed within the EU-FP5 RISK-EU project is used. The building vulnerability/fragility relationships to be used can be user-selected from a list of applicable relationships developed on the basis of a comprehensive study. Both empirical and analytical relationships (based on the Coefficient Method, Equivalent Linearization Method and the Reduction Factor Method of analysis) can be employed. Casualties in Level 2 analysis are estimated based on the number of buildings in different damage states and the casualty rates for each building type and damage level. Modifications to the casualty rates can be used if necessary. ELER Level 2 analysis will include calculation of direct monetary losses as a result of building damage that will allow for repair-cost estimations and specific investigations associated with earthquake insurance applications (PML and AAL estimations). ELER Level 2 analysis loss results obtained for Istanbul for a scenario earthquake using different techniques will be presented with comparisons using different earthquake damage assessment software. The urban earthquake shaking and loss information is intended for dissemination in a timely manner to related agencies for the planning and coordination of the post-earthquake emergency response. However, the same software can also be used for scenario earthquake loss estimation, related Monte Carlo-type simulations and earthquake insurance applications.
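
    The Level 2 casualty bookkeeping described above (buildings per damage state multiplied by per-type casualty rates) reduces to a simple weighted sum; the sketch below illustrates the arithmetic with invented building counts, occupancies, and casualty rates that are not ELER's actual values.

```python
# Hypothetical counts of buildings in each damage state, per building type,
# and per-occupant casualty rates -- NOT the rates used in ELER.
buildings = {              # damage states: slight, moderate, extensive, complete
    "masonry":  [120, 60, 25, 10],
    "rc_frame": [200, 90, 30,  5],
}
occupants_per_building = {"masonry": 8, "rc_frame": 25}
casualty_rate = {          # fraction of occupants injured, by damage state
    "masonry":  [0.001, 0.01, 0.05, 0.20],
    "rc_frame": [0.0005, 0.005, 0.03, 0.15],
}

casualties = sum(
    n * occupants_per_building[bt] * casualty_rate[bt][ds]
    for bt, counts in buildings.items()
    for ds, n in enumerate(counts)
)
print(f"estimated casualties: {casualties:.0f}")
```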

  18. Comparison of food consumption in Indian adults between national and sub-national dietary data sources.

    PubMed

    Aleksandrowicz, Lukasz; Tak, Mehroosh; Green, Rosemary; Kinra, Sanjay; Haines, Andy

    2017-04-01

    Accurate data on dietary intake are important for public health, nutrition and agricultural policy. The National Sample Survey is widely used by policymakers in India to estimate nutritional outcomes in the country, but has not been compared with other dietary data sources. To assess relative differences across available Indian dietary data sources, we compare intake of food groups across six national and sub-national surveys between 2004 and 2012, representing various dietary intake estimation methodologies, including Household Consumption Expenditure Surveys (HCES), FFQ, food balance sheets (FBS), and 24-h recall (24HR) surveys. We matched data for relevant years, regions and economic groups, for ages 16-59. One set of national HCES and the 24HR showed a decline in food intake in India between 2004-2005 and 2011-2012, whereas another HCES and FBS showed an increase. Differences in intake were smallest between the two HCES (1 % relative difference). Relative to these, FFQ and FBS had higher intake (13 and 35 %), and the 24HR lower intake (-9 %). Cereal consumption had high agreement across comparisons (average 5 % difference), whereas fruit and nuts, eggs, meat and fish and sugar had the least (120, 119, 56 and 50 % average differences, respectively). Spearman's coefficients showed high correlation of ranked food group intake across surveys. The underlying methods of the compared data highlight possible sources of under- or over-estimation, and influence their relevance for addressing various research questions and programmatic needs.

  19. Origins and Asteroid Main-Belt Stratigraphy for H-, L-, LL-Chondrite Meteorites

    NASA Astrophysics Data System (ADS)

    Binzel, Richard; DeMeo, Francesca; Burbine, Thomas; Polishook, David; Birlan, Mirel

    2016-10-01

    We trace the origins of ordinary chondrite meteorites to their main-belt sources using their (presumably) larger counterparts observable as near-Earth asteroids (NEAs). We find the ordinary chondrite stratigraphy in the main belt to be LL, H, L (increasing distance from the Sun). We derive this result using spectral information from more than 1000 near-Earth asteroids [1]. Our methodology is to correlate each NEA's main-belt source region [2] with its modeled mineralogy [3]. We find LL chondrites predominantly originate from the inner edge of the asteroid belt (nu6 region at 2.1 AU), H chondrites from the 3:1 resonance region (2.5 AU), and the L chondrites from the outer belt 5:2 resonance region (2.8 AU). Each of these source regions has been cited by previous researchers [e.g. 4, 5, 6], but this work uses an independent methodology that simultaneously solves for the LL, H, L stratigraphy. We seek feedback from the planetary origins and meteoritical communities on the viability or implications of this stratigraphy. Methodology: Spectroscopic and taxonomic data are from the NASA IRTF MIT-Hawaii Near-Earth Object Spectroscopic Survey (MITHNEOS) [1]. For each near-Earth asteroid, we use the Bottke source model [2] to assign a probability that the object is derived from five different main-belt source regions. For each spectrum, we apply the Shkuratov model [3] for radiative transfer within compositional mixing to derive estimates for the ol / (ol+px) ratio (and its uncertainty). The Bottke source region model [2] and the Shkuratov mineralogic model [3] each deliver a probability distribution. For each NEA, we convolve its source region probability distribution with its meteorite class distribution to yield a likelihood for where that class originates. Acknowledgements: This work was supported by the National Science Foundation Grant 0907766 and NASA Grant NNX10AG27G. References: [1] Binzel et al. (2005), LPSC XXXVI, 36.1817. [2] Bottke et al. (2002). Icarus 156, 399. [3] Shkuratov et al. (1999). Icarus 137, 222. [4] Vernazza et al. (2008). Nature 454, 858. [5] Thomas et al. (2010). Icarus 205, 419. [6] Nesvorný et al. (2009). Icarus 200, 698.

  20. Estimating the uncertainty from sampling in pollution crime investigation: The importance of metrology in the forensic interpretation of environmental data.

    PubMed

    Barazzetti Barbieri, Cristina; de Souza Sarkis, Jorge Eduardo

    2018-07-01

    The forensic interpretation of environmental analytical data is usually challenging due to the high geospatial variability of these data. The measurements' uncertainty includes contributions from the sampling and from the sample handling and preparation processes. These contributions are often disregarded in the quality assurance of analytical results. A pollution crime investigation case was used to develop a methodology able to address these uncertainties in two different environmental compartments, freshwater sediments and landfill leachate. The methodology used to estimate the uncertainty was the duplicate method (which replicates predefined steps of the measurement procedure in order to assess its precision), and the parameters used to investigate the pollution were metals (Cr, Cu, Ni, and Zn) in the leachate, the suspected source, and in the sediment, the possible sink. The metal analysis results were compared to statutory limits, and it was demonstrated that Cr and Ni concentrations in sediment samples exceeded the threshold levels at all sites downstream of the pollution sources, considering the expanded uncertainty U of the measurements and a probability of contamination >0.975 at most sites. Cu and Zn concentrations were above the statutory limits at two sites, but the classification was inconclusive considering the uncertainties of the measurements. Metal analyses in leachate revealed that Cr concentrations were above the statutory limits with a probability of contamination >0.975 in all leachate ponds, while the Cu, Ni and Zn probability of contamination was below 0.025. The results demonstrated that the estimation of the sampling uncertainty, which was the dominant component of the combined uncertainty, is required for a comprehensive interpretation of environmental analysis results, particularly in forensic cases. Copyright © 2018 Elsevier B.V. All rights reserved.
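
    The classification logic ("probability of contamination > 0.975") can be sketched as follows, assuming a normally distributed measurement error with standard uncertainty u = U/k. The measured value, expanded uncertainty, threshold, and coverage factor k = 2 are invented numbers, not the study's data.

```python
from math import erf, sqrt

def prob_above_limit(measured, expanded_U, limit, coverage_factor=2.0):
    """P(true value > limit) assuming a normal error with u = U / k."""
    u = expanded_U / coverage_factor          # combined standard uncertainty
    z = (measured - limit) / u
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical sediment Cr result (mg/kg) against a hypothetical threshold.
p = prob_above_limit(measured=142.0, expanded_U=28.0, limit=90.0)
print(f"probability of contamination: {p:.3f}")  # compare against 0.975
```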

  1. Iterative Bayesian Estimation of Travel Times on Urban Arterials: Fusing Loop Detector and Probe Vehicle Data.

    PubMed

    Liu, Kai; Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo

    2016-01-01

    On urban arterials, travel time estimation is challenging, especially when it must draw on multiple data sources. In particular, fusing loop detector data and probe vehicle data to estimate travel time is troublesome because the data can be uncertain, imprecise and even conflicting. In this paper, we propose an improved data fusion methodology for link travel time estimation. Link travel times are first estimated separately from loop detector data and from probe vehicle data, and Bayesian fusion is then applied to combine the estimated travel times. Next, iterative Bayesian estimation is proposed to improve the Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimate from one sensor with the current fused travel time; and 2) specially designed convergence conditions, which restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe-vehicle-based method, the loop-detector-based method and single Bayesian fusion, and the mean absolute percentage error is reduced to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when travel time variability is in practice higher than in other periods.
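
    A minimal sketch of the fusion-plus-substitution idea is given below, assuming each source provides a Gaussian travel-time estimate so that Bayesian fusion reduces to precision weighting. The travel times, variances, plausible range, and stopping tolerance are invented and the paper's full procedure is not reproduced.

```python
def fuse(t1, var1, t2, var2):
    """Precision-weighted (Bayesian) fusion of two Gaussian estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * t1 + w2 * t2) / (w1 + w2), 1.0 / (w1 + w2)

# Hypothetical link travel times (s): loop-detector-based and probe-based.
t_loop, var_loop = 95.0, 20.0 ** 2
t_probe, var_probe = 112.0, 10.0 ** 2
t_min, t_max = 60.0, 180.0                 # plausible range used as a convergence bound

t_fused, var_fused = fuse(t_loop, var_loop, t_probe, var_probe)
for _ in range(10):
    # Substitution: replace the less accurate source with the current fusion.
    if var_loop > var_probe:
        t_loop, var_loop = t_fused, var_fused
    else:
        t_probe, var_probe = t_fused, var_fused
    t_new, var_new = fuse(t_loop, var_loop, t_probe, var_probe)
    converged = abs(t_new - t_fused) < 0.1 and t_min <= t_new <= t_max
    t_fused, var_fused = t_new, var_new
    if converged:
        break

print(f"fused link travel time: {t_fused:.1f} s")
```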

  2. Iterative Bayesian Estimation of Travel Times on Urban Arterials: Fusing Loop Detector and Probe Vehicle Data

    PubMed Central

    Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo

    2016-01-01

    On urban arterials, travel time estimation is challenging, especially when it must draw on multiple data sources. In particular, fusing loop detector data and probe vehicle data to estimate travel time is troublesome because the data can be uncertain, imprecise and even conflicting. In this paper, we propose an improved data fusion methodology for link travel time estimation. Link travel times are first estimated separately from loop detector data and from probe vehicle data, and Bayesian fusion is then applied to combine the estimated travel times. Next, iterative Bayesian estimation is proposed to improve the Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimate from one sensor with the current fused travel time; and 2) specially designed convergence conditions, which restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe-vehicle-based method, the loop-detector-based method and single Bayesian fusion, and the mean absolute percentage error is reduced to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when travel time variability is in practice higher than in other periods. PMID:27362654

  3. Caffeine as an indicator for the quantification of untreated wastewater in karst systems.

    PubMed

    Hillebrand, Olav; Nödler, Karsten; Licha, Tobias; Sauter, Martin; Geyer, Tobias

    2012-02-01

    Leakage of untreated wastewater and the related bacterial contamination pose a threat to drinking water quality. However, a quantification of the magnitude of leakage is difficult. The objective of this work is to provide a highly sensitive methodology for the estimation of the mass of untreated wastewater entering karst aquifers with rapid recharge. For this purpose a balance approach is adapted. It is based on the mass flow of caffeine in spring water, the load of caffeine in untreated wastewater and the daily water consumption per person in a spring catchment area. Caffeine is a source-specific indicator for wastewater, consumed and discharged in quantities allowing detection in a karst spring. The methodology was applied to estimate, on a daily basis, the amount of wastewater leaking and infiltrating into a well-investigated karst aquifer. The calculated mean volume of untreated wastewater entering the aquifer was found to be 2.2 ± 0.5 m(3) d(-1) (undiluted wastewater). It corresponds to approximately 0.4% of the total amount of wastewater within the spring catchment. Copyright © 2011 Elsevier Ltd. All rights reserved.
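
    The mass balance itself is a short calculation: the untreated wastewater inflow is the caffeine mass flow observed at the spring divided by the caffeine concentration expected in raw wastewater, which in turn is the per-capita caffeine load divided by per-capita water consumption. The numbers below are illustrative assumptions, not the paper's measurements.

```python
# All input values are illustrative assumptions.
caffeine_massflow_spring = 300.0     # mg/day of caffeine measured at the karst spring
caffeine_load_per_person = 16.0      # mg per person per day reaching the sewer
water_use_per_person = 0.120         # m3 per person per day

caffeine_conc_wastewater = caffeine_load_per_person / water_use_per_person  # mg/m3
q_wastewater = caffeine_massflow_spring / caffeine_conc_wastewater          # m3/day
print(f"untreated wastewater entering the aquifer: {q_wastewater:.2f} m3/day")
```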

  4. Mapping of reproductive health financing: methodological challenges.

    PubMed

    Pradhan, Jalandhar; Sidze, Estelle Monique; Khanna, Anoop; Beekink, Erik

    2014-10-01

    Low level of funding for reproductive health (RH) is a cause for concern, given that RH service utilization in the vast majority of the developing world is well below the desired level. Though there is an urgent need to track the domestic and international financial resource flows for RH, the instruments through which financial resources are tracked in developing countries are limited. In this paper we examined the methodological and conceptual challenges of monitoring financial resources for RH services at the international and national levels. At the international level, there are a number of estimates that highlight the need for financial resources for RH programmes, but the estimates vary significantly. At the national level, Reproductive Health Accounts (RHA) in the framework of National Health Accounts (NHA) are considered to be the ideal source for tracking domestic financial flows for RH activities. However, the weak link between data production by the RHA and its application by the stakeholders, as well as lack of political will, impedes the institutionalization of RHA at the country level. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Crude and intrinsic birth rates for Asian countries.

    PubMed

    Rele, J R

    1978-01-01

    This paper attempts to estimate birth rates for Asian countries. The main source of information in developing countries has been the census age-sex distribution, although inaccuracies in the basic data have made it difficult to reach a high degree of accuracy. Different methods bring widely varying results. The methodology presented here is based on the use of the conventional child-woman ratio from the census age-sex distribution, together with a rough estimate of the expectation of life at birth. From the established relationships between the child-woman ratio and the intrinsic birth rate, of the form y = a + bx + cx^2 at each level of life expectancy, the intrinsic birth rate is first computed using previously derived coefficients. The crude birth rate is then obtained using an adjustment based on the census age-sex distribution. An advantage of this methodology is that the intrinsic birth rate, normally an involved computation, can be obtained relatively easily as a by-product of the crude birth rate calculations; estimates and the bases for the calculations are given for each of 33 Asian countries, in some cases for several time periods.
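
    Applying the fitted relation y = a + bx + cx^2 is a one-line computation once the coefficients for a given life-expectancy level are known; the sketch below uses placeholder coefficients and an invented child-woman ratio, not Rele's published values.

```python
def intrinsic_birth_rate(child_woman_ratio, coeffs):
    """y = a + b*x + c*x**2, relating the child-woman ratio x to the
    intrinsic birth rate y at a given level of life expectancy."""
    a, b, c = coeffs
    x = child_woman_ratio
    return a + b * x + c * x ** 2

# Placeholder coefficients for one life-expectancy level (NOT the published values).
coeffs_e50 = (5.0, 55.0, 10.0)       # births per 1000 population
cwr = 0.55                            # children aged 0-4 per woman of reproductive age
print(intrinsic_birth_rate(cwr, coeffs_e50))
```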

  6. Hybrid time-variant reliability estimation for active control structures under aleatory and epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui

    2018-04-01

    Considering that multi-source uncertainties arising from the inherent nature of the system as well as from the external environment are unavoidable and severely affect controller performance, dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, the uncertainty quantification analysis and time-variant reliability estimation corresponding to closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the bounding laws of the controlled response histories are first established, with specific implementation of the random terms. For nonlinear cases, the collocation set methodology and the fourth-order Runge-Kutta algorithm are introduced as well. Drawing on the first-passage model in random process theory as well as on static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for vibration control systems, and the related solution details are further expounded. Two engineering examples are eventually presented to demonstrate the validity and applicability of the methodology developed.

  7. The determination of operational and support requirements and costs during the conceptual design of space systems

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles; Beasley, Kenneth D.

    1992-01-01

    The first year of research to provide NASA support in predicting operational and support parameters and costs of proposed space systems is reported. Some of the specific research objectives were (1) to develop a methodology for deriving reliability and maintainability parameters and, based upon their estimates, determine the operational capability and support costs, and (2) to identify data sources and establish an initial data base to implement the methodology. Implementation of the methodology is accomplished through the development of a comprehensive computer model. While the model appears to work reasonably well when applied to aircraft systems, it was not accurate when used for space systems. The model is dynamic and should be updated as new data become available. It is particularly important to integrate the current aircraft data base with data obtained from the Space Shuttle and other space systems since subsystems unique to a space vehicle require data not available from aircraft. This research only addressed the major subsystems on the vehicle.

  8. A method to assess the population-level consequences of wind energy facilities on bird and bat species: Chapter

    USGS Publications Warehouse

    Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.

    2016-01-01

    For this study, a methodology was developed for assessing impacts of wind energy generation on populations of birds and bats at regional to national scales. The approach combines existing methods in applied ecology for prioritizing species in terms of their potential risk from wind energy facilities and estimating impacts of fatalities on population status and trend caused by collisions with wind energy infrastructure. Methods include a qualitative prioritization approach, demographic models, and potential biological removal. The approach can be used to prioritize species in need of more thorough study as well as to identify species with minimal risk. However, the components of this methodology require simplifying assumptions and the data required may be unavailable or of poor quality for some species. These issues should be carefully considered before using the methodology. The approach will increase in value as more data become available and will broaden the understanding of anthropogenic sources of mortality on bird and bat populations.
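
    The potential biological removal component mentioned above has a standard closed form, PBR = 0.5 · R_max · N_min · F_r (minimum population estimate, maximum population growth rate, and a recovery factor). The sketch below applies it with invented population parameters for illustration only.

```python
def potential_biological_removal(n_min, r_max, recovery_factor):
    """Standard PBR formula: allowable annual human-caused mortality."""
    return 0.5 * r_max * n_min * recovery_factor

# Hypothetical bat population parameters (for illustration only).
pbr = potential_biological_removal(n_min=50_000, r_max=0.12, recovery_factor=0.5)
print(f"PBR: {pbr:.0f} individuals per year")
```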

  9. A methodology to compile food metrics related to diet sustainability into a single food database: Application to the French case.

    PubMed

    Gazan, Rozenn; Barré, Tangui; Perignon, Marlène; Maillot, Matthieu; Darmon, Nicole; Vieux, Florent

    2018-01-01

    The holistic approach required to assess diet sustainability is hindered by the lack of comprehensive databases compiling relevant food metrics. Those metrics are generally scattered across different data sources with various levels of aggregation, which hampers their matching. The objective was to develop a general methodology to compile food metrics describing diet sustainability dimensions into a single database and to apply it to the French context. Each step of the methodology is detailed: indicator and food metric identification and selection, food list definition, food matching and value assignment. For the French case, nutrient and contaminant content, bioavailability factors, distribution of dietary intakes, portion sizes, food prices, and greenhouse gas emission, acidification and marine eutrophication estimates were allocated to 212 commonly consumed generic foods. This generic database compiling 279 metrics will allow the simultaneous evaluation of the four dimensions of diet sustainability, namely the health, economic, social and environmental dimensions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Inference of emission rates from multiple sources using Bayesian probability theory.

    PubMed

    Yee, Eugene; Flesch, Thomas K

    2010-03-01

    The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
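
    To make the inference-versus-inversion contrast concrete, the sketch below uses a linear source-receptor model with Gaussian measurement error and a Gaussian prior, for which the posterior over emission rates has a closed form, and compares it with plain least-squares inversion. The coupling matrix, noise level, prior, and data are synthetic, and the conjugate-Gaussian treatment is a simplification of the paper's full Bayesian analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic source-receptor coupling: 8 sensors, 4 area sources.
C = rng.uniform(0.1, 1.0, size=(8, 4))      # dispersion-model coefficients
q_true = np.array([2.0, 0.5, 1.5, 1.0])     # true emission rates (g/s)
y = C @ q_true + rng.normal(0, 0.05, 8)     # noisy concentration data

sigma_obs = 0.05                             # measurement + model error std
prior_mean = np.ones(4)
prior_cov = np.eye(4)

# Gaussian conjugate posterior for the emission rates q.
precision = np.linalg.inv(prior_cov) + C.T @ C / sigma_obs ** 2
post_cov = np.linalg.inv(precision)
post_mean = post_cov @ (np.linalg.inv(prior_cov) @ prior_mean
                        + C.T @ y / sigma_obs ** 2)

# Plain least-squares inversion, which ignores prior information and error structure.
q_lsq, *_ = np.linalg.lstsq(C, y, rcond=None)
print("posterior mean:", post_mean, "least squares:", q_lsq)
```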

  11. Separation of simultaneous sources using a structural-oriented median filter in the flattened dimension

    NASA Astrophysics Data System (ADS)

    Gan, Shuwei; Wang, Shoudong; Chen, Yangkang; Chen, Xiaohong; Xiang, Kui

    2016-01-01

    Simultaneous-source shooting can tremendously shorten the acquisition period and improve the quality of seismic data for better subsalt seismic imaging, but at the expense of introducing strong interference (blending noise) into the acquired seismic data. We propose to use a structural-oriented median filter to attenuate the blending noise along the structural direction of seismic profiles. The principle of the proposed approach is to first flatten the seismic record in local spatial windows and then to apply a traditional median filter (MF) along the third, flattened dimension. The key component of the proposed approach is the estimation of the local slope, which can be calculated by first scanning the NMO velocity and then converting the velocity to the local slope. Both synthetic and field data examples show that the proposed approach can successfully separate the simultaneous-source data into individual sources. We provide an open-source toy example to better demonstrate the proposed methodology.
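
    A stripped-down version of the flatten-filter-unflatten idea is sketched below using integer trace shifts derived from a per-trace slope and a running median across traces. The data and slope field are synthetic, and real implementations use sub-sample shifts, local spatial windows, and slopes estimated from NMO velocity scanning rather than this simplified stand-in.

```python
import numpy as np

def structural_median_filter(data, slopes, aperture=5):
    """Flatten each trace along the local structure, apply a median filter
    across the flattened (trace) dimension, then shift back.
    data:   2-D array (time samples x traces) of blended records
    slopes: per-trace time shift (in samples) describing the local structure"""
    nt, ntr = data.shape
    flattened = np.empty_like(data)
    for i in range(ntr):
        flattened[:, i] = np.roll(data[:, i], -int(round(slopes[i])))
    half = aperture // 2
    filtered = np.empty_like(flattened)
    for i in range(ntr):
        lo, hi = max(0, i - half), min(ntr, i + half + 1)
        filtered[:, i] = np.median(flattened[:, lo:hi], axis=1)  # MF across traces
    out = np.empty_like(filtered)
    for i in range(ntr):
        out[:, i] = np.roll(filtered[:, i], int(round(slopes[i])))
    return out

# Toy usage: 400 time samples x 60 traces with a linear structural moveout.
data = np.random.default_rng(0).normal(size=(400, 60))
slopes = 0.5 * np.arange(60)                  # samples of moveout per trace
separated = structural_median_filter(data, slopes)
```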

  12. Practical guide: Tools and methodologies for an oil and gas industry emission inventory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, C.C.; Killian, T.L.

    1996-12-31

    During the preparation of Title V Permit applications, the quantification and speciation of emission sources from oil and gas facilities were reevaluated to determine the "potential-to-emit." The existing emissions were primarily based on EPA emission factors such as AP-42, for tanks, combustion sources, and fugitive emissions from component leaks. Emissions from insignificant activities and routine operations that are associated with maintenance, startups and shutdowns, and releases to control devices also required quantification. To reconcile EPA emission factors with test data, process knowledge, and manufacturer's data, a careful review of other estimation options was performed. This paper presents the results of this analysis of emission sources at oil and gas facilities, including exploration and production, compressor stations and gas plants.

  13. An integrative cross-design synthesis approach to estimate the cost of illness: an applied case to the cost of depression in Catalonia.

    PubMed

    Bendeck, Murielle; Serrano-Blanco, Antoni; García-Alonso, Carlos; Bonet, Pere; Jordà, Esther; Sabes-Figuera, Ramon; Salvador-Carulla, Luis

    2013-04-01

    Cost of illness (COI) studies are carried out under conditions of uncertainty and with incomplete information. There are concerns regarding their generalisability, accuracy and usability in evidence-informed care. A hybrid methodology is used to estimate the regional costs of depression in Catalonia (Spain) following an integrative approach. The cross-design synthesis included nominal groups and quantitative analysis of both top-down and bottom-up studies, and incorporated primary and secondary data from different sources of information in Catalonia. Sensitivity analysis used probabilistic Monte Carlo simulation modelling. A dissemination strategy was planned, including a standard form adapted from cost-effectiveness studies to summarise methods and results. The method used allows for a comprehensive estimate of the cost of depression in Catalonia. Health officers and decision-makers concluded that this methodology provided useful information and knowledge for evidence-informed planning in mental health. The mix of methods, combined with a simulation model, contributed to a reduction in data gaps and, in conditions of uncertainty, supplied more complete information on the costs of depression in Catalonia. This approach to COI should be differentiated from other COI designs to allow like-with-like comparisons. A consensus on COI typology, procedures and dissemination is needed.
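
    The probabilistic Monte Carlo step of such a sensitivity analysis can be sketched as below: prevalence and unit costs are drawn from assumed distributions and the resulting total-cost distribution is summarized. All distributions and the population figure are illustrative, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 10_000

# Illustrative input distributions (NOT the study's data).
prevalence = rng.beta(40, 960, n_sim)                        # 12-month prevalence
population = 5_000_000                                       # adults covered
cost_per_case = rng.lognormal(mean=7.5, sigma=0.4, size=n_sim)  # EUR per case per year

total_cost = prevalence * population * cost_per_case
print(f"median {np.median(total_cost)/1e6:.0f} M EUR, "
      f"95% interval {np.percentile(total_cost, 2.5)/1e6:.0f}-"
      f"{np.percentile(total_cost, 97.5)/1e6:.0f} M EUR")
```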

  14. Regional prediction of long-term landfill gas to energy potential.

    PubMed

    Amini, Hamid R; Reinhart, Debra R

    2011-01-01

    Quantifying landfill gas to energy (LFGTE) potential as a source of renewable energy is difficult due to the challenges involved in modeling landfill gas (LFG) generation. In this paper a methodology is presented to estimate LFGTE potential on a regional scale over a 25-year timeframe with consideration of modeling uncertainties. The methodology was demonstrated for the US state of Florida, as a case study, and showed that Florida could increase the annual LFGTE production by more than threefold by 2035 through installation of LFGTE facilities at all landfills. The estimated electricity production potential from Florida LFG is equivalent to removing some 70 million vehicles from highways or replacing over 800 million barrels of oil consumption during the 2010-2035 timeframe. Diverting food waste could significantly reduce fugitive LFG emissions, while having minimal effect on the LFGTE potential; whereas, achieving high diversion goals through increased recycling will result in reduced uncollected LFG and significant loss of energy production potential which may be offset by energy savings from material recovery and reuse. Estimates showed that the power density for Florida LFGTE production could reach as high as 10 Wm(-2) with optimized landfill operation and energy production practices. The environmental benefits from increased lifetime LFG collection efficiencies magnify the value of LFGTE projects. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Focal mechanism of the seismic series prior to the 2011 El Hierro eruption

    NASA Astrophysics Data System (ADS)

    del Fresno, C.; Buforn, E.; Cesca, S.; Domínguez Cerdeña, I.

    2015-12-01

    The onset of the submarine eruption of El Hierro (10-Oct-2011) was preceded by three months of low-magnitude seismicity (Mw<4.0) characterized by a well-documented hypocenter migration from the center to the south of the island. The seismic sources of this series have been studied in order to understand the physical process of magma migration. Different methodologies were used to obtain focal mechanisms of the largest shocks. Firstly, we estimated joint fault plane solutions for 727 shocks using first-motion P polarities to infer the stress pattern of the sequence and to determine the time evolution of the principal axes orientation. Results show almost vertical T-axes during the first two months of the series and horizontal P-axes in the N-S direction coinciding with the migration. Secondly, a point-source MT inversion was performed with data from the 21 largest earthquakes of the series (M>3.5). Amplitude spectra were fitted at local distances (<20 km). Reliability and stability of the results were evaluated with synthetic data. Results show a change in the focal mechanism pattern within the first days of October, varying from complex sources with higher non-double-couple components before that date to a simpler strike-slip mechanism with horizontal tension axes in the E-W direction during the week prior to the eruption onset. A detailed study was carried out for the 8 October 2011 earthquake (Mw=4.0). The focal mechanism was retrieved using an MT inversion at regional and local distances. Results indicate an important strike-slip component and a null isotropic component. The stress pattern obtained corresponds to horizontal compression in a NNW-SSE direction, parallel to the southern ridge of the island, and a quasi-horizontal extension in an E-W direction. Finally, a simple source time function of 0.3 s duration has been estimated for this shock using the empirical Green's function methodology.

  16. Health effects from low-frequency noise and infrasound in the general population: Is it time to listen? A systematic review of observational studies.

    PubMed

    Baliatsas, Christos; van Kamp, Irene; van Poll, Ric; Yzermans, Joris

    2016-07-01

    A systematic review of observational studies was conducted to assess the association between everyday life low-frequency noise (LFN) components, including infrasound and health effects in the general population. Literature databases Pubmed, Embase and PsycInfo and additional bibliographic sources such as reference sections of key publications and journal databases were searched for peer-reviewed studies published from 2000 to 2015. Seven studies met the inclusion criteria. Most of them examined subjective annoyance as primary outcome. The adequacy of provided information in the included papers and methodological quality of studies was also addressed. Moreover, studies were screened for meta-analysis eligibility. Some associations were observed between exposure to LFN and annoyance, sleep-related problems, concentration difficulties and headache in the adult population living in the vicinity of a range of LFN sources. However, evidence, especially in relation to chronic medical conditions, was very limited. The estimated pooled prevalence of high subjective annoyance attributed to LFN was about 10%. Epidemiological research on LFN and health effects is scarce and suffers from methodological shortcomings. Low frequency noise in the everyday environment constitutes an issue that requires more research attention, particularly for people living in the vicinity of relevant sources. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Methodology for Estimating Total Automotive Manufacturing Costs

    DOT National Transportation Integrated Search

    1983-04-01

    A number of methodologies for estimating manufacturing costs have been developed. This report discusses the different approaches and shows that an approach to estimating manufacturing costs in the automobile industry based on surrogate plants is pref...

  18. A methodological approach to a realistic evaluation of skin absorbed doses during manipulation of radioactive sources by means of GAMOS Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Italiano, Antonio; Amato, Ernesto; Auditore, Lucrezia; Baldari, Sergio

    2018-05-01

    The accurate evaluation of the radiation burden associated with radiation absorbed doses to the skin of the extremities during the manipulation of radioactive sources is a critical issue in operational radiological protection, deserving the most accurate calculation approaches available. Monte Carlo simulation of radiation transport and interaction is the gold standard for the calculation of dose distributions in complex geometries and in the presence of extended spectra from multi-radiation sources. We propose the use of Monte Carlo simulations in GAMOS in order to accurately estimate the dose to the extremities during manipulation of radioactive sources. We report the results of these simulations for 90Y, 131I, 18F and 111In nuclides in water solutions enclosed in glass or plastic receptacles, such as vials or syringes. Skin equivalent doses at 70 μm of depth and dose-depth profiles are reported for different configurations, highlighting the importance of adopting a realistic geometrical configuration in order to obtain accurate dosimetric estimates. Owing to the ease of implementing GAMOS simulations, case-specific geometries and nuclides can be adopted, and results can be obtained in about ten minutes or less of computation time on a common workstation.

  19. Determination of Time Dependent Virus Inactivation Rates

    NASA Astrophysics Data System (ADS)

    Chrysikopoulos, C. V.; Vogler, E. T.

    2003-12-01

    A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
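
    The core idea, a local slope of the log-normalized virus concentration with resampling-based confidence intervals, can be sketched as below. This simplified version replaces the universal-kriging slope estimator with a smooth polynomial trend and central differences, and the inactivation data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic batch inactivation data: times (days) and ln(C/C0) observations.
t = np.arange(0.0, 30.0, 2.0)
true_lambda = 0.05 + 0.004 * t                       # time-dependent rate (1/day)
logC = -np.cumsum(np.r_[0.0, 0.5 * (true_lambda[1:] + true_lambda[:-1]) * np.diff(t)])
logC_obs = logC + rng.normal(0, 0.05, t.size)

def rate_coefficients(times, log_conc):
    """lambda(t) = -d(ln C/C0)/dt, here via central differences."""
    return -np.gradient(log_conc, times)

# Smooth trend, then bootstrap the residuals for percentile confidence intervals.
trend = np.poly1d(np.polyfit(t, logC_obs, 3))
resid = logC_obs - trend(t)
boot = np.array([rate_coefficients(t, trend(t) + rng.choice(resid, resid.size))
                 for _ in range(1000)])
lam_hat = rate_coefficients(t, trend(t))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5], axis=0)
print(lam_hat, ci_low, ci_high)
```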

  20. Expanded uncertainty estimation methodology in determining the sandy soils filtration coefficient

    NASA Astrophysics Data System (ADS)

    Rusanova, A. D.; Malaja, L. D.; Ivanov, R. N.; Gruzin, A. V.; Shalaj, V. V.

    2018-04-01

    A methodology for estimating the combined standard uncertainty in determining the filtration coefficient of sandy soils has been developed. Laboratory tests were carried out, from which the filtration coefficient was determined and its combined and expanded uncertainties were estimated.
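
    The arithmetic behind combined and expanded uncertainty follows the usual GUM conventions: a root-sum-of-squares of the standard-uncertainty components, multiplied by a coverage factor. The individual components below are invented, not the paper's budget.

```python
from math import sqrt

# Illustrative standard-uncertainty components for a filtration-coefficient
# measurement (head reading, timing, sample geometry, temperature correction), in m/s.
components = {"head": 0.8e-6, "time": 0.3e-6, "geometry": 1.1e-6, "temp": 0.4e-6}

u_combined = sqrt(sum(u ** 2 for u in components.values()))   # root-sum-of-squares
k = 2.0                                                        # coverage factor (~95 %)
U_expanded = k * u_combined
print(f"u_c = {u_combined:.2e} m/s, U (k=2) = {U_expanded:.2e} m/s")
```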

  1. The Development of a Methodology for Estimating the Cost of Air Force On-the-Job Training.

    ERIC Educational Resources Information Center

    Samers, Bernard N.; And Others

    The Air Force uses a standardized costing methodology for resident technical training schools (TTS); no comparable methodology exists for computing the cost of on-the-job training (OJT). This study evaluates three alternative survey methodologies and a number of cost models for estimating the cost of OJT for airmen training in the Administrative…

  2. Identifying rural food deserts: Methodological considerations for food environment interventions.

    PubMed

    Lebel, Alexandre; Noreau, David; Tremblay, Lucie; Oberlé, Céline; Girard-Gadreau, Maurie; Duguay, Mathieu; Block, Jason P

    2016-06-09

    Food insecurity is an important public health issue and affects 13% of Canadian households. It is associated with poor accessibility to fresh, diverse and affordable food products. However, measurement of the food environment is challenging in rural settings since the proximity of food supply sources is unevenly distributed. The objective of this study was to develop a methodology to identify food deserts in rural environments. In-store evaluations of 25 food products were performed for all food stores located in four contiguous rural counties in Quebec. The quality of food products was estimated using four indices: freshness, affordability, diversity and relative availability. The road network distance from all residences to the closest food store with a favourable score on the four dimensions was mapped to identify residential clusters located in deprived communities without reasonable access to a "good" food source. The result was compared with the food desert parameters proposed by the US Department of Agriculture (USDA), as well as with the perceptions of a group of regional stakeholders. When food quality was considered, food deserts appeared more prevalent than when only the USDA definition was used. Objective measurements of the food environment matched stakeholders' perceptions. Food stores' characteristics are different in rural areas and require an in-store estimation to identify potential rural food deserts. The objective measurements of the food environment combined with the field knowledge of stakeholders may help to shape stronger arguments to gain the support of decision-makers to develop relevant interventions.

  3. Optimal Multi-Type Sensor Placement for Structural Identification by Static-Load Testing

    PubMed Central

    Papadopoulou, Maria; Vernay, Didier; Smith, Ian F. C.

    2017-01-01

    Assessing ageing infrastructure is a critical challenge for civil engineers due to the difficulty in the estimation and integration of uncertainties in structural models. Field measurements are increasingly used to improve knowledge of the real behavior of a structure; this activity is called structural identification. Error-domain model falsification (EDMF) is an easy-to-use model-based structural-identification methodology which robustly accommodates systematic uncertainties originating from sources such as boundary conditions, numerical modelling and model fidelity, as well as aleatory uncertainties from sources such as measurement error and material parameter-value estimations. In most practical applications of structural identification, sensors are placed using engineering judgment and experience. However, since sensor placement is fundamental to the success of structural identification, a more rational and systematic method is justified. This study presents a measurement system design methodology to identify the best sensor locations and sensor types using information from static-load tests. More specifically, three static-load tests were studied for the sensor system design using three types of sensors for a performance evaluation of a full-scale bridge in Singapore. Several sensor placement strategies are compared using joint entropy as an information-gain metric. A modified version of the hierarchical algorithm for sensor placement is proposed to take into account mutual information between load tests. It is shown that a carefully-configured measurement strategy that includes multiple sensor types and several load tests maximizes information gain. PMID:29240684
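
    The joint-entropy placement criterion can be illustrated with a greedy search over candidate locations, scoring each candidate set by the Gaussian joint entropy 0.5·log det(2πe·Σ) of model predictions at those locations. The prediction samples are synthetic, and the hierarchical algorithm and multi-load-test mutual-information refinements from the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic model-prediction samples: 200 model instances x 12 candidate locations.
predictions = rng.normal(size=(200, 12)) @ rng.normal(size=(12, 12))

def joint_entropy(pred_subset):
    """Gaussian joint entropy of predictions at the selected locations."""
    cov = np.atleast_2d(np.cov(pred_subset, rowvar=False))
    cov += 1e-9 * np.eye(cov.shape[0])                 # numerical regularization
    return 0.5 * np.linalg.slogdet(2 * np.pi * np.e * cov)[1]

def greedy_placement(predictions, n_sensors):
    selected, remaining = [], list(range(predictions.shape[1]))
    for _ in range(n_sensors):
        best = max(remaining,
                   key=lambda j: joint_entropy(predictions[:, selected + [j]]))
        selected.append(best)
        remaining.remove(best)
    return selected

print(greedy_placement(predictions, n_sensors=4))
```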

  4. Bayesian data fusion for spatial prediction of categorical variables in environmental sciences

    NASA Astrophysics Data System (ADS)

    Gengler, Sarah; Bogaert, Patrick

    2014-12-01

    First developed to predict continuous variables, Bayesian Maximum Entropy (BME) has become a complete framework in the context of space-time prediction since it has been extended to predict categorical variables and mixed random fields. This method proposes solutions to combine several sources of data whatever the nature of the information. However, the various attempts that were made to adapt the BME methodology to categorical variables and mixed random fields faced some limitations, such as a high computational burden. The main objective of this paper is to overcome this limitation by generalizing the Bayesian Data Fusion (BDF) theoretical framework to categorical variables, which is in essence a simplification of the BME method through the convenient conditional independence hypothesis. The BDF methodology for categorical variables is first described and then applied to a practical case study: the estimation of soil drainage classes using a soil map and point observations in the sandy area of Flanders around the city of Mechelen (Belgium). The BDF approach is compared to BME along with more classical approaches, such as Indicator CoKriging (ICK) and logistic regression. Estimators are compared using various indicators, namely the Percentage of Correctly Classified locations (PCC) and the Average Highest Probability (AHP). Although the BDF methodology for categorical variables is in essence a simplification of the BME approach, both methods lead to similar results and have strong advantages compared to ICK and logistic regression.
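
    The conditional-independence fusion step at the heart of BDF can be sketched as follows; the drainage classes, the prior, and the per-source likelihoods are illustrative assumptions, not values from the Flanders case study.

```python
import numpy as np

classes = ["well_drained", "moderate", "poorly_drained"]
prior = np.array([0.5, 0.3, 0.2])            # e.g. regional class frequencies

# P(observation_k | class), one row per data source (soil map, point data, ...).
likelihoods = np.array([
    [0.6, 0.3, 0.1],   # soil-map evidence at the prediction location
    [0.2, 0.5, 0.3],   # evidence from nearby point observations
])

# Under conditional independence, the fused posterior is proportional to the
# prior times the product of the per-source likelihoods.
posterior = prior * likelihoods.prod(axis=0)
posterior /= posterior.sum()
for c, p in zip(classes, posterior):
    print(f"{c}: {p:.3f}")
```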

  5. Methodology to Estimate the Quantity, Composition, and ...

    EPA Pesticide Factsheets

    This report, Methodology to Estimate the Quantity, Composition and Management of Construction and Demolition Debris in the US, was developed to expand access to data on CDD in the US and to support research on CDD and sustainable materials management. Since past US EPA CDD estimates have been limited to building-related CDD, a goal in the development of this methodology was to use data originating from CDD facilities and contractors to better capture the current picture of total CDD management, including materials from roads, bridges and infrastructure.

  6. Methodological Rigor in Preclinical Cardiovascular Studies

    PubMed Central

    Ramirez, F. Daniel; Motazedian, Pouya; Jung, Richard G.; Di Santo, Pietro; MacDonald, Zachary D.; Moreland, Robert; Simard, Trevor; Clancy, Aisling A.; Russo, Juan J.; Welch, Vivian A.; Wells, George A.

    2017-01-01

    Rationale: Methodological sources of bias and suboptimal reporting contribute to irreproducibility in preclinical science and may negatively affect research translation. Randomization, blinding, sample size estimation, and considering sex as a biological variable are deemed crucial study design elements to maximize the quality and predictive value of preclinical experiments. Objective: To examine the prevalence and temporal patterns of recommended study design element implementation in preclinical cardiovascular research. Methods and Results: All articles published over a 10-year period in 5 leading cardiovascular journals were reviewed. Reports of in vivo experiments in nonhuman mammals describing pathophysiology, genetics, or therapeutic interventions relevant to specific cardiovascular disorders were identified. Data on study design and animal model use were collected. Citations at 60 months were additionally examined as a surrogate measure of research impact in a prespecified subset of studies, stratified by individual and cumulative study design elements. Of 28 636 articles screened, 3396 met inclusion criteria. Randomization was reported in 21.8%, blinding in 32.7%, and sample size estimation in 2.3%. Temporal and disease-specific analyses show that the implementation of these study design elements has overall not appreciably increased over the past decade, except in preclinical stroke research, which has uniquely demonstrated significant improvements in methodological rigor. In a subset of 1681 preclinical studies, randomization, blinding, sample size estimation, and inclusion of both sexes were not associated with increased citations at 60 months. Conclusions: Methodological shortcomings are prevalent in preclinical cardiovascular research, have not substantially improved over the past 10 years, and may be overlooked when designing subsequent studies. Resultant risks of bias and threats to study validity have the potential to hinder progress in cardiovascular medicine as preclinical research often precedes and informs clinical trials. Stroke research quality has uniquely improved in recent years, warranting a closer examination for interventions to model in other cardiovascular fields. PMID:28373349

  7. Methodological Rigor in Preclinical Cardiovascular Studies: Targets to Enhance Reproducibility and Promote Research Translation.

    PubMed

    Ramirez, F Daniel; Motazedian, Pouya; Jung, Richard G; Di Santo, Pietro; MacDonald, Zachary D; Moreland, Robert; Simard, Trevor; Clancy, Aisling A; Russo, Juan J; Welch, Vivian A; Wells, George A; Hibbert, Benjamin

    2017-06-09

    Methodological sources of bias and suboptimal reporting contribute to irreproducibility in preclinical science and may negatively affect research translation. Randomization, blinding, sample size estimation, and considering sex as a biological variable are deemed crucial study design elements to maximize the quality and predictive value of preclinical experiments. The objective was to examine the prevalence and temporal patterns of recommended study design element implementation in preclinical cardiovascular research. All articles published over a 10-year period in 5 leading cardiovascular journals were reviewed. Reports of in vivo experiments in nonhuman mammals describing pathophysiology, genetics, or therapeutic interventions relevant to specific cardiovascular disorders were identified. Data on study design and animal model use were collected. Citations at 60 months were additionally examined as a surrogate measure of research impact in a prespecified subset of studies, stratified by individual and cumulative study design elements. Of 28 636 articles screened, 3396 met inclusion criteria. Randomization was reported in 21.8%, blinding in 32.7%, and sample size estimation in 2.3%. Temporal and disease-specific analyses show that the implementation of these study design elements has overall not appreciably increased over the past decade, except in preclinical stroke research, which has uniquely demonstrated significant improvements in methodological rigor. In a subset of 1681 preclinical studies, randomization, blinding, sample size estimation, and inclusion of both sexes were not associated with increased citations at 60 months. Methodological shortcomings are prevalent in preclinical cardiovascular research, have not substantially improved over the past 10 years, and may be overlooked when designing subsequent studies. Resultant risks of bias and threats to study validity have the potential to hinder progress in cardiovascular medicine as preclinical research often precedes and informs clinical trials. Stroke research quality has uniquely improved in recent years, warranting a closer examination for interventions to model in other cardiovascular fields. © 2017 The Authors.

  8. The first Extreme Ultraviolet Explorer source catalog

    NASA Technical Reports Server (NTRS)

    Bowyer, S.; Lieu, R.; Lampton, M.; Lewis, J.; Wu, X.; Drake, J. J.; Malina, R. F.

    1994-01-01

    The Extreme Ultraviolet Explorer (EUVE) has conducted an all-sky survey to locate and identify point sources of emission in four extreme ultraviolet wavelength bands centered at approximately 100, 200, 400, and 600 A. A companion deep survey of a strip along half the ecliptic plane was simultaneously conducted. In this catalog we report the sources found in these surveys using rigorously defined criteria uniformly applied to the data set. These are the first surveys to be made in the three longer wavelength bands, and a substantial number of sources were detected in these bands. We present a number of statistical diagnostics of the surveys, including their source counts, their sensitivities, and their positional error distributions. We provide a separate list of those sources reported in the EUVE Bright Source List which did not meet our criteria for inclusion in our primary list. We also provide improved count rate and position estimates for a majority of these sources based on the improved methodology used in this paper. In total, this catalog lists 410 point sources, of which 372 have plausible optical, ultraviolet, or X-ray identifications, which are also listed.

  9. Characterization of particulate emissions from Australian open-cut coal mines: Toward improved emission estimates.

    PubMed

    Richardson, Claire; Rutherford, Shannon; Agranovski, Igor

    2018-06-01

    Given the significance of mining as a source of particulates, accurate characterization of emissions is important for the development of appropriate emission estimation techniques for use in modeling predictions and to inform regulatory decisions. The currently available emission estimation methods for Australian open-cut coal mines relate primarily to total suspended particulates and PM10 (particulate matter with an aerodynamic diameter <10 μm), and limited data are available relating to the PM2.5 (<2.5 μm) size fraction. To provide an initial analysis of the appropriateness of the currently available emission estimation techniques, this paper presents results of sampling completed at three open-cut coal mines in Australia. The monitoring data demonstrate that the particulate size fraction varies for different mining activities, and that the region in which the mine is located influences the characteristics of the particulates emitted to the atmosphere. The proportion of fine particulates in the sample increased with distance from the source, with the coarse fraction being a more significant proportion of total suspended particulates close to the source of emissions. In terms of particulate composition, the results demonstrate that the particulate emissions are predominantly sourced from naturally occurring geological material, and coal comprises less than 13% of the overall emissions. The size fractionation exhibited by the sampling data sets is similar to that adopted in current Australian emission estimation methods but differs from the size fractionation presented in the U.S. Environmental Protection Agency methodology. Development of region-specific emission estimation techniques for PM10 and PM2.5 from open-cut coal mines is necessary to allow accurate prediction of particulate emissions to inform regulatory decisions and for use in modeling predictions. Comprehensive air quality monitoring was undertaken, and corresponding recommendations were provided.

  10. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  11. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  12. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  13. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  14. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  15. Component costs of foodborne illness: a scoping review

    PubMed Central

    2014-01-01

    Background Governments require high-quality scientific evidence to prioritize resource allocation and the cost-of-illness (COI) methodology is one technique used to estimate the economic burden of a disease. However, variable cost inventories make it difficult to interpret and compare costs across multiple studies. Methods A scoping review was conducted to identify the component costs and the respective data sources used for estimating the cost of foodborne illnesses in a population. This review was accomplished by: (1) identifying the research question and relevant literature, (2) selecting the literature, (3) charting, collating, and summarizing the results. All pertinent data were extracted at the level of detail reported in a study, and the component cost and source data were subsequently grouped into themes. Results Eighty-four studies were identified that described the cost of foodborne illness in humans. Most studies (80%) were published in the last two decades (1992–2012) in North America and Europe. The 10 most frequently estimated costs were due to illnesses caused by bacterial foodborne pathogens, with non-typhoidal Salmonella spp. being the most commonly studied. Forty studies described both individual (direct and indirect) and societal level costs. The direct individual level component costs most often included were hospital services, physician personnel, and drug costs. The most commonly reported indirect individual level component cost was productivity losses due to sick leave from work. Prior estimates published in the literature were the most commonly used source of component cost data. Data sources were not provided or specifically linked to component costs in several studies. Conclusions The results illustrated a highly variable depth and breadth of individual and societal level component costs, and a wide range of data sources being used. This scoping review can be used as evidence that there is a lack of standardization in cost inventories in the cost of foodborne illness literature, and to promote greater transparency and detail of data source reporting. By conforming to a more standardized cost inventory, and by reporting data sources in more detail, there will be an increase in cost of foodborne illness research that can be interpreted and compared in a meaningful way. PMID:24885154

  16. Nutrition surveillance using a small open cohort: experience from Burkina Faso.

    PubMed

    Altmann, Mathias; Fermanian, Christophe; Jiao, Boshen; Altare, Chiara; Loada, Martin; Myatt, Mark

    2016-01-01

    Nutritional surveillance remains generally weak and early warning systems are needed in areas with a high burden of acute under-nutrition. In order to enhance insight into nutritional surveillance, a community-based sentinel sites approach, known as the Listening Posts (LP) Project, was piloted in Burkina Faso by Action Contre la Faim (ACF). This paper presents ACF's experience with the LP approach and investigates potential selection and observational biases. Six primary sampling units (PSUs) were selected in each livelihood zone using the centric systematic area sampling methodology. In each PSU, 22 children aged between 6 and 24 months were selected by proximity sampling. The prevalence of global acute malnutrition (GAM) for each month from January 2011 to December 2013 was estimated using a Bayesian normal-normal conjugate analysis followed by PROBIT estimation. To validate the LP approach in detecting changes over time, the time trends of mid-upper arm circumference (MUAC) from LP and from five cross-sectional surveys were modelled using polynomial regression and compared using a Wald test. The differences between prevalence estimates from the two data sources were used to assess selection and observational biases. The 95 % credible interval around GAM prevalence estimates using the LP approach ranged between +6.5 %/-6.0 % on a prevalence of 36.1 % and +3.5 %/-2.9 % on a prevalence of 10.8 %. LP and cross-sectional survey time trend models were well correlated (p = 0.6337). Although LP showed a slight but significant trend for GAM to decrease over time at a rate of -0.26 %/visit, the prevalence estimates from the two data sources showed good agreement over a 3-year period. The LP methodology has proved to be valid in following trends of GAM prevalence for a period of 3 years without selection bias. However, a slight observational bias was observed, requiring a periodical reselection of the sentinel sites. This kind of surveillance project is suited to areas with a high burden of acute under-nutrition where early warning systems are strongly needed. Advocacy is necessary to develop a sustainable nutrition surveillance system and to support the use of surveillance data in guiding nutritional programs.
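
    A rough sketch of the two estimation steps named above: a normal-normal conjugate update on the mean MUAC followed by a PROBIT prevalence estimate, P(MUAC < 125 mm). The prior, the assumed known SD, and the simulated monthly sample are illustrative assumptions, not the project's data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
muac_mm = rng.normal(loc=138.0, scale=12.0, size=132)  # one month of sentinel data

sigma = 12.0                     # assumed known population SD of MUAC (mm)
mu0, tau0 = 140.0, 5.0           # prior on the mean, e.g. from earlier visits
n, xbar = len(muac_mm), muac_mm.mean()

# Conjugate posterior for the mean MUAC.
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + n * xbar / sigma**2)

# PROBIT estimator of GAM prevalence (MUAC < 125 mm) at the posterior mean,
# with a crude interval propagated from the posterior of the mean.
cutoff = 125.0
prev = norm.cdf((cutoff - post_mean) / sigma)
lo, hi = (norm.cdf((cutoff - m) / sigma)
          for m in (post_mean + 1.96 * np.sqrt(post_var),
                    post_mean - 1.96 * np.sqrt(post_var)))
print(f"GAM prevalence ~ {prev:.1%} (approx. 95% interval {lo:.1%}-{hi:.1%})")
```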

  17. The USA Nr Inventory: Dominant Sources and Primary Transport Pathways

    NASA Astrophysics Data System (ADS)

    Sabo, R. D.; Clark, C.; Sobota, D. J.; Compton, J.; Cooter, E. J.; Schwede, D. B.; Bash, J. O.; Rea, A.; Dobrowolski, J. P.

    2016-12-01

    Efforts to mitigate the deleterious effects of excess reactive nitrogen (Nr) on human health and ecosystem goods and services, while ensuring food, biofuel, and fiber availability, are among the most pressing environmental management challenges of this century. Effective management of Nr requires up-to-date inventories that quantitatively characterize the sources, transport, and transformation of Nr through the environment. The inherent complexity of the nitrogen cycle, however, with multiple exchange points across air, water, and terrestrial media, renders such inventories difficult to compile and manage. Previous Nr Inventories are for 2002 and 2007, and used data sources that have since been improved. Thus, this recent inventory will substantially advance the methodology across many sectors of the inventory (e.g. deposition and biological fixation in crops and natural systems) and create a recent snapshot that is sorely needed for policy planning and trends analysis. Here we use a simple mass balance approach to estimate the input-output budgets for all United States Geological Survey Hydrologic Unit Code-8 watersheds. We focus on a recent year (i.e. 2012) to update the Nr Inventory, but apply the analytical approach for multiple years where possible to assess trends through time. We also compare various sector estimates using multiple methodologies. Assembling datasets that account for new Nr inputs into watersheds (e.g., atmospheric NOy deposition, food imports, biological N fixation) and internal fluxes of recycled Nr (e.g., manure, Nr emissions/volatilization) provides an unprecedented, data-driven computation of N flux. Input-output budgets will offer insight into 1) the dominant sources of Nr in a watershed (e.g., food imports, atmospheric N deposition, or fertilizer), 2) the primary loss pathways for Nr (e.g., crop N harvest, volatilization/emissions), and 3) which watersheds are net sources versus sinks of Nr. These insights will provide needed clarity for managers looking to minimize the loss of Nr to atmospheric and aquatic compartments, while also providing a foundational database for researchers assessing the dominant controls of N retention and loss in natural and anthropogenically dominated ecosystems. Disclaimer: Views expressed are the authors' and not views or policies of the U.S. EPA.
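
    The watershed-level mass balance reduces to simple arithmetic: new inputs minus loss pathways per HUC-8 watershed. A toy sketch with assumed sector names and values, not the inventory's actual fluxes:

```python
import pandas as pd

budget = pd.DataFrame({
    "huc8": ["01010001", "01010002"],
    "fertilizer": [12.4, 3.1],               # all fluxes in kg N / ha / yr (assumed)
    "atm_deposition": [6.0, 7.2],
    "bio_fixation": [4.5, 1.8],
    "food_feed_imports": [2.2, 0.4],
    "crop_harvest": [10.1, 2.5],
    "volatilization_emissions": [3.0, 1.1],
}).set_index("huc8")

inputs = ["fertilizer", "atm_deposition", "bio_fixation", "food_feed_imports"]
outputs = ["crop_harvest", "volatilization_emissions"]
budget["net_nr"] = budget[inputs].sum(axis=1) - budget[outputs].sum(axis=1)
print(budget["net_nr"])  # positive = net source, negative = net sink
```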

  18. Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion

    NASA Astrophysics Data System (ADS)

    Hesser, T.; Farthing, M. W.; Brodie, K.

    2016-02-01

    The bathymetry from the surf zone to the shoreline undergoes frequent, active movement due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar to inversion techniques based on standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best estimate of bathymetry" at a given time. Understanding how the sources of error and the varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and, in turn, increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed using linear wave theory from the direct measurements. These gridded datasets can have temporal and spatial resolutions that do not match the desired model parameters and therefore could reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys as well as alternative direct in-situ measurements using sonic altimeters.
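
    One stochastic Ensemble Kalman Filter analysis step, the core of the inversion technique named above, can be sketched as follows. The linear observation operator and all numbers are placeholders; in practice the operator is a wave model mapping depth to the gridded standoff observations.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, n_grid, n_obs = 100, 50, 10

depths = rng.normal(5.0, 1.0, size=(n_ens, n_grid))        # prior ensemble (m)
H = np.zeros((n_obs, n_grid))                               # observe 10 grid cells
H[np.arange(n_obs), np.arange(0, n_grid, 5)] = 1.0

obs = rng.normal(6.0, 0.1, size=n_obs)                      # gridded "measurements"
obs_err = 0.1

# Ensemble (cross-)covariances.
X = depths - depths.mean(axis=0)
HX = (H @ depths.T).T
HXa = HX - HX.mean(axis=0)
P_xy = X.T @ HXa / (n_ens - 1)
P_yy = HXa.T @ HXa / (n_ens - 1) + obs_err**2 * np.eye(n_obs)
K = P_xy @ np.linalg.inv(P_yy)                              # Kalman gain

# Perturbed-observation update of each ensemble member.
perturbed = obs + rng.normal(0.0, obs_err, size=(n_ens, n_obs))
depths_post = depths + (perturbed - HX) @ K.T
print("posterior mean depth at observed cells:", H @ depths_post.mean(axis=0))
```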

  19. Efficient inversion of volcano deformation based on finite element models : An application to Kilauea volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Charco, María; González, Pablo J.; Galán del Sastre, Pedro

    2017-04-01

    The Kilauea volcano (Hawaii, USA) is one of the most active volcanoes worldwide and therefore one of the best monitored volcanoes around the world. Its complex system provides a unique opportunity to investigate the dynamics of magma transport and supply. Geodetic techniques, such as Interferometric Synthetic Aperture Radar (InSAR), are being extensively used to monitor ground deformation at volcanic areas. The quantitative interpretation of such surface ground deformation measurements requires both physical modelling to simulate the observed signals and inversion approaches to estimate the magmatic source parameters. Here, we use synthetic aperture radar data from the Sentinel-1 radar interferometry satellite mission to image volcano deformation sources during the inflation along Kilauea's Southwest Rift Zone in April-May 2015. We propose a Finite Element Model (FEM) for the calculation of Green functions in a mechanically heterogeneous domain. The key aspect of the methodology lies in applying the reciprocity relationship of the Green functions between the station and the source for efficient numerical inversions. The search for the best-fitting magmatic (point) source(s) is generally conducted over an array of 3-D locations extending below a predefined volume region. However, our approach allows us to reduce the total number of Green functions to the number of observation points by using the above-mentioned reciprocity relationship. This new methodology is able to accurately represent magmatic processes using physical models capable of simulating volcano deformation in domains with non-uniform material property distributions, which will eventually lead to a better description of the status of the volcano.
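
    Once Green's functions are available (by reciprocity, one computation per observation point rather than per candidate source), the search for the best-fitting point source is a cheap linear step. The sketch below uses a toy kernel as a stand-in for FEM-computed Green's functions in a heterogeneous medium; stations, candidates, and units are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
stations = rng.uniform(-5e3, 5e3, size=(20, 2))              # station x, y (m)
candidates = np.array([[x, y, z] for x in (-2e3, 0, 2e3)
                                 for y in (-2e3, 0, 2e3)
                                 for z in (1e3, 3e3)])        # candidate sources

def green(src, sta):
    """Toy unit-strength vertical-displacement kernel (illustrative only)."""
    d2 = (sta[:, 0] - src[0])**2 + (sta[:, 1] - src[1])**2 + src[2]**2
    return src[2] / d2**1.5

# Synthetic "observed" displacements generated from one candidate plus noise.
d_obs = 4e9 * green(candidates[7], stations) + rng.normal(0, 1e-4, len(stations))

best = None
for i, src in enumerate(candidates):
    g = green(src, stations)
    strength = (g @ d_obs) / (g @ g)            # one-parameter least squares
    misfit = np.sum((d_obs - strength * g)**2)
    if best is None or misfit < best[0]:
        best = (misfit, i, strength)
print("best candidate index, strength:", best[1], best[2])
```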

  20. Advances in using Internet searches to track dengue

    PubMed Central

    Yang, Shihao; Kou, Samuel C.; Brownstein, John S.; Brooke, Nicholas

    2017-01-01

    Dengue is a mosquito-borne disease that threatens over half of the world’s population. Despite being endemic to more than 100 countries, government-led efforts and tools for timely identification and tracking of new infections are still lacking in many affected areas. Multiple methodologies that leverage the use of Internet-based data sources have been proposed as a way to complement dengue surveillance efforts. Among these, dengue-related Google search trends have been shown to correlate with dengue activity. We extend a methodological framework, initially proposed and validated for flu surveillance, to produce near real-time estimates of dengue cases in five countries/states: Mexico, Brazil, Thailand, Singapore and Taiwan. Our result shows that our modeling framework can be used to improve the tracking of dengue activity in multiple locations around the world. PMID:28727821
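
    A hedged sketch of the regression idea, loosely following the flu-surveillance framework the authors extend: case counts are regressed on a panel of search-query volumes plus recent autoregressive case terms, with an L1 penalty to select informative queries. All data below are simulated stand-ins for Google search trends and surveillance reports.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n_weeks, n_queries = 200, 15
queries = rng.lognormal(size=(n_weeks, n_queries))   # weekly query volumes
true_w = np.zeros(n_queries)
true_w[:3] = [2.0, 1.5, 0.5]                          # only a few queries matter
log_cases = queries @ true_w + rng.normal(0, 0.5, n_weeks)

# Predictors: current query volumes plus lag-1 and lag-2 case terms.
X = np.column_stack([queries[2:], log_cases[1:-1], log_cases[:-2]])
y = log_cases[2:]

model = LassoCV(cv=5).fit(X[:150], y[:150])           # train on earlier weeks
print("out-of-sample R^2:", model.score(X[150:], y[150:]))
```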

  1. Binational arsenic exposure survey: methodology and estimated arsenic intake from drinking water and urinary arsenic concentrations.

    PubMed

    Roberge, Jason; O'Rourke, Mary Kay; Meza-Montenegro, Maria Mercedes; Gutiérrez-Millán, Luis Enrique; Burgess, Jefferey L; Harris, Robin B

    2012-04-01

    The Binational Arsenic Exposure Survey (BAsES) was designed to evaluate probable arsenic exposures in selected areas of southern Arizona and northern Mexico, two regions with known elevated levels of arsenic in groundwater reserves. This paper describes the methodology of BAsES and the relationship between estimated arsenic intake from beverages and arsenic output in urine. Households from eight communities were selected for their varying groundwater arsenic concentrations in Arizona, USA and Sonora, Mexico. Adults responded to questionnaires and provided dietary information. A first morning urine void and water from all household drinking sources were collected. Associations between urinary arsenic concentration (total, organic, inorganic) and estimated level of arsenic consumed from water and other beverages were evaluated through crude associations and by random effects models. Median estimated total arsenic intake from beverages among participants from Arizona communities ranged from 1.7 to 14.1 µg/day compared to 0.6 to 3.4 µg/day among those from Mexico communities. In contrast, median urinary inorganic arsenic concentrations were greatest among participants from Hermosillo, Mexico (6.2 µg/L) whereas a high of 2.0 µg/L was found among participants from Ajo, Arizona. Estimated arsenic intake from drinking water was associated with urinary total arsenic concentration (p < 0.001), urinary inorganic arsenic concentration (p < 0.001), and urinary sum of species (p < 0.001). Urinary arsenic concentrations increased between 7% and 12% for each one percent increase in arsenic consumed from drinking water. Variability in arsenic intake from beverages and urinary arsenic output yielded counterintuitive results. Estimated intake of arsenic from all beverages was greatest among Arizonans, yet participants in Mexico had higher urinary total and inorganic arsenic concentrations. Other contributors to urinary arsenic concentrations should be evaluated.

  2. Binational Arsenic Exposure Survey: Methodology and Estimated Arsenic Intake from Drinking Water and Urinary Arsenic Concentrations

    PubMed Central

    Roberge, Jason; O’Rourke, Mary Kay; Meza-Montenegro, Maria Mercedes; Gutiérrez-Millán, Luis Enrique; Burgess, Jefferey L.; Harris, Robin B.

    2012-01-01

    The Binational Arsenic Exposure Survey (BAsES) was designed to evaluate probable arsenic exposures in selected areas of southern Arizona and northern Mexico, two regions with known elevated levels of arsenic in groundwater reserves. This paper describes the methodology of BAsES and the relationship between estimated arsenic intake from beverages and arsenic output in urine. Households from eight communities were selected for their varying groundwater arsenic concentrations in Arizona, USA and Sonora, Mexico. Adults responded to questionnaires and provided dietary information. A first morning urine void and water from all household drinking sources were collected. Associations between urinary arsenic concentration (total, organic, inorganic) and estimated level of arsenic consumed from water and other beverages were evaluated through crude associations and by random effects models. Median estimated total arsenic intake from beverages among participants from Arizona communities ranged from 1.7 to 14.1 µg/day compared to 0.6 to 3.4 µg/day among those from Mexico communities. In contrast, median urinary inorganic arsenic concentrations were greatest among participants from Hermosillo, Mexico (6.2 µg/L) whereas a high of 2.0 µg/L was found among participants from Ajo, Arizona. Estimated arsenic intake from drinking water was associated with urinary total arsenic concentration (p < 0.001), urinary inorganic arsenic concentration (p < 0.001), and urinary sum of species (p < 0.001). Urinary arsenic concentrations increased between 7% and 12% for each one percent increase in arsenic consumed from drinking water. Variability in arsenic intake from beverages and urinary arsenic output yielded counterintuitive results. Estimated intake of arsenic from all beverages was greatest among Arizonans, yet participants in Mexico had higher urinary total and inorganic arsenic concentrations. Other contributors to urinary arsenic concentrations should be evaluated. PMID:22690182

  3. Using food intake records to estimate compliance with the Eatwell Plate dietary guidelines.

    PubMed

    Whybrow, S; Macdiarmid, J I; Craig, L C A; Clark, H; McNeill, G

    2016-04-01

    The UK Eatwell Plate is consumer based advice recommending the proportions of five food groups for a balanced diet: starchy foods, fruit and vegetables, dairy foods, nondairy sources of protein and foods and drinks high in fat or sugar. Many foods comprise ingredients from several food groups and consumers need to consider how these fit with the proportions of the Eatwell Plate. This involves disaggregating composite dishes into proportions of individual food components. The present study aimed to match the diets of adults in Scotland to the Eatwell Plate dietary recommendations and to describe the assumptions and methodological issues associated with estimating Eatwell Plate proportions from dietary records. Foods from weighed intake records of 161 females and 151 males were assigned to a single Eatwell group based on the main ingredient for composite foods, and the overall Eatwell Plate proportions of each subject's diet were calculated. Food group proportions were then recalculated after disaggregating composite foods. The fruit and vegetables and starchy food groups consumed were significantly lower than recommended in the Eatwell Plate, whereas the proportions of the protein and foods high in fat or sugar were significantly higher. Failing to disaggregate composite foods gave an inaccurate estimate of the food group composition of the diet. Estimating Eatwell Plate proportions from dietary records is not straightforward, and is reliant on methodological assumptions. These need to be standardised and disseminated to ensure consistent analysis. © 2015 The British Dietetic Association Ltd.
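
    The disaggregation step can be sketched as a simple weight-splitting exercise: a composite dish is split into ingredient weights, each ingredient is assigned to one Eatwell group, and the day's group proportions are recomputed. The dish composition and group assignments below are illustrative, not the study's food-composition data.

```python
from collections import defaultdict

# Ingredient weights (g) for a composite dish, e.g. a cheese and tomato pizza.
composite = {
    "pizza": {"starchy": 120.0, "dairy": 60.0, "fruit_veg": 40.0,
              "high_fat_sugar": 20.0},
}
simple_foods = [("apple", "fruit_veg", 100.0), ("milk", "dairy", 200.0)]

totals = defaultdict(float)
for food, group, grams in simple_foods:
    totals[group] += grams
for dish, parts in composite.items():
    for group, grams in parts.items():
        totals[group] += grams

grand_total = sum(totals.values())
for group, grams in sorted(totals.items()):
    print(f"{group}: {100 * grams / grand_total:.1f}%")
```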

  4. Inverse modelling for real-time estimation of radiological consequences in the early stage of an accidental radioactivity release.

    PubMed

    Pecha, Petr; Šmídl, Václav

    2016-11-01

    A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method is designated as a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. Accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation is studied. First, a twin experiment generating noiseless simulated "artificial" observations is studied to verify the minimisation algorithm. Second, the impact of the measurement noise on the re-estimated source release rate is examined. In addition, the presented method can be used as a proposal for more advanced statistical techniques using, e.g., importance sampling. Copyright © 2016 Elsevier Ltd. All rights reserved.
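
    The re-estimation step is, at its core, a nonlinear least-squares fit of a plume model to observations. The sketch below uses a textbook Gaussian plume rather than the segmented Gaussian plume model (SGPM) of the paper, and fits a release rate and an effective release height to synthetic concentrations; all values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def plume(params, x, y, u=3.0):
    """Ground-level concentration from a continuous elevated point release."""
    log_q, h = params
    q = 10.0 ** log_q                            # release rate (Bq/s)
    sig_y, sig_z = 0.08 * x, 0.06 * x            # crude neutral-stability sigmas
    return (q / (np.pi * u * sig_y * sig_z)
            * np.exp(-y**2 / (2 * sig_y**2))
            * np.exp(-h**2 / (2 * sig_z**2)))

rng = np.random.default_rng(5)
x_obs = np.array([500.0, 1000.0, 2000.0, 3000.0, 5000.0])   # downwind distances (m)
y_obs = np.array([0.0, 100.0, -150.0, 50.0, 0.0])           # crosswind offsets (m)
c_obs = plume((9.0, 50.0), x_obs, y_obs) * rng.lognormal(0.0, 0.2, x_obs.size)

def residuals(params):
    # Fit in log space so multiplicative measurement noise is handled evenly.
    return np.log(plume(params, x_obs, y_obs)) - np.log(c_obs)

fit = least_squares(residuals, x0=(8.0, 20.0), bounds=([5.0, 0.0], [12.0, 300.0]))
print("estimated release rate (Bq/s):", 10.0 ** fit.x[0], "height (m):", fit.x[1])
```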

  5. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability to provide accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in using different grouping strategies for uncertainty components. The variance-based sensitivity analysis is thus extended to investigate the importance of a broader range of uncertainty sources: scenario, model, and different combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process, the reactive transport process). For test and demonstration purposes, the developed methodology was implemented in a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.
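
    Variance-based indices for groups of inputs can be estimated with a pick-freeze scheme, which is the basic operation the framework organizes via the Bayesian network. The test function and the grouping below are illustrative assumptions, not the groundwater model of the study.

```python
import numpy as np

def model(x):
    # Toy stand-in for a reactive-transport model output.
    return x[:, 0] * x[:, 1] + 2.0 * x[:, 2] + 0.5 * x[:, 3]**2

rng = np.random.default_rng(6)
n, d = 50_000, 4
A = rng.uniform(0, 1, size=(n, d))
B = rng.uniform(0, 1, size=(n, d))
groups = {"recharge": [0, 1], "transport": [2, 3]}   # assumed grouping

yA = model(A)
var_y = yA.var()
for name, cols in groups.items():
    C = B.copy()
    C[:, cols] = A[:, cols]            # freeze the group's inputs at the A values
    yC = model(C)
    # Pick-freeze estimator of the first-order (group) Sobol index.
    s_first = np.mean(yA * yC - yA.mean() * yC.mean()) / var_y
    print(f"first-order index for {name}: {s_first:.2f}")
```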

  6. Methodological uncertainties in multi-regression analyses of middle-atmospheric data series.

    PubMed

    Kerzenmacher, Tobias E; Keckhut, Philippe; Hauchecorne, Alain; Chanin, Marie-Lise

    2006-07-01

    Multi-regression analyses have often been used recently to detect trends, in particular in ozone or temperature data sets in the stratosphere. The confidence in detecting trends depends on a number of factors which generate uncertainties. Part of these uncertainties comes from random variability, and this is the part usually considered. It can be statistically estimated from residual deviations between the data and the fitting model. However, interferences between different sources of variability affecting the data set, such as the Quasi-Biennial Oscillation (QBO), volcanic aerosols, solar flux variability and the trend, can also be a critical source of errors. This type of error has hitherto not been well quantified. In this work an artificial data series has been generated to carry out such estimates. The sources of errors considered here are: the length of the data series, the dependence on the choice of parameters used in the fitting model and the time evolution of the trend in the data series. Curves provided here will permit future studies to test the magnitude of the methodological bias expected for a given case, as shown in several real examples. It is found that, if the data series is shorter than a decade, the uncertainties are very large, whatever factors are chosen to identify the source of the variability. However, the errors can be limited when dealing with natural variability, if a sufficient number of periods (for periodic forcings) are covered by the analysed dataset. When analysing the trend, however, the response to volcanic eruptions induces a bias, whatever the length of the data series. The signal-to-noise ratio is a key factor: doubling the noise increases the period for which data is required in order to obtain an error smaller than 10%, from 1 to 3-4 decades. Moreover, if non-linear trends are superimposed on the data, and if the length of the series is longer than five years, a non-linear function has to be used to estimate trends. When applied to real data series, and when a breakpoint in the series occurs, the study reveals that data extending over 5 years are needed to detect a significant change in the slope of the ozone trends at mid-latitudes.
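
    A minimal multi-regression setup of the kind analysed above: a linear trend plus seasonal, QBO, solar-flux and volcanic-aerosol proxy terms, fitted by ordinary least squares. All proxy series here are simulated placeholders for the real indices, and the trend uncertainty is the naive OLS standard error (i.e. ignoring autocorrelation).

```python
import numpy as np

rng = np.random.default_rng(7)
n_months = 30 * 12
t = np.arange(n_months) / 12.0                       # time in years
qbo = np.sin(2 * np.pi * t / 2.3)                    # ~28-month oscillation proxy
solar = np.sin(2 * np.pi * t / 11.0)                 # 11-year solar cycle proxy
volcanic = np.exp(-((t - 8.0) / 2.0) ** 2)           # decaying aerosol pulse
seasonal = np.sin(2 * np.pi * t)

y = -0.05 * t + 0.3 * qbo + 0.2 * solar + 0.8 * volcanic + rng.normal(0, 0.3, n_months)

X = np.column_stack([np.ones(n_months), t, seasonal, qbo, solar, volcanic])
coef, res, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = n_months - X.shape[1]
sigma2 = res[0] / dof
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
print(f"trend = {coef[1]:.3f} +/- {2 * se[1]:.3f} per year")
```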

  7. 2-D Path Corrections for Local and Regional Coda Waves: A Test of Transportability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayeda, K M; Malagnini, L; Phillips, W S

    2005-07-13

    Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. [2003] has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. We will compare performance of 1-D versus 2-D path corrections in a variety of regions. First, the complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Next, we will compare results for the Italian Alps using high frequency data from the University of Genoa. For Northern California, we used the same station and event distribution and compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) Applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz; however, for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only a 1-D coda correction is available it is still preferable over 2-D direct wave-based measures.

  8. Methodologies for Estimating Cumulative Human Exposures to Current-Use Pyrethroid Pesticides

    EPA Science Inventory

    We estimated cumulative residential pesticide exposures for a group of nine young children (4–6 years) using three different methodologies developed by the US Environmental Protection Agency and compared the results with estimates derived from measured urinary metabolite concentr...

  9. CONCEPTUAL DESIGNS FOR A NEW HIGHWAY VEHICLE EMISSIONS ESTIMATION METHODOLOGY

    EPA Science Inventory

    The report discusses six conceptual designs for a new highway vehicle emissions estimation methodology and summarizes the recommendations of each design for improving the emissions and activity factors in the emissions estimation process. he complete design reports are included a...

  10. Probabilistic estimation of long-term volcanic hazard under evolving tectonic conditions in a 1 Ma timeframe

    NASA Astrophysics Data System (ADS)

    Jaquet, O.; Lantuéjoul, C.; Goto, J.

    2017-10-01

    Risk assessments in relation to the siting of potential deep geological repositories for radioactive wastes demand the estimation of long-term tectonic hazards such as volcanicity and rock deformation. Owing to their tectonic situation, such evaluations concern many industrial regions around the world. For sites near volcanically active regions, a prevailing source of uncertainty is related to volcanic hazard. For specific situations, in particular in relation to geological repository siting, the requirements for the assessment of volcanic and tectonic hazards have to be expanded to 1 million years. At such time scales, tectonic changes are likely to influence volcanic hazard and therefore a particular stochastic model needs to be developed for the estimation of volcanic hazard. The concepts and theoretical basis of the proposed model are given and a methodological illustration is provided using data from the Tohoku region of Japan.

  11. Downward longwave surface radiation from sun-synchronous satellite data - Validation of methodology

    NASA Technical Reports Server (NTRS)

    Darnell, W. L.; Gupta, S. K.; Staylor, W. F.

    1986-01-01

    An extensive study has been carried out to validate a satellite technique for estimating downward longwave radiation at the surface. The technique, mostly developed earlier, uses operational sun-synchronous satellite data and a radiative transfer model to provide the surface flux estimates. The satellite-derived fluxes were compared directly with corresponding ground-measured fluxes at four different sites in the United States for a common one-year period. This provided a study of seasonal variations as well as a diversity of meteorological conditions. Dome heating errors in the ground-measured fluxes were also investigated and were corrected prior to the comparisons. Comparison of the monthly averaged fluxes from the satellite and ground sources for all four sites for the entire year showed a correlation coefficient of 0.98 and a standard error of estimate of 10 W/sq m. A brief description of the technique is provided, and the results validating the technique are presented.

  12. MNE software for processing MEG and EEG data

    PubMed Central

    Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.

    2013-01-01

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808
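
    A typical minimal MNE-Python workflow from raw data to a minimum-norm source estimate might look like the following; the file names are placeholders and the filter, epoching, and regularization settings are illustrative choices, not prescribed values.

```python
import mne

raw = mne.io.read_raw_fif("sample_raw.fif", preload=True)    # placeholder file
raw.filter(l_freq=1.0, h_freq=40.0)                           # basic band-pass

events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5,
                    baseline=(None, 0), preload=True)
evoked = epochs.average()

# Noise covariance from the pre-stimulus baseline; forward model from a file.
noise_cov = mne.compute_covariance(epochs, tmax=0.0)
fwd = mne.read_forward_solution("sample_fwd.fif")             # placeholder file

inverse_operator = mne.minimum_norm.make_inverse_operator(
    evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
stc = mne.minimum_norm.apply_inverse(evoked, inverse_operator,
                                     lambda2=1.0 / 9.0, method="dSPM")
print(stc)
```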

  13. PAF: A software tool to estimate free-geometry extended bodies of anomalous pressure from surface deformation data

    NASA Astrophysics Data System (ADS)

    Camacho, A. G.; Fernández, J.; Cannavò, F.

    2018-02-01

    We present a software package to carry out inversions of surface deformation data (any combination of InSAR, GPS, and terrestrial data, e.g., EDM, levelling) as produced by 3D free-geometry extended bodies with anomalous pressure changes. The anomalous structures are described as an aggregation of elementary cells (whose effects are estimated as coming from point sources) in an elastic half space. The linear inverse problem (considering some simple regularization conditions) is solved by means of an exploratory approach. This software represents the open implementation of a previously published methodology (Camacho et al., 2011). It can be freely used with large data sets (e.g. InSAR data sets) or with data coming from small control networks (e.g. GPS monitoring data), mainly in volcanic areas, to estimate the expected pressure bodies representing magmatic intrusions. Here, the software is applied to some real test cases.

  14. The economic value of remote sensing by satellite: An ERTS overview and the value of continuity of service. Volume 2: Source document

    NASA Technical Reports Server (NTRS)

    Andrews, J.; Donziger, A.; Hazelrigg, G. A., Jr.; Heiss, K. P.; Sand, F.; Stevenson, P.

    1974-01-01

    The economic value of an ERS system with a technical capability similar to ERTS, allowing for increased coverage obtained through the use of multiple active satellites in orbit, is presented. A detailed breakdown of the benefits achievable from an ERS system is given and a methodology for their estimation is established. The ECON case studies in agriculture, water use, and land cover are described along with the current ERTS system. The cost for a projected ERS system is given.

  15. Economic consequences for Medicaid of human immunodeficiency virus infection

    PubMed Central

    Baily, Mary Ann; Bilheimer, Linda; Wooldridge, Judith; Langwell, Kathryn; Greenberg, Warren

    1990-01-01

    Medicaid is currently a major source of financing for health care for those with acquired immunodeficiency syndrome (AIDS) and, to a lesser extent, for those with other manifestations of human immunodeficiency virus (HIV) infection. It is likely to become even more important in the future. This article focuses on the structure of Medicaid in the context of the HIV epidemic, covering epidemiological issues, eligibility, service coverage and use, and reimbursement. A simple methodology for estimating HIV-related Medicaid costs under alternative assumptions about the future is also explained. PMID:10113503

  16. Incorporating Variational Local Analysis and Prediction System (vLAPS) Analyses with Nudging Data Assimilation: Methodology and Initial Results

    DTIC Science & Technology

    2017-09-01

    Robert E Dumais Jr, Computational and Information Sciences Directorate, ARL; Yuanfu Xie, National Oceanic and Atmospheric Administration, Boulder, CO

  17. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
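
    For a series system, the Bayesian reliability calculation can be sketched as sampling component reliabilities from Beta posteriors and multiplying them. SRFYDO's actual model is richer (age and usage covariates, multiple system versions), so this is only the basic idea, with assumed priors and test counts.

```python
import numpy as np

rng = np.random.default_rng(9)
# (successes, trials) from component test data; Beta(1, 1) priors assumed.
component_tests = {"igniter": (48, 50), "valve": (29, 30), "sensor": (97, 100)}

n_draws = 20_000
system_samples = np.ones(n_draws)
for name, (s, n) in component_tests.items():
    samples = rng.beta(1 + s, 1 + n - s, size=n_draws)
    system_samples *= samples         # series system: all components must work

lo, med, hi = np.percentile(system_samples, [5, 50, 95])
print(f"system reliability: median {med:.3f}, 90% interval ({lo:.3f}, {hi:.3f})")
```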

  18. Prevalence of Borderline Personality Disorder in University Samples: Systematic Review, Meta-Analysis and Meta-Regression.

    PubMed

    Meaney, Rebecca; Hasking, Penelope; Reupert, Andrea

    2016-01-01

    To determine pooled prevalence of clinically significant traits or features of Borderline Personality Disorder among college students, and explore the influence of methodological factors on reported prevalence figures, and temporal trends. Electronic databases (1994-2014: AMED; Biological Abstracts; Embase; MEDLINE; PsycARTICLES; CINAHL Plus; Current Contents Connect; EBM Reviews; Google Scholar; Ovid Medline; Proquest central; PsychINFO; PubMed; Scopus; Taylor & Francis; Web of Science (1998-2014), and hand searches. Forty-three college-based studies reporting estimates of clinically significant BPD symptoms were identified (5.7% of original search). One author (RM) extracted clinically relevant BPD prevalence estimates, year of publication, demographic variables, and method from each publication or through correspondence with the authors. The prevalence of BPD in college samples ranged from 0.5% to 32.1%, with lifetime prevalence of 9.7% (95% CI, 7.7-12.0; p < .005). Methodological factors contributing considerable between-study heterogeneity in univariate meta-analyses were participant anonymity, incentive type, research focus and participant type. Study and sample characteristics related to between study heterogeneity were sample size, and self-identifying as Asian or "other" race. The prevalence of BPD varied over time: 7.8% (95% CI 4.2-13.9) between 1994 and 2000; 6.5% (95% CI 4.0-10.5) during 2001 to 2007; and 11.6% (95% CI 8.8-15.1) from 2008 to 2014, yet was not a source of heterogeneity (p = .09). BPD prevalence estimates are influenced by the methodological or study sample factors measured. There is a need for consistency in measurement across studies to increase reliability in establishing the scope and characteristics of those with BPD engaged in tertiary study.

  19. Near-field hazard assessment of March 11, 2011 Japan Tsunami sources inferred from different methods

    USGS Publications Warehouse

    Wei, Y.; Titov, V.V.; Newman, A.; Hayes, G.; Tang, L.; Chamberlin, C.

    2011-01-01

    The tsunami source is the origin of the subsequent transoceanic water waves, and thus the most critical component in modern tsunami forecast methodology. Although impractical to quantify directly, a tsunami source can be estimated by different methods based on a variety of measurements provided by deep-ocean tsunameters, seismometers, GPS, and other advanced instruments, some in real time, some in post-real-time. Here we assess these different sources of the devastating March 11, 2011 Japan tsunami by model-data comparison for generation, propagation, and inundation in the near field of Japan. This study provides a comparative analysis to further understand the advantages and shortcomings of different methods that may potentially be used in real-time warning and forecasting of tsunami hazards, especially in the near field. The model study also highlights the critical role of deep-ocean tsunami measurements for high-quality tsunami forecasts; their combination with land GPS measurements may lead to better understanding of both the earthquake mechanisms and the tsunami generation process. © 2011 MTS.

  20. Estimates of Fossil Fuel Carbon Dioxide Emissions From Mexico at Monthly Time Intervals

    NASA Astrophysics Data System (ADS)

    Losey, L. M.; Andres, R. J.

    2003-12-01

    Human consumption of fossil fuels has greatly contributed to the rise of carbon dioxide in the Earth's atmosphere. To better understand the global carbon cycle, it is important to identify the major sources of these fossil fuels. Mexico is among the top fifteen nations in the world for producing fossil fuel carbon dioxide emissions. Because of this, and because emissions from Mexico are a focus of the North American Carbon Program, Mexico was selected for this study. Mexican monthly inland sales volumes for January 1988-May 2003 were collected for natural gas and liquid fuels from the Energy Information Administration in the United States Department of Energy. These sales figures represent a major portion of the total fossil fuel consumption in Mexico. The fraction of a particular fossil fuel consumed in a given month was determined by dividing the monthly sales volumes by the annual sum of monthly sales volumes for a given year. This fraction was then multiplied by the annual carbon dioxide values reported by the Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL) to estimate the monthly carbon dioxide emissions from the respective fuels. The advantages of this methodology are: 1) monthly fluxes are consistent with the annual flux as determined by the widely accepted CDIAC values, and 2) its general application can be easily adapted to other nations for determining their sub-annual time scale emissions. The major disadvantage of this methodology is the proxy nature inherent to it. Only a fraction of the total emissions is used as an estimate in determining the seasonal cycle. The error inherent in this approach increases as the fraction of total emissions represented by the proxy decreases. These data are part of a long-term project between researchers at the University of North Dakota and ORNL which attempts to identify and understand the source(s) of seasonal variations of global, fossil-fuel derived, carbon dioxide emissions. Better knowledge of the temporal variation of the annual fossil fuel flux will lead to a better understanding of the global carbon cycle. This research will be archived at CDIAC for public access.
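
    The proxy disaggregation is straightforward arithmetic, shown below with assumed monthly sales volumes and an assumed annual emission total; the real inputs are the EIA sales data and the CDIAC annual estimates.

```python
import numpy as np

monthly_sales = np.array([80, 78, 85, 83, 90, 95, 97, 96, 88, 86, 82, 84],
                         dtype=float)            # e.g. liquid-fuel sales, one year
annual_emission_ktC = 98_000.0                   # CDIAC-style annual total (kt C)

# Monthly share of annual sales times the annual emission total.
monthly_fraction = monthly_sales / monthly_sales.sum()
monthly_emission = monthly_fraction * annual_emission_ktC

print(monthly_emission.round(0))
print("check: sum equals annual total:",
      np.isclose(monthly_emission.sum(), annual_emission_ktC))
```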

  1. General multiyear aggregation technology: Methodology and software documentation. [estimating seasonal crop acreage proportions

    NASA Technical Reports Server (NTRS)

    Baker, T. C. (Principal Investigator)

    1982-01-01

    A general methodology is presented for estimating a stratum's at-harvest crop acreage proportion for a given crop year (target year) from the crop's estimated acreage proportion for sample segments within the stratum. Sample segments from crop years other than the target year are usually required in conjunction with those from the target year. In addition, the stratum's (identifiable) crop acreage proportion may be estimated for times other than at-harvest in some situations. A by-product of the procedure is a methodology for estimating the change in the stratum's at-harvest crop acreage proportion from crop year to crop year. An implementation of the proposed procedure as a Statistical Analysis System (SAS) routine using the system's matrix language module, PROC MATRIX, is described and documented. Three examples illustrating use of the methodology and algorithm are provided.

  2. Detection and characterization of lightning-based sources using continuous wavelet transform: application to audio-magnetotellurics

    NASA Astrophysics Data System (ADS)

    Larnier, H.; Sailhac, P.; Chambodut, A.

    2018-01-01

    Atmospheric electromagnetic waves created by global lightning activity contain information about electrical processes of the inner and outer Earth. Large signal-to-noise ratio events are particularly interesting because they convey information about electromagnetic properties along their path. We introduce a new methodology to automatically detect and characterize lightning-based waves using a time-frequency decomposition obtained through the continuous wavelet transform. We focus specifically on three types of sources, namely atmospherics, slow tails and whistlers, which cover the frequency range 10 Hz to 10 kHz. Each wave has distinguishable characteristics in the time-frequency domain due to source shape and dispersion processes. Our methodology allows automatic detection of each type of event in the time-frequency decomposition thanks to its specific signature. Horizontal polarization attributes are also recovered in the time-frequency domain. This procedure is first applied to synthetic extremely-low-frequency time-series with different signal-to-noise ratios to test for robustness. We then apply it to real data: three stations of audio-magnetotelluric data acquired in Guadeloupe, an overseas French territory. Most of the analysed atmospherics and slow tails display linear polarization, whereas the analysed whistlers are elliptically polarized. The diversity of lightning activity is finally analysed in an audio-magnetotelluric data processing framework, as used in subsurface prospecting, through estimation of the impedance response functions. We show that audio-magnetotelluric processing results depend mainly on the frequency content of the electromagnetic waves observed in the processed time-series, with an emphasis on the difference between morning and afternoon acquisition. Our new methodology based on the time-frequency signature of lightning-induced electromagnetic waves allows automatic detection and characterization of events in audio-magnetotelluric time-series, providing the means to assess the quality of response functions obtained through processing.
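    As a rough illustration of time-frequency event detection, the Python sketch below computes a Morlet continuous wavelet transform with PyWavelets and flags samples whose broadband wavelet power exceeds a noise-based threshold. The synthetic pulse, sampling rate, scale range and threshold are assumptions for illustration only and do not reproduce the authors' event-specific signatures or polarization analysis.

        # Illustrative CWT-based detection of an impulsive event in a magnetic time-series.
        # The synthetic signal and the detection threshold are assumptions.
        import numpy as np
        import pywt

        fs = 4096.0                                   # sampling rate (Hz), assumed
        t = np.arange(0, 2.0, 1.0 / fs)
        signal = 0.1 * np.random.randn(t.size)        # background noise
        start = int(0.7 * fs)
        signal[start:start + 40] += np.hanning(40)    # synthetic "atmospheric"-like pulse

        # scales chosen so the Morlet pseudo-frequencies roughly span 10 Hz - 2 kHz
        freqs_target = np.geomspace(10.0, 2000.0, 60)
        scales = pywt.central_frequency('morl') * fs / freqs_target
        coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1.0 / fs)
        power = np.abs(coeffs) ** 2

        # flag times where the scale-averaged wavelet power exceeds a noise-based threshold
        band_power = power.mean(axis=0)
        threshold = band_power.mean() + 5.0 * band_power.std()
        event_times = t[band_power > threshold]
        if event_times.size:
            print(f"candidate event window: {event_times.min():.3f} - {event_times.max():.3f} s")
        else:
            print("no events above threshold")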

  3. The activity-based methodology to assess ship emissions - A review.

    PubMed

    Nunes, R A O; Alvim-Ferraz, M C M; Martins, F G; Sousa, S I V

    2017-12-01

    Several studies have tried to estimate atmospheric emissions originating in the maritime sector, concluding that it contributes to global anthropogenic emissions through pollutants that have a strong impact on human health and on climate change. This paper therefore reviews studies published since 2010 that used the activity-based methodology to estimate ship emissions, to provide a summary of the available input data. After exclusions, 26 articles were analysed and the main information was extracted and recorded, namely technical information about ships, ship activity and movement information, engines, fuels, load factors and emission factors. Most studies calculating in-port ship emissions concluded that the majority was emitted during hotelling, and most of the authors allocating emissions by ship type concluded that containerships were the main pollutant emitters. To obtain technical information about ships, the combined use of data from the Lloyd's Register of Shipping database with other sources such as port authorities' databases, engine manufacturers and ship owners seemed the best approach. The use of AIS data has been growing in recent years and seems to be the best method to report ship activities and movements. To predict ship power, the Hollenbach (1998) method, which estimates propelling power as a function of instantaneous speed based on total resistance, together with load-balancing schemes for multi-engine installations, seemed to be the best practice for more accurate ship emission estimations. For emission factor improvement, new on-board measurement campaigns or studies should be undertaken. Despite the effort made in recent years to obtain more accurate shipping emission inventories, more precise input data (technical information about ships, engines, load and emission factors) should be obtained to improve the methodology and to develop global and universally accepted emission inventories for an effective environmental policy plan. Copyright © 2017 Elsevier Ltd. All rights reserved.
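    The activity-based approach reviewed above generally computes emissions as installed power x load factor x operating time x emission factor, summed over engines and operating phases. The Python sketch below shows that bookkeeping for a single hypothetical port call; every number (powers, load factors, hours, emission factors) is illustrative, not taken from the review.

        # Simplified activity-based emission estimate for one ship call:
        # E = power x load factor x time x emission factor, per phase and engine.
        phases = {
            #  phase      : (main-engine load factor, aux-engine load factor, hours)
            "cruising"    : (0.80, 0.30, 6.0),
            "manoeuvring" : (0.20, 0.50, 1.0),
            "hotelling"   : (0.00, 0.40, 18.0),
        }
        main_power_kw, aux_power_kw = 20000.0, 2500.0      # installed power, assumed
        ef_nox_g_per_kwh = {"main": 14.0, "aux": 13.0}     # emission factors, assumed

        total_nox_kg = 0.0
        for phase, (lf_main, lf_aux, hours) in phases.items():
            energy_main = main_power_kw * lf_main * hours    # kWh
            energy_aux = aux_power_kw * lf_aux * hours       # kWh
            nox_kg = (energy_main * ef_nox_g_per_kwh["main"]
                      + energy_aux * ef_nox_g_per_kwh["aux"]) / 1000.0
            total_nox_kg += nox_kg
            print(f"{phase:11s}: {nox_kg:8.1f} kg NOx")
        print(f"total call  : {total_nox_kg:8.1f} kg NOx")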

  4. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián; Egüen, Marta; Polo, María José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
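    A stripped-down version of such an automatic threshold scan can be sketched in a few lines: for each candidate threshold, fit a Generalized Pareto Distribution to the excesses and compute the Anderson-Darling statistic against the fitted model. The selection rule below (smallest statistic) and the synthetic data are simplifications; the paper's bootstrap p-value computation and uncertainty quantification are not reproduced.

        # Simplified POT threshold scan using the Anderson-Darling statistic.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.gumbel(loc=10.0, scale=3.0, size=2000)      # synthetic extreme-value-like data

        def anderson_darling(excesses, shape, scale):
            """A2 statistic of the sorted excesses against the fitted GPD."""
            x = np.sort(excesses)
            n = x.size
            cdf = np.clip(stats.genpareto.cdf(x, shape, loc=0.0, scale=scale), 1e-12, 1 - 1e-12)
            i = np.arange(1, n + 1)
            return -n - np.mean((2 * i - 1) * (np.log(cdf) + np.log(1.0 - cdf[::-1])))

        candidates = np.quantile(data, np.linspace(0.80, 0.98, 19))
        results = []
        for u in candidates:
            excesses = data[data > u] - u
            shape, _, scale = stats.genpareto.fit(excesses, floc=0.0)
            results.append((anderson_darling(excesses, shape, scale), u))

        best_stat, best_threshold = min(results)
        print(f"selected threshold: {best_threshold:.2f} (A2 = {best_stat:.3f})")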

  5. Vulnerability Assessment of Groundwater Resources by Nutrient Source Apportionment to Individual Groundwater Wells: A Case Study in North Carolina

    NASA Astrophysics Data System (ADS)

    Ayub, R.; Obenour, D. R.; Keyworth, A. J.; Genereux, D. P.; Mahinthakumar, K.

    2016-12-01

    Groundwater contamination by nutrients (nitrogen and phosphorus) is a major concern in water table aquifers that underlie agricultural areas in the mid-Atlantic Coastal Plain of the United States. High nutrient concentrations leaching into shallow groundwater can lead to human health problems and eutrophication of receiving surface waters. Liquid manure from concentrated animal feeding operations (CAFOs) stored in open-air lagoons and applied to spray fields can be a significant source of nutrients to groundwater, along with septic waste. In this study, we developed a model-based methodology for source apportionment and vulnerability assessment using sparse groundwater quality sampling measurements for Duplin County, North Carolina (NC), obtained by the NC Department of Environmental Quality (NC DEQ). The model provides information relevant to management by estimating nutrient transport through the aquifer from different sources and addressing the uncertainty of nutrient contaminant propagation. First, the zones of influence (dependent on nutrient pathways) for individual groundwater monitoring wells were identified using a two-dimensional, vertically averaged groundwater flow and transport model incorporating geologic uncertainty for the surficial aquifer system. A multiple linear regression approach was then applied to estimate the contribution weights for different nutrient source types using the nutrient measurements from monitoring wells and the potential sources within each zone of influence. Using the source contribution weights and their uncertainty, a probabilistic vulnerability assessment of the study area due to nutrient contamination was performed. Knowledge of the contribution of different nutrient sources to contamination at receptor locations (e.g., private wells, municipal wells, stream beds, etc.) will be helpful in planning and implementing appropriate mitigation measures.
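    The regression step can be sketched as follows: the nutrient concentration measured at each well is regressed on the amount of each source type falling within that well's zone of influence. The design matrix and measurements below are hypothetical, and non-negative least squares is used here (rather than ordinary multiple linear regression) simply to keep the illustrative contribution weights physical.

        # Sketch of the source-apportionment regression; all numbers are hypothetical.
        import numpy as np
        from scipy.optimize import nnls

        # columns: CAFO spray-field area (ha), lagoon count, septic-system count
        X = np.array([[12.0, 1, 15],
                      [ 3.0, 0, 40],
                      [25.0, 2,  5],
                      [ 0.0, 0, 60],
                      [18.0, 1, 22]], dtype=float)
        y = np.array([9.5, 4.1, 14.8, 3.2, 11.0])   # nitrate-N at each well (mg/L), hypothetical

        weights, residual = nnls(X, y)               # non-negative contribution weights
        print("contribution weights per unit of each source type:", weights)

        # predicted source-wise contribution at a new receptor location
        receptor = np.array([8.0, 1, 30])
        print("predicted source-wise nitrate (mg/L):", receptor * weights)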

  6. Systematic Review: Impact of point sources on antibiotic-resistant bacteria in the natural environment.

    PubMed

    Bueno, I; Williams-Nguyen, J; Hwang, H; Sargeant, J M; Nault, A J; Singer, R S

    2018-02-01

    Point sources such as wastewater treatment plants and agricultural facilities may have a role in the dissemination of antibiotic-resistant bacteria (ARB) and antibiotic resistance genes (ARG). To analyse the evidence for increases in ARB in the natural environment associated with these point sources of ARB and ARG, we conducted a systematic review. We evaluated 5,247 records retrieved through database searches, including studies that ascertained ARG outcomes, ARB outcomes, or both. All studies were screened for relevance and for the suitability of their methodology to address our review question. A risk-of-bias assessment was conducted on the final pool of studies included in the review. This article summarizes the evidence only for those studies with ARB outcomes (n = 47). Thirty-five studies were at high (n = 11) or unclear (n = 24) risk of bias in the estimation of source effects due to lack of information and/or failure to control for confounders. Statistical analysis was used in ten studies, of which one assessed the effect of multiple sources using modelling approaches; none reported effect measures. Most studies reported higher ARB prevalence or concentration downstream of or near the source. However, this evidence was primarily descriptive, and it could not be concluded that there is a clear impact of point sources on increases in ARB in the environment. To quantify increases in ARB in the environment due to specific point sources, there is a need for studies that emphasize study design, control of biases and analytical tools that provide effect measure estimates. © 2017 Blackwell Verlag GmbH.

  7. Rule-Based Flight Software Cost Estimation

    NASA Technical Reports Server (NTRS)

    Stukes, Sherry A.; Spagnuolo, John N. Jr.

    2015-01-01

    This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), proposals, and the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW, using SEER-SEM as an illustration of these principles where appropriate. Since Source Lines of Code (SLOC) is a major cost driver, a discussion of various SLOC data sources for the preparation of the estimates is given, together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Approaches for obtaining consistency in code counting are presented, as well as factors used in reconciling SLOC estimates from different code counters. When sufficient data are obtained, a mapping from the SEER-SEM output into the JPL Work Breakdown Structure (WBS) is illustrated. For across-the-board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions with the SLOC, data description, brief mission description, and the most relevant SEER-SEM parameter values is given to encapsulate the data used and calculated in the estimates. The rule-based expert system described provides the user with inputs useful or sufficient to run generic cost estimation programs. The system is implemented in the C Language Integrated Production System (CLIPS) and is addressed at the end of this paper.

  8. Lung cancer risk from PAHs emitted from biomass combustion.

    PubMed

    Sarigiannis, Dimosthenis Α; Karakitsios, Spyros P; Zikopoulos, Dimitrios; Nikolaki, Spyridoula; Kermenidou, Marianthi

    2015-02-01

    This study deals with the assessment of the cancer risk attributable to PAH exposure resulting from the increased use of biomass for space heating in Greece in the winter of 2012-2013. Three fractions of particulates (PM1, PM2.5 and PM10) were measured at two sampling sites (urban/residential and traffic-influenced), followed by chemical analysis of 19 PAHs and levoglucosan (used as a biomass-burning tracer). PAH-induced lung cancer risk was estimated with a comprehensive methodology that incorporated human respiratory tract deposition modelling in order to estimate the toxic equivalent concentration (TEQ) at each target tissue. This allowed us to further differentiate internal exposure and risk by age group. Results showed that all PM fractions are higher in Greece during the cold months of the year, mainly due to biomass use for space heating. PAH and levoglucosan levels were highly correlated, indicating that particles emitted from biomass combustion are more toxic than PM emitted from other sources. The estimated lung cancer risk was non-negligible for residents close to the urban background monitoring site. Higher risk was estimated for infants and children, due to the higher bodyweight-normalized dose and the human respiratory tract (HRT) physiology; HRT structure and physiology in youngsters favor deposition of particles that are smaller and more toxic per unit mass. In all cases, the estimated risk (5.7E-07 and 1.4E-06 for the urban background site and 1.4E-07 to 5.0E-07 for the traffic site) was lower than the one estimated by the conventional methodology (2.8E-06 and 9.7E-07 for the urban background and the traffic site, respectively), which is based on the Inhalation Unit Risk; the latter assumes that all PAHs adsorbed on particles are taken up by humans. With the methodology proposed herein, the estimated risk differs by a factor of 5-7 between the two sampling sites (depending on the age group). These differences could not have been identified had we relied only on conventional risk assessment methods. Consequently, the actual cancer risk attributable to PAHs on PM emitted from biomass burning would have been significantly underestimated. Copyright © 2014 Elsevier Inc. All rights reserved.
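    For context, the conventional comparison point mentioned above (a toxic-equivalent concentration combined with an inhalation unit risk) can be sketched as below. The PAH concentrations and toxic equivalency factors are illustrative, the unit-risk value is a commonly cited WHO figure rather than one taken from this study, and the deposition-resolved, age-differentiated approach of the paper is not reproduced.

        # Conventional TEQ / unit-risk calculation for particle-bound PAHs (illustrative values).
        pah_conc_ng_m3 = {"benzo[a]pyrene": 1.2, "benz[a]anthracene": 0.8,
                          "benzo[b]fluoranthene": 1.0, "chrysene": 1.5}
        tef = {"benzo[a]pyrene": 1.0, "benz[a]anthracene": 0.1,
               "benzo[b]fluoranthene": 0.1, "chrysene": 0.01}

        teq_ng_m3 = sum(pah_conc_ng_m3[p] * tef[p] for p in pah_conc_ng_m3)

        # commonly cited WHO inhalation unit risk for BaP-equivalent exposure (assumed here)
        inhalation_unit_risk = 8.7e-5          # excess lifetime cancer risk per ng/m3
        lifetime_cancer_risk = teq_ng_m3 * inhalation_unit_risk
        print(f"TEQ = {teq_ng_m3:.2f} ng/m3, lifetime excess cancer risk = {lifetime_cancer_risk:.1e}")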

  9. New Methodology for Natural Gas Production Estimates

    EIA Publications

    2010-01-01

    A new methodology is implemented with the monthly natural gas production estimates from the EIA-914 survey this month. The estimates, to be released April 29, 2010, include revisions for all of 2009. The fundamental changes in the new process include the timeliness of the historical data used for estimation and the frequency of sample updates, both of which are improved.

  10. Revised estimates for direct-effect recreational jobs in the interior Columbia River basin.

    Treesearch

    Lisa K. Crone; Richard W. Haynes

    1999-01-01

    This paper reviews the methodology used to derive the original estimates for direct employment associated with recreation on Federal lands in the interior Columbia River basin (the basin), and details the changes in methodology and data used to derive new estimates. The new analysis resulted in an estimate of 77,655 direct-effect jobs associated with recreational...

  11. Petroleum Market Model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation Methodologies; Appendix G, Matrix Generator Documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule.

  12. [Analysis of the quality of data issued from Beirut's hospitals in order to measure short-term health effects of air pollution].

    PubMed

    Mrad Nakhlé, M; Farah, W; Ziade, N; Abboud, M; Gerard, J; Zaarour, R; Saliba, N; Dabar, G; Abdel Massih, T; Zoghbi, A; Coussa-Koniski, M-L; Annesi-Maesano, I

    2013-12-01

    The effects of air pollution on human health have been the subject of much public health research. Several techniques and methods of analysis have been developed. Thus, Beirut Air Pollution and Health Effects (BAPHE) was designed to develop a methodology adapted to the context of the city of Beirut in order to quantify the short-term health effects of air pollution. The quality of data collected from emergency units was analyzed in order to properly estimate hospitalizations via these units. This study examined the process of selecting and validating health and pollution indicators. The different sources of data from emergency units were not correlated. BAPHE was therefore reoriented towards collecting health data from the emergency registry of each hospital. A pilot study determined the appropriate health indicators for BAPHE and created a classification methodology for data collection. In Lebanon, several studies have attempted to indirectly assess the impact of air pollution on health. They had limitations and weaknesses and offered no recommendations regarding the sources and quality of data. The present analysis will be useful for BAPHE and for planning further studies. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  13. Multichannel myopic deconvolution in underwater acoustic channels via low-rank recovery

    PubMed Central

    Tian, Ning; Byun, Sung-Hoon; Sabra, Karim; Romberg, Justin

    2017-01-01

    This paper presents a technique for solving the multichannel blind deconvolution problem. The authors observe the convolution of a single (unknown) source with K different (unknown) channel responses; from these channel outputs, the authors want to estimate both the source and the channel responses. The authors show how this classical signal processing problem can be viewed as solving a system of bilinear equations, and in turn can be recast as recovering a rank-1 matrix from a set of linear observations. Results of prior studies in the area of low-rank matrix recovery have identified effective convex relaxations for problems of this type and efficient, scalable heuristic solvers that enable these techniques to work with thousands of unknown variables. The authors show how a priori information about the channels can be used to build a linear model for the channels, which in turn makes solving these systems of equations well-posed. This study demonstrates the robustness of this methodology to measurement noises and parametrization errors of the channel impulse responses with several stylized and shallow water acoustic channel simulations. The performance of this methodology is also verified experimentally using shipping noise recorded on short bottom-mounted vertical line arrays. PMID:28599565

  14. Measurement of Phased Array Point Spread Functions for Use with Beamforming

    NASA Technical Reports Server (NTRS)

    Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis

    2011-01-01

    Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region. The true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented seeks to investigate direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift-invariance, shear layers and model presence. Results show that experimental measurements trend with theory with regard to source offset. The source shows expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.

  15. Contact inspection of Si nanowire with SEM voltage contrast

    NASA Astrophysics Data System (ADS)

    Ohashi, Takeyoshi; Yamaguchi, Atsuko; Hasumi, Kazuhisa; Ikota, Masami; Lorusso, Gian; Horiguchi, Naoto

    2018-03-01

    A methodology to evaluate the electrical contact between a nanowire (NW) and the source/drain (SD) in NW FETs was investigated with SEM voltage contrast (VC). The electrical defects were robustly detected by VC, and the validity of the inspection result was verified by TEM physical observations. Moreover, estimation of the parasitic resistance and capacitance was achieved from quantitative analysis of VC images acquired with different electron beam (EB) scan conditions. A model considering the dynamics of EB-induced charging was proposed to calculate the VC. The resistance and capacitance can be determined by comparing the model-based VC with the experimentally obtained VC. Quantitative estimation of resistance and capacitance would be valuable not only for more accurate inspection, but also for identification of the defect point.

  16. Model documentation report: Residential sector demand module of the national energy modeling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code. This reference document provides a detailed description for energy analysts, other users, and the public. The NEMS Residential Sector Demand Module is currently used for mid-term forecasting purposes and energy policy analysis over the forecast horizon of 1993 through 2020. The model generates forecasts of energy demand for the residential sector by service, fuel, and Census Division. Policy impacts resulting from new technologies, market incentives, and regulatory changes can be estimated using the module. 26 refs., 6 figs., 5 tabs.

  17. Methods used for immunization coverage assessment in Canada, a Canadian Immunization Research Network (CIRN) study.

    PubMed

    Wilson, Sarah E; Quach, Susan; MacDonald, Shannon E; Naus, Monika; Deeks, Shelley L; Crowcroft, Natasha S; Mahmud, Salaheddin M; Tran, Dat; Kwong, Jeff; Tu, Karen; Gilbert, Nicolas L; Johnson, Caitlin; Desai, Shalini

    2017-08-03

    Accurate and complete immunization data are necessary to assess vaccine coverage, safety and effectiveness. Across Canada, different methods and data sources are used to assess vaccine coverage, but these have not been systematically described. Our primary objective was to examine and describe the methods used to determine immunization coverage in Canada. The secondary objective was to compare routine infant and childhood coverage estimates derived from the Canadian 2013 Childhood National Immunization Coverage Survey (cNICS) with estimates collected from provinces and territories (P/Ts). We collected information from key informants regarding their provincial, territorial or federal methods for assessing immunization coverage. We also collected P/T coverage estimates for select antigens and birth cohorts to determine absolute differences between these and estimates from cNICS. Twenty-six individuals across 16 public health organizations participated between April and August 2015. Coverage surveys are conducted regularly for toddlers in Quebec and in one health authority in British Columbia. Across P/Ts, different methodologies for measuring coverage are used (e.g., valid doses, grace periods). Most P/Ts, except Ontario, measure up-to-date (UTD) coverage and 4 P/Ts also assess on-time coverage. The degree of concordance between P/T and cNICS coverage estimates varied by jurisdiction, antigen and age group. In addition to differences in the data sources and processes used for coverage assessment, there are also differences between Canadian P/Ts in the methods used for calculating immunization coverage. Comparisons between P/T and cNICS estimates leave remaining questions about the proportion of children fully vaccinated in Canada.

  18. Methods used for immunization coverage assessment in Canada, a Canadian Immunization Research Network (CIRN) study

    PubMed Central

    Quach, Susan; MacDonald, Shannon E.; Naus, Monika; Deeks, Shelley L.; Crowcroft, Natasha S.; Mahmud, Salaheddin M.; Tran, Dat; Kwong, Jeff; Tu, Karen; Johnson, Caitlin; Desai, Shalini

    2017-01-01

    Accurate and complete immunization data are necessary to assess vaccine coverage, safety and effectiveness. Across Canada, different methods and data sources are used to assess vaccine coverage, but these have not been systematically described. Our primary objective was to examine and describe the methods used to determine immunization coverage in Canada. The secondary objective was to compare routine infant and childhood coverage estimates derived from the Canadian 2013 Childhood National Immunization Coverage Survey (cNICS) with estimates collected from provinces and territories (P/Ts). We collected information from key informants regarding their provincial, territorial or federal methods for assessing immunization coverage. We also collected P/T coverage estimates for select antigens and birth cohorts to determine absolute differences between these and estimates from cNICS. Twenty-six individuals across 16 public health organizations participated between April and August 2015. Coverage surveys are conducted regularly for toddlers in Quebec and in one health authority in British Columbia. Across P/Ts, different methodologies for measuring coverage are used (e.g., valid doses, grace periods). Most P/Ts, except Ontario, measure up-to-date (UTD) coverage and 4 P/Ts also assess on-time coverage. The degree of concordance between P/T and cNICS coverage estimates varied by jurisdiction, antigen and age group. In addition to differences in the data sources and processes used for coverage assessment, there are also differences between Canadian P/Ts in the methods used for calculating immunization coverage. Comparisons between P/T and cNICS estimates leave remaining questions about the proportion of children fully vaccinated in Canada. PMID:28708945

  19. Methodology for Airborne Quantification of NOx fluxes over Central London and Comparison to Emission Inventories

    NASA Astrophysics Data System (ADS)

    Vaughan, A. R.; Lee, J. D.; Lewis, A. C.; Purvis, R.; Carslaw, D.; Misztal, P. K.; Metzger, S.; Beevers, S.; Goldstein, A. H.; Hewitt, C. N.; Shaw, M.; Karl, T.; Davison, B.

    2015-12-01

    The emission of pollutants is a major problem in today's cities. Emission inventories are a key tool for air quality management, with the United Kingdom's National and London Atmospheric Emission Inventories (NAEI and LAEI) being good examples. Assessing the validity of such inventories is important. Here we report on the technical methodology of matching flux measurements of NOx over a city to inventory estimates. We used an eddy covariance technique to directly measure NOx fluxes from central London on an aircraft flown at low altitude. NOx mixing ratios were measured at 10 Hz time resolution using chemiluminescence (to measure NO) and highly specific photolytic conversion of NO2 to NO (to measure NO2). Wavelet transformation was used to calculate instantaneous fluxes along the flight track for each flight leg. The transformation allows both frequency and time information to be extracted from a signal; we quantify the covariance between the de-trended vertical wind and concentration to derive a flux. Comparison between the calculated fluxes and emission inventory data was achieved using a footprint model, which accounts for the contributing sources. Using both a backwards Lagrangian model and a cross-wind dispersion function, we find the footprint extent ranges from 5 to 11 km from the sample point. We then calculate a relative weighting matrix for each emission inventory within the calculated footprint. The inventories are split into their contributing source sectors, with each scaled using up-to-date emission factors, giving monthly, daily, and hourly scaled estimates which are then compared to the measurements.
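    At its core, the eddy covariance flux for a flight leg is the covariance of the de-trended vertical wind and the de-trended mixing ratio. The Python sketch below shows that leg-averaged quantity on synthetic 10 Hz data; the wavelet-based, time-resolved flux used in the study is not reproduced, and the correlation built into the synthetic data is purely illustrative.

        # Leg-averaged eddy-covariance flux from synthetic 10 Hz aircraft data.
        import numpy as np

        fs = 10.0                                   # 10 Hz data, as in the study
        n = int(600 * fs)                           # a 10-minute flight leg
        rng = np.random.default_rng(1)
        w = rng.normal(0.0, 0.5, n)                 # vertical wind (m/s), synthetic
        nox = 20.0 + 4.0 * w + rng.normal(0.0, 2.0, n)   # NOx (ppb), correlated with w

        idx = np.arange(n)
        w_prime = w - np.polyval(np.polyfit(idx, w, 1), idx)        # linear de-trend
        c_prime = nox - np.polyval(np.polyfit(idx, nox, 1), idx)

        flux_kinematic = np.mean(w_prime * c_prime)                 # ppb m s-1
        print(f"leg-averaged kinematic NOx flux: {flux_kinematic:.2f} ppb m/s")
        # conversion to a mass flux (e.g. mg m-2 h-1) would use air density and NOx molar mass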

  20. Investigating Causality Between Interacting Brain Areas with Multivariate Autoregressive Models of MEG Sensor Data

    PubMed Central

    Michalareas, George; Schoffelen, Jan-Mathijs; Paterson, Gavin; Gross, Joachim

    2013-01-01

    In this work, we investigate the feasibility of estimating causal interactions between brain regions based on multivariate autoregressive models (MAR models) fitted to magnetoencephalographic (MEG) sensor measurements. We first demonstrate the theoretical feasibility of estimating source-level causal interactions after projection of the sensor-level model coefficients onto the locations of the neural sources. Next, we show with simulated MEG data that causality, as measured by partial directed coherence (PDC), can be correctly reconstructed if the locations of the interacting brain areas are known. We further demonstrate that, if a very large number of brain voxels is considered as potential activation sources, PDC becomes a less accurate measure for reconstructing causal interactions; in that case the MAR model coefficients alone contain meaningful causality information. The proposed method overcomes the problems of model non-robustness and large computation times encountered in causality analysis by existing methods, which first project MEG sensor time-series onto a large number of brain locations and then build the MAR model on this large number of source-level time-series. Instead, through this work, we demonstrate that by building the MAR model on the sensor level and then projecting only the MAR coefficients into source space, the true causal pathways are recovered even when a very large number of locations are considered as sources. The main contribution of this work is that with this methodology entire brain causality maps can be efficiently derived without any a priori selection of regions of interest. Hum Brain Mapp, 2013. © 2012 Wiley Periodicals, Inc. PMID:22328419
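    A toy sketch of the core idea is given below: fit a sensor-level VAR model by least squares, then project the coefficient matrices through the (pseudo-inverse of the) leadfield to obtain source-level coupling. The leadfield, data, model order and the single 0 -> 1 interaction are all synthetic assumptions, and the PDC computation itself is omitted.

        # Toy sensor-level MAR fit followed by projection of the coefficients into source space.
        import numpy as np

        rng = np.random.default_rng(2)
        n_sensors, n_sources, order, n_times = 30, 4, 3, 5000
        L = rng.normal(size=(n_sensors, n_sources))          # synthetic leadfield

        # synthetic source-level VAR(3) with an extra 0 -> 1 coupling, projected to sensors
        A_true = [0.1 * rng.normal(size=(n_sources, n_sources)) for _ in range(order)]
        A_true[0][1, 0] += 0.5
        src = np.zeros((n_times, n_sources))
        for t in range(order, n_times):
            src[t] = sum(A_true[k] @ src[t - k - 1] for k in range(order)) + rng.normal(size=n_sources)
        meg = src @ L.T + 0.1 * rng.normal(size=(n_times, n_sensors))

        # least-squares fit of the sensor-level MAR model
        Y = meg[order:]
        Z = np.hstack([meg[order - k - 1: n_times - k - 1] for k in range(order)])
        B = np.linalg.lstsq(Z, Y, rcond=None)[0]
        B_sensor = [B[k * n_sensors:(k + 1) * n_sensors].T for k in range(order)]

        # project each lag's coefficient matrix onto the source locations
        L_pinv = np.linalg.pinv(L)
        A_source = [L_pinv @ Bk @ L for Bk in B_sensor]
        print("true lag-1 coupling (0 -> 1):", round(A_true[0][1, 0], 2),
              " recovered:", round(A_source[0][1, 0], 2))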

  1. Production of NOx by Lightning and its Effects on Atmospheric Chemistry

    NASA Technical Reports Server (NTRS)

    Pickering, Kenneth E.

    2009-01-01

    Production of NO(x) by lightning remains the NO(x) source with the greatest uncertainty. Current estimates of the global source strength range over a factor of four (from 2 to 8 TgN/year). Ongoing efforts to reduce this uncertainty through field programs, cloud-resolved modeling, global modeling, and satellite data analysis will be described in this seminar. Representation of the lightning source in global or regional chemical transport models requires three types of information: the distribution of lightning flashes as a function of time and space, the production of NO(x) per flash, and the effective vertical distribution of the lightning-injected NO(x). Methods of specifying these items in a model will be discussed. For example, the current method of specifying flash rates in NASA's Global Modeling Initiative (GMI) chemical transport model will be discussed, as well as work underway in developing algorithms for use in the regional models CMAQ and WRF-Chem. A number of methods have been employed to estimate either production per lightning flash or the production per unit flash length. Such estimates derived from cloud-resolved chemistry simulations and from satellite NO2 retrievals will be presented as well as the methodologies employed. Cloud-resolved model output has also been used in developing vertical profiles of lightning NO(x) for use in global models. Effects of lightning NO(x) on O3 and HO(x) distributions will be illustrated regionally and globally.

  2. Overdiagnosis across medical disciplines: a scoping review.

    PubMed

    Jenniskens, Kevin; de Groot, Joris A H; Reitsma, Johannes B; Moons, Karel G M; Hooft, Lotty; Naaktgeboren, Christiana A

    2017-12-27

    To provide insight into how and in what clinical fields overdiagnosis is studied and give directions for further applied and methodological research. Scoping review. Medline up to August 2017. All English studies on humans, in which overdiagnosis was discussed as a dominant theme. Studies were assessed on clinical field, study aim (ie, methodological or non-methodological), article type (eg, primary study, review), the type and role of diagnostic test(s) studied and the context in which these studies discussed overdiagnosis. From 4896 studies, 1851 were included for analysis. Half of all studies on overdiagnosis were performed in the field of oncology (50%). Other prevalent clinical fields included mental disorders, infectious diseases and cardiovascular diseases accounting for 9%, 8% and 6% of studies, respectively. Overdiagnosis was addressed from a methodological perspective in 20% of studies. Primary studies were the most common article type (58%). The type of diagnostic tests most commonly studied were imaging tests (32%), although these were predominantly seen in oncology and cardiovascular disease (84%). Diagnostic tests were studied in a screening setting in 43% of all studies, but as high as 75% of all oncological studies. The context in which studies addressed overdiagnosis related most frequently to its estimation, accounting for 53%. Methodology on overdiagnosis estimation and definition provided a source for extensive discussion. Other contexts of discussion included definition of disease, overdiagnosis communication, trends in increasing disease prevalence, drivers and consequences of overdiagnosis, incidental findings and genomics. Overdiagnosis is discussed across virtually all clinical fields and in different contexts. The variability in characteristics between studies and lack of consensus on overdiagnosis definition indicate the need for a uniform typology to improve coherence and comparability of studies on overdiagnosis. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Techniques used for the screening of hemoglobin levels in blood donors: current insights and future directions.

    PubMed

    Chaudhary, Rajendra; Dubey, Anju; Sonker, Atul

    2017-01-01

    Blood donor hemoglobin (Hb) estimation is an important donation test that is performed prior to blood donation. It serves the dual purpose of protecting the donors' health against anemia and ensuring good quality of blood components, which has an implication on recipients' health. Diverse cutoff criteria have been defined world over depending on population characteristics; however, no testing methodology and sample requirement have been specified for Hb screening. Besides the technique, there are several physiological and methodological factors that affect accuracy and reliability of Hb estimation. These include the anatomical source of blood sample, posture of the donor, timing of sample and several other biological factors. Qualitative copper sulfate gravimetric method has been the archaic time-tested method that is still used in resource-constrained settings. Portable hemoglobinometers are modern quantitative devices that have been further modified to reagent-free cuvettes. Furthermore, noninvasive spectrophotometry was introduced, mitigating pain to blood donor and eliminating risk of infection. Notwithstanding a tremendous evolution in terms of ease of operation, accuracy, mobility, rapidity and cost, a component of inherent variability persists, which may partly be attributed to pre-analytical variables. Hence, blood centers should pay due attention to validation of test methodology, competency of operating staff and regular proficiency testing of the outputs. In this article, we have reviewed various regulatory guidelines, described the variables that affect the measurements and compared the validated technologies for Hb screening of blood donors along with enumeration of their merits and limitations.

  4. Quantitative Assessment of Agricultural Runoff and Soil Erosion Using Mathematical Modeling: Applications in the Mediterranean Region

    NASA Astrophysics Data System (ADS)

    Arhonditsis, G.; Giourga, C.; Loumou, A.; Koulouri, M.

    2002-09-01

    Three mathematical models, the runoff curve number equation, the universal soil loss equation, and the mass response functions, were evaluated for predicting nonpoint source nutrient loading from agricultural watersheds of the Mediterranean region. These methodologies were applied to a catchment, the gulf of Gera Basin, that is a typical terrestrial ecosystem of the islands of the Aegean archipelago. The calibration of the model parameters was based on data from experimental plots from which edge-of-field losses of sediment, water runoff, and nutrients were measured. Special emphasis was given to the transport of dissolved and solid-phase nutrients from their sources in the farmers' fields to the outlet of the watershed in order to estimate respective attenuation rates. It was found that nonpoint nutrient loading due to surface losses was high during winter, the contribution being between 50% and 80% of the total annual nutrient losses from the terrestrial ecosystem. The good fit between simulated and experimental data supports the view that these modeling procedures should be considered as reliable and effective methodological tools in Mediterranean areas for evaluating potential control measures, such as management practices for soil and water conservation and changes in land uses, aimed at diminishing soil loss and nutrient delivery to surface waters. Furthermore, the modifications of the general mathematical formulations and the experimental values of the model parameters provided by the study can be used in further application of these methodologies in watersheds with similar characteristics.
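    Two of the models named above have compact, widely used closed forms; the Python sketch below shows them with assumed parameter values chosen only for illustration (they are not the calibrated values from the Gulf of Gera study).

        # SCS runoff curve number equation and Universal Soil Loss Equation (USLE), illustrative values.
        def scs_runoff_mm(rain_mm, curve_number, initial_abstraction_ratio=0.2):
            """Direct runoff depth Q from event rainfall P using the curve number method."""
            s = 25400.0 / curve_number - 254.0          # potential maximum retention (mm)
            ia = initial_abstraction_ratio * s          # initial abstraction (mm)
            if rain_mm <= ia:
                return 0.0
            return (rain_mm - ia) ** 2 / (rain_mm - ia + s)

        def usle_soil_loss(R, K, LS, C, P):
            """Mean annual soil loss A = R * K * LS * C * P (units depend on the factor system)."""
            return R * K * LS * C * P

        print("runoff for a 60 mm storm, CN=78:", round(scs_runoff_mm(60.0, 78.0), 1), "mm")
        print("annual soil loss:", round(usle_soil_loss(R=900.0, K=0.03, LS=1.4, C=0.25, P=1.0), 2), "t/ha/yr")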

  5. Seismic Characterization of EGS Reservoirs

    NASA Astrophysics Data System (ADS)

    Templeton, D. C.; Pyle, M. L.; Matzel, E.; Myers, S.; Johannesson, G.

    2014-12-01

    To aid in the seismic characterization of Engineered Geothermal Systems (EGS), we enhance the traditional microearthquake detection and location methodologies at two EGS systems. We apply the Matched Field Processing (MFP) seismic imaging technique to detect new seismic events using known discrete microearthquake sources. Events identified using MFP are typically smaller-magnitude events or events that occur within the coda of a larger event. Additionally, we apply a Bayesian multiple-event seismic location algorithm, called MicroBayesLoc, to estimate the 95% probability ellipsoids for events with high signal-to-noise ratios (SNR). Such probability ellipsoid information can provide evidence for determining whether a seismic lineation is real or simply within the anticipated error range. We apply this methodology to the Basel EGS data set and compare it to another EGS dataset. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  6. Evaluation of massively parallel sequencing for forensic DNA methylation profiling.

    PubMed

    Richards, Rebecca; Patel, Jayshree; Stevenson, Kate; Harbison, SallyAnn

    2018-05-11

    Epigenetics is an emerging area of interest in forensic science. DNA methylation, a type of epigenetic modification, can be applied to chronological age estimation, identical twin differentiation and body fluid identification. However, there is not yet an agreed, established methodology for targeted detection and analysis of DNA methylation markers in forensic research. Recently a massively parallel sequencing-based approach has been suggested. The use of massively parallel sequencing is well established in clinical epigenetics and is emerging as a new technology in the forensic field. This review investigates the potential benefits, limitations and considerations of this technique for the analysis of DNA methylation in a forensic context. The importance of a robust protocol, regardless of the methodology used, that minimises potential sources of bias is highlighted. This article is protected by copyright. All rights reserved.

  7. An Overview of R in Health Decision Sciences.

    PubMed

    Jalal, Hawre; Pechlivanoglou, Petros; Krijkamp, Eline; Alarid-Escudero, Fernando; Enns, Eva; Hunink, M G Myriam

    2017-10-01

    As the complexity of health decision science applications increases, high-level programming languages are increasingly adopted for statistical analyses and numerical computations. These programming languages facilitate sophisticated modeling, model documentation, and analysis reproducibility. Among the high-level programming languages, the statistical programming framework R is gaining increased recognition. R is freely available, cross-platform compatible, and open source. A large community of users who have generated an extensive collection of well-documented packages and functions supports it. These functions facilitate applications of health decision science methodology as well as the visualization and communication of results. Although R's popularity is increasing among health decision scientists, methodological extensions of R in the field of decision analysis remain isolated. The purpose of this article is to provide an overview of existing R functionality that is applicable to the various stages of decision analysis, including model design, input parameter estimation, and analysis of model outputs.

  8. A Review of Issues Related to Data Acquisition and Analysis in EEG/MEG Studies.

    PubMed

    Puce, Aina; Hämäläinen, Matti S

    2017-05-31

    Electroencephalography (EEG) and magnetoencephalography (MEG) are non-invasive electrophysiological methods, which record electric potentials and magnetic fields due to electric currents in synchronously-active neurons. With MEG being more sensitive to neural activity from tangential currents and EEG being able to detect both radial and tangential sources, the two methods are complementary. Over the years, neurophysiological studies have changed considerably: high-density recordings are becoming de rigueur; there is interest in both spontaneous and evoked activity; and sophisticated artifact detection and removal methods are available. Improved head models for source estimation have also increased the precision of the current estimates, particularly for EEG and combined EEG/MEG. Because of their complementarity, more investigators are beginning to perform simultaneous EEG/MEG studies to gain more complete information about neural activity. Given the increase in methodological complexity in EEG/MEG, it is important to gather data that are of high quality and that are as artifact free as possible. Here, we discuss some issues in data acquisition and analysis of EEG and MEG data. Practical considerations for different types of EEG and MEG studies are also discussed.

  9. Elliptic Cylinder Airborne Sampling and Geostatistical Mass Balance Approach for Quantifying Local Greenhouse Gas Emissions.

    PubMed

    Tadić, Jovan M; Michalak, Anna M; Iraci, Laura; Ilić, Velibor; Biraud, Sébastien C; Feldman, Daniel R; Bui, Thaopaul; Johnson, Matthew S; Loewenstein, Max; Jeong, Seongeun; Fischer, Marc L; Yates, Emma L; Ryoo, Ju-Mee

    2017-09-05

    In this study, we explore observational, experimental, methodological, and practical aspects of the flux quantification of greenhouse gases from local point sources by using in situ airborne observations, and suggest a series of conceptual changes to improve flux estimates. We address the major sources of uncertainty reported in previous studies by modifying (1) the shape of the typical flight path, (2) the modeling of covariance and anisotropy, and (3) the type of interpolation tools used. We show that a cylindrical flight profile offers considerable advantages compared to traditional profiles collected as curtains, although this new approach brings with it the need for a more comprehensive subsequent analysis. The proposed flight pattern design does not require prior knowledge of wind direction and allows for the derivation of an ad hoc empirical correction factor to partially alleviate errors resulting from interpolation and measurement inaccuracies. The modified approach is applied to a use case quantifying CH4 emission from an oil field south of San Ardo, CA, and compared to a bottom-up CH4 emission estimate.

  10. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
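    If each algorithm reports a (log) shaking estimate with its own uncertainty at the user's site, a simple Gaussian product gives a precision-weighted combined prediction. The Python sketch below shows that elementary combination as a simplified stand-in for the Bayesian framework described above; the algorithm names, means and sigmas are invented.

        # Precision-weighted combination of independent shaking predictions at one site.
        import math

        # predictions of log10(PGA) at the user's site: (mean, sigma), illustrative
        predictions = [(-1.20, 0.30),   # point-source algorithm
                       (-1.05, 0.25),   # finite-fault algorithm
                       (-1.35, 0.40)]   # direct ground-motion algorithm

        weights = [1.0 / s ** 2 for _, s in predictions]
        combined_mean = sum(w * m for (m, _), w in zip(predictions, weights)) / sum(weights)
        combined_sigma = math.sqrt(1.0 / sum(weights))
        print(f"combined log10(PGA) = {combined_mean:.2f} +/- {combined_sigma:.2f}")

        # a user-specific alert could then compare the predicted probability of exceeding a
        # shaking threshold against that user's false-alarm tolerance and reaction time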

  11. Regression to fuzziness method for estimation of remaining useful life in power plant components

    NASA Astrophysics Data System (ADS)

    Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.

    2014-10-01

    Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value ranges. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then used synergistically with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data are not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify its effectiveness, the methodology was benchmarked against a simple data-based linear regression model used for predictions, which was shown to perform equally well as, or worse than, the presented methodology.
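    A rough illustration of the idea follows: expert knowledge about a degradation parameter is encoded as a fuzzy membership function, and a linear trend fitted to the measurements is extrapolated to the expert-defined failure level to give the RUL. All parameter ranges, membership breakpoints and measurements below are invented, and the sketch is a simplification of the paper's regression-to-fuzziness method rather than a faithful reproduction of it.

        # Toy fuzzy-set + linear-regression RUL estimate; all numbers are invented.
        import numpy as np

        def membership_high_degradation(x, low=0.6, high=0.9):
            """Shoulder membership for 'high degradation' on a normalized 0-1 scale."""
            return float(np.clip((x - low) / (high - low), 0.0, 1.0))

        # monthly measurements of a normalized degradation index for a turbine component
        t = np.arange(0, 24)                       # months in service
        d = 0.02 * t + 0.15 + 0.01 * np.random.default_rng(3).normal(size=t.size)

        slope, intercept = np.polyfit(t, d, 1)     # linear regression of the degradation trend

        # expert-defined failure point: level where 'high degradation' membership reaches 1.0
        failure_level = 0.9
        t_fail = (failure_level - intercept) / slope
        rul_months = t_fail - t[-1]
        print(f"current membership in 'high degradation': {membership_high_degradation(d[-1]):.2f}")
        print(f"estimated remaining useful life: {rul_months:.1f} months")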

  12. Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach

    ERIC Educational Resources Information Center

    Stevenson, Glenn A.

    2012-01-01

    For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…

  13. Uncertainty quantification in (α,n) neutron source calculations for an oxide matrix

    DOE PAGES

    Pigni, M. T.; Croft, S.; Gauld, I. C.

    2016-04-25

    Here we present a methodology to propagate nuclear data covariance information in neutron source calculations from (α,n) reactions. The approach is applied to estimate the uncertainty in the neutron generation rates for uranium oxide fuel types due to uncertainties in 1) 17,18O(α,n) reaction cross sections and 2) uranium and oxygen stopping power cross sections. The procedure to generate reaction cross section covariance information is based on the Bayesian fitting method implemented in the R-matrix SAMMY code. The evaluation methodology uses the Reich-Moore approximation to fit the 17,18O(α,n) reaction cross sections in order to derive a set of resonance parameters and a related covariance matrix that is then used to calculate the energy-dependent cross section covariance matrix. The stopping power cross sections and related covariance information for uranium and oxygen were obtained by fitting stopping power data in the energy range of 1 keV up to 12 MeV. Cross section perturbation factors based on the covariance information relative to the evaluated 17,18O(α,n) reaction cross sections, as well as uranium and oxygen stopping power cross sections, were used to generate a varied set of nuclear data libraries used in SOURCES4C and ORIGEN for inventory and source term calculations. The set of randomly perturbed (α,n) source responses provides the mean values and standard deviations of the calculated responses, reflecting the uncertainties in the nuclear data used in the calculations. Lastly, the results and related uncertainties are compared with experimental thick-target (α,n) yields for uranium oxide.
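    The perturbation idea can be illustrated generically: sample correlated cross-section perturbation factors from a covariance matrix, apply them to a nominal (α,n) cross section, and collect the spread of a simple thick-target-yield-like response. In the Python sketch below the cross sections, stopping powers and covariance matrix are invented, and the actual SAMMY/SOURCES4C/ORIGEN workflow is not reproduced.

        # Generic Monte Carlo propagation of cross-section covariance to an (alpha,n) yield.
        import numpy as np

        rng = np.random.default_rng(4)
        n_groups, n_samples = 20, 500
        energy = np.linspace(1.0, 8.0, n_groups)              # alpha energy grid (MeV)
        de = energy[1] - energy[0]
        sigma_nominal = 0.05 * energy ** 1.5                  # (alpha,n) cross section (barn), invented
        stopping_power = 1.5 / np.sqrt(energy)                # stopping power, invented

        # 5 % relative uncertainty, strongly correlated between neighbouring energy groups
        rel_cov = 0.05 ** 2 * np.exp(-np.abs(np.subtract.outer(energy, energy)) / 2.0)

        yields = []
        for _ in range(n_samples):
            factors = 1.0 + rng.multivariate_normal(np.zeros(n_groups), rel_cov)
            sigma = sigma_nominal * factors
            # thick-target yield ~ integral of sigma(E) / S(E) dE (constant factors dropped)
            yields.append(np.sum(sigma / stopping_power) * de)

        yields = np.array(yields)
        print(f"relative uncertainty on the (alpha,n) yield: {100 * yields.std() / yields.mean():.1f} %")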

  14. Investigating microearthquake finite source attributes with IRIS Community Wavefield Demonstration Experiment in Oklahoma

    NASA Astrophysics Data System (ADS)

    Fan, Wenyuan; McGuire, Jeffrey J.

    2018-05-01

    An earthquake rupture process can be kinematically described by rupture velocity, duration and spatial extent. These key kinematic source parameters provide important constraints on earthquake physics and rupture dynamics. In particular, core questions in earthquake science can be addressed once these properties of small earthquakes are well resolved. However, these parameters of small earthquakes are poorly understood, often limited by available datasets and methodologies. The IRIS Community Wavefield Experiment in Oklahoma deployed ~350 three-component nodal stations within 40 km2 for a month, offering an unprecedented opportunity to test new methodologies for resolving small-earthquake finite source properties at high resolution. In this study, we demonstrate the power of the nodal dataset to resolve the variations in the seismic wavefield over the focal sphere due to the finite source attributes of a M2 earthquake within the array. The dense coverage allows us to tightly constrain rupture area using the second moment method, even for such a small earthquake. The M2 earthquake was a strike-slip event and propagated unilaterally towards the surface at 90 per cent of the local S-wave speed (2.93 km s-1). The earthquake lasted ~0.019 s and ruptured Lc ~70 m by Wc ~45 m. With the resolved rupture area, the stress drop of the earthquake is estimated as 7.3 MPa for Mw 2.3. We demonstrate that the maximum and minimum bounds on rupture area are within a factor of two, much lower than typical stress drop uncertainty, despite a suboptimal station distribution. The rupture properties suggest that there is little difference between the M2 Oklahoma earthquake and typical large earthquakes. The new three-component nodal systems have great potential for improving the resolution of studies of earthquake source properties.

  15. The effects of survey question wording on rape estimates: evidence from a quasi-experimental design.

    PubMed

    Fisher, Bonnie S

    2009-02-01

    The measurement of rape is among the leading methodological issues in the violence against women field. Methodological discussion continues to focus on decreasing measurement errors and improving the accuracy of rape estimates. The current study used a quasi-experimental design to examine the effect of survey question wording on estimates of completed and attempted rape and verbal threats of rape. Specifically, the study statistically compares self-reported rape estimates from two nationally representative studies of college women's sexual victimization experiences, the National College Women Sexual Victimization study and the National Violence Against College Women study. Results show significant differences between the two sets of rape estimates, with National Violence Against College Women study rape estimates ranging from 4.4% to 10.4% lower than the National College Women Sexual Victimization study rape estimates. Implications for future methodological research are discussed.

  16. Methodology for estimating helicopter performance and weights using limited data

    NASA Technical Reports Server (NTRS)

    Baserga, Claudio; Ingalls, Charles; Lee, Henry; Peyran, Richard

    1990-01-01

    Methodology is developed and described for estimating the flight performance and weights of a helicopter for which limited data are available. The methodology is based on assumptions which couple knowledge of the technology of the helicopter under study with detailed data from well documented helicopters thought to be of similar technology. The approach, analysis assumptions, technology modeling, and the use of reference helicopter data are discussed. Application of the methodology is illustrated with an investigation of the Agusta A129 Mangusta.

  17. Top-Down Versus Bottom-Up Estimative of CO2 and CO Vehicular Emission Contribution from the Megacity of São Paulo, Brazil

    NASA Astrophysics Data System (ADS)

    Andrade, M.; Nogueira, T.; Martínez, P. J.; Fornaro, A.; Miranda, R. M.; Ynoue, R.

    2013-12-01

    The Metropolitan Area of São Paulo (MASP) comprises 39 municipalities with a population of 20 million inhabitants in an area of 8,511 km². The main source of air pollutants is vehicular emission: exhaust and evaporative fuel emissions. The climate is influenced by the sea breeze from the Southeast direction - MASP is approximately 40 km from the sea - and by the valley-mountain circulation due to the presence of the Serra do Mar Mountains in the Northwest part of the city. This wind circulation is modified by the heat island caused by the high degree of urbanization. The MASP fleet is composed of approximately 7 million passenger cars and freight vehicles, with 85% light duty vehicles (LDVs), 3% heavy-duty diesel vehicles (HDVs, diesel + 5% bio-diesel) and 12% motorcycles. About 55% of LDVs burn a mixture of 78% gasoline and 22% ethanol (gasohol), 4% use hydrous ethanol (95% ethanol and 5% water), 38% are flex-fuel vehicles capable of burning both gasohol and hydrous ethanol, and 2% use diesel (CETESB, 2013a). The choice between gasohol and hydrous ethanol in flex-fuel vehicles is determined by the price of the fuel. Vehicle traffic is the main source of regulated pollutants: carbon monoxide (CO), nitrogen oxides (NOx) and hydrocarbons (HC); it contributes to the formation of inhalable particulate matter (PM10) and is the principal source of carbon dioxide (CO2). Mobile sources account for 97% of all CO emissions, 85% of HC, 82% of NOx, 36% of sulfur dioxide (SO2), and 36% of all PM10 emissions (CETESB, 2013b). The official inventory is calculated with the bottom-up methodology: emission factors measured on dynamometers, the estimated average distance each type of vehicle drives per day, and the total number of vehicles in circulation. The values include a deterioration factor to account for vehicle aging. The top-down methodology was based on measurements performed in experiments on traffic roads and in tunnels. The data presented here compare tunnel measurements performed in 2004 and 2011. The official data estimate an emission of 15327 million tons per year of CO2eq (60% by LDVs, 38% HDVs and 2% motorcycles) and 128 million tons per year of CO. The top-down estimate based on tunnel measurements resulted in values approximately 5 times higher, with the difference mostly attributable to the estimate of the diesel emission factor. The uncertainties are related to the deterioration of the emission factor with time and to the driving pattern. The diurnal variation of the atmospheric CO2 concentration is characterized by the mobile source emission pattern. CETESB. Relatório Anual de Qualidade do Ar no Estado de São Paulo 2012. Companhia de Tecnologia de Saneamento Ambiental, São Paulo, Brazil, 2013a. CETESB. Plano de Controle de Poluição Veicular do Estado de São Paulo 2011/2013. Companhia de Tecnologia de Saneamento Ambiental, São Paulo, Brazil, 2013b.

  18. A systematic review of waterborne disease burden methodologies from developed countries.

    PubMed

    Murphy, H M; Pintar, K D M; McBean, E A; Thomas, M K

    2014-12-01

    The true incidence of endemic acute gastrointestinal illness (AGI) attributable to drinking water in Canada is unknown. Using a systematic review framework, the literature was evaluated to identify methods used to attribute AGI to drinking water. Several strategies have been suggested or applied to quantify AGI attributable to drinking water at a national level. These vary from simple point estimates, to quantitative microbial risk assessment, to Monte Carlo simulations, which rely on assumptions and epidemiological data from the literature. Using two methods proposed by researchers in the USA, this paper compares the current approaches and key assumptions. Knowledge gaps are identified to inform future waterborne disease attribution estimates. To improve future estimates, there is a need for robust epidemiological studies that quantify the health risks associated with small, private water systems, groundwater systems and the influence of distribution system intrusions on risk. Quantification of the occurrence of enteric pathogens in water supplies, particularly for groundwater, is needed. In addition, there are unanswered questions regarding the susceptibility of vulnerable sub-populations to these pathogens and the influence of extreme weather events (precipitation) on AGI-related health risks. National centralized data to quantify the proportions of the population served by different water sources, by treatment level, source water quality, and the condition of the distribution system infrastructure, are needed.

  19. Direct measurements show decreasing methane emissions from natural gas local distribution systems in the United States.

    PubMed

    Lamb, Brian K; Edburg, Steven L; Ferrara, Thomas W; Howard, Touché; Harrison, Matthew R; Kolb, Charles E; Townsend-Small, Amy; Dyck, Wesley; Possolo, Antonio; Whetstone, James R

    2015-04-21

    Fugitive losses from natural gas distribution systems are a significant source of anthropogenic methane. Here, we report on a national sampling program to measure methane emissions from 13 urban distribution systems across the U.S. Emission factors were derived from direct measurements at 230 underground pipeline leaks and 229 metering and regulating facilities using stratified random sampling. When these new emission factors are combined with estimates for customer meters, maintenance, and upsets, and current pipeline miles and numbers of facilities, the total estimate is 393 Gg/yr with a 95% upper confidence limit of 854 Gg/yr (0.10% to 0.22% of the methane delivered nationwide). This fraction includes emissions from city gates to the customer meter, but does not include other urban sources or those downstream of customer meters. The upper confidence limit accounts for the skewed distribution of measurements, where a few large emitters accounted for most of the emissions. This emission estimate is 36% to 70% less than the 2011 EPA inventory (based largely on 1990s emission data) and reflects significant upgrades at metering and regulating stations, improvements in leak detection and maintenance activities, as well as potential effects from differences in methodologies between the two studies.

  20. Latin hypercube approach to estimate uncertainty in ground water vulnerability

    USGS Publications Warehouse

    Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.

    2007-01-01

    A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
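
    A minimal sketch of the sampling idea is given below, assuming independent normal errors on both the regression coefficients (model error) and the explanatory variables (data error). The coefficient values, standard errors, and predictor values are hypothetical placeholders, not the High Plains model inputs.

      # Hedged sketch: Latin hypercube propagation of model and data error
      # through a logistic regression prediction of vulnerability.
      import numpy as np
      from scipy.stats import qmc, norm

      beta_mean = np.array([-2.0, 0.8, 1.5])     # intercept + two explanatory variables
      beta_se   = np.array([0.3, 0.2, 0.4])      # model error (coefficient uncertainty)
      x_mean    = np.array([1.0, 0.5, 0.2])      # predictors at one grid cell (x0 = 1)
      x_se      = np.array([0.0, 0.1, 0.05])     # data error (none on the intercept term)

      n = 5000
      sampler = qmc.LatinHypercube(d=6, seed=1)
      u = sampler.random(n)                      # uniform LHS samples in [0, 1)
      beta = norm.ppf(u[:, :3], loc=beta_mean, scale=beta_se)
      x    = norm.ppf(u[:, 3:], loc=x_mean,  scale=np.maximum(x_se, 1e-12))

      logit = np.sum(beta * x, axis=1)
      prob = 1.0 / (1.0 + np.exp(-logit))        # probability of elevated contaminant

      lo, hi = np.percentile(prob, [2.5, 97.5])
      print(f"median vulnerability = {np.median(prob):.2f}, 95% interval = [{lo:.2f}, {hi:.2f}]")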

  1. Observations and Bayesian location methodology of transient acoustic signals (likely blue whales) in the Indian Ocean, using a hydrophone triplet.

    PubMed

    Le Bras, Ronan J; Kuzma, Heidi; Sucic, Victor; Bokelmann, Götz

    2016-05-01

    A notable sequence of calls was encountered, spanning several days in January 2003, in the central part of the Indian Ocean on a hydrophone triplet recording acoustic data at a 250 Hz sampling rate. This paper presents signal processing methods applied to the waveform data to detect and group the recorded signals and to extract amplitude and bearing estimates for them. An approximate location for the source of the sequence of calls is inferred from the features extracted from the waveforms. As the source approaches the hydrophone triplet, the source level (SL) of the calls is estimated at 187 ± 6 dB re 1 μPa at 1 m in the 15-60 Hz frequency range. The calls are attributed to a subgroup of blue whales, Balaenoptera musculus, with a characteristic acoustic signature. A Bayesian location method using probabilistic models for bearing and amplitude is demonstrated on the call sequence. The method is applied to the case of detection at a single triad of hydrophones and results in a probability distribution map for the origin of the calls. It can be extended to detections at multiple triads and, because of the Bayesian formulation, additional modeling complexity can be built in as needed.
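
    The grid-based sketch below illustrates the single-triad idea: combine an angular (bearing) likelihood with an amplitude likelihood based on a simple spherical transmission-loss model to obtain a posterior map for the call origin. The geometry, error standard deviations, source-level prior, and transmission-loss model are assumptions for illustration, not the models used in the paper.

      # Hedged sketch: Bayesian posterior over source position from one bearing
      # and one received level at a hydrophone triplet at the origin.
      import numpy as np

      x = np.linspace(-500, 500, 201)              # candidate positions, km
      y = np.linspace(-500, 500, 201)
      X, Y = np.meshgrid(x, y)

      r = np.hypot(X, Y) + 1e-3                    # range from the triplet (km)
      bearing_pred = np.degrees(np.arctan2(X, Y)) % 360.0

      bearing_obs, bearing_sd = 40.0, 5.0          # observed bearing (deg) and uncertainty
      rl_obs, rl_sd = 120.0, 3.0                   # received level (dB) and uncertainty
      sl_prior = 187.0                             # assumed source level (dB re 1 uPa at 1 m)

      # Likelihoods: wrapped angular misfit and spherical spreading loss 20*log10(r)
      dtheta = (bearing_pred - bearing_obs + 180.0) % 360.0 - 180.0
      rl_pred = sl_prior - 20.0 * np.log10(r * 1000.0)
      log_post = -0.5 * (dtheta / bearing_sd) ** 2 - 0.5 * ((rl_pred - rl_obs) / rl_sd) ** 2

      post = np.exp(log_post - log_post.max())
      post /= post.sum()
      i, j = np.unravel_index(post.argmax(), post.shape)
      print(f"MAP source location ~ ({X[i, j]:.0f} km, {Y[i, j]:.0f} km)")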

  2. A Probabilistic Tsunami Hazard Study of the Auckland Region, Part I: Propagation Modelling and Tsunami Hazard Assessment at the Shoreline

    NASA Astrophysics Data System (ADS)

    Power, William; Wang, Xiaoming; Lane, Emily; Gillibrand, Philip

    2013-09-01

    Regional source tsunamis represent a potentially devastating threat to coastal communities in New Zealand, yet are infrequent events for which little historical information is available. It is therefore essential to develop robust methods for quantitatively estimating the hazards posed, so that effective mitigation measures can be implemented. We develop a probabilistic model for the tsunami hazard posed to the Auckland region of New Zealand from the Kermadec Trench and the southern New Hebrides Trench subduction zones. An innovative feature of our model is the systematic analysis of uncertainty regarding the magnitude-frequency distribution of earthquakes in the source regions. The methodology is first used to estimate the tsunami hazard at the coastline, and then used to produce a set of scenarios that can be applied to produce probabilistic maps of tsunami inundation for the study region; the production of these maps is described in part II. We find that the 2,500 year return period regional source tsunami hazard for the densely populated east coast of Auckland is dominated by events originating in the Kermadec Trench, while the equivalent hazard to the sparsely populated west coast is approximately equally due to events on the Kermadec Trench and the southern New Hebrides Trench.

  3. Estimation of Release History of Pollutant Source and Dispersion Coefficient of Aquifer Using Trained ANN Model

    NASA Astrophysics Data System (ADS)

    Srivastava, R.; Ayaz, M.; Jain, A.

    2013-12-01

    Knowledge of the release history of a groundwater pollutant source is critical in the prediction of the future trend of the pollutant movement and in choosing an effective remediation strategy. Moreover, for source sites which have undergone an ownership change, the estimated release history can be utilized for appropriate allocation of the costs of remediation among different parties who may be responsible for the contamination. Estimation of the release history with the help of concentration data is an inverse problem that becomes ill-posed because of the irreversible nature of the dispersion process. Breakthrough curves represent the temporal variation of pollutant concentration at a particular location, and contain significant information about the source and the release history. Several methodologies have been developed to solve the inverse problem of estimating the source and/or porous medium properties using the breakthrough curves as a known input. A common problem in the use of the breakthrough curves for this purpose is that, in most field situations, we have little or no information about the time of measurement of the breakthrough curve with respect to the time when the pollutant source becomes active. We develop an Artificial Neural Network (ANN) model to estimate the release history of a groundwater pollutant source through the use of breakthrough curves. It is assumed that the source location is known but the time-dependent contaminant source strength is unknown. This temporal variation of the strength of the pollutant source is the output of the ANN model that is trained using the Levenberg-Marquardt algorithm utilizing synthetically generated breakthrough curves as inputs. A single hidden layer was used in the neural network and, to utilize just sufficient information and reduce the required sampling duration, only the upper half of the curve is used as the input pattern. The second objective of this work was to identify the aquifer parameters. An ANN model was developed to estimate the longitudinal and transverse dispersion coefficients following a philosophy similar to the one used earlier. Performance of the trained ANN model is evaluated for a three-dimensional case, first with perfect data and then with erroneous data with an error level up to 10 percent. Since the solution is highly sensitive to the errors in the input data, instead of using the raw data, we smooth the upper half of the erroneous breakthrough curve by approximating it with a fourth-order polynomial, which is used as the input pattern for the ANN model. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and, in addition to minimizing the effect of uncertainties in the tail ends of the breakthrough curve, is capable of estimating both the release history and aquifer parameters reasonably well. Results for the case with erroneous data having different error levels demonstrate the practical applicability and robustness of the ANN models. It is observed that with an increase in the error level, the correlation coefficient of the training, testing and validation regressions tends to decrease, although the value stays within acceptable limits even for reasonably large error levels.
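
    A minimal sketch of the preprocessing-plus-ANN idea follows, assuming a toy convolution forward model and scikit-learn's MLPRegressor with the 'lbfgs' solver as a stand-in for the Levenberg-Marquardt training used by the authors; the synthetic data, network size, and the fourth-order polynomial smoothing step are all illustrative.

      # Hedged sketch: keep the rising half of a breakthrough curve, smooth it with
      # a 4th-order polynomial, and map the coefficients to a release history.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      def breakthrough(release, t):
          """Toy forward model: convolve a release history with a fixed transfer function."""
          kernel = np.exp(-0.5 * ((t - 5.0) / 1.5) ** 2)
          return np.convolve(release, kernel)[: len(t)] / kernel.sum()

      t = np.linspace(0.0, 20.0, 80)
      X, Y = [], []
      for _ in range(300):                               # synthetic training pairs
          release = np.abs(rng.normal(size=5))           # 5-segment source strengths
          c = breakthrough(np.repeat(release, 16), t)
          c_noisy = c + rng.normal(scale=0.02, size=c.size)
          rising = c_noisy[: np.argmax(c_noisy) + 1]     # upper (rising) half only
          coeffs = np.polyfit(np.linspace(0, 1, rising.size), rising, 4)  # smoothing
          X.append(coeffs)
          Y.append(release)

      model = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=5000, random_state=0)
      model.fit(np.array(X), np.array(Y))
      print("training R^2:", round(model.score(np.array(X), np.array(Y)), 3))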

  4. Piecewise synonyms for enhanced UMLS source terminology integration.

    PubMed

    Huang, Kuo-Chuan; Geller, James; Halper, Michael; Cimino, James J

    2007-10-11

    The UMLS contains more than 100 source vocabularies and is growing via the integration of others. When integrating a new source, the source terms already in the UMLS must first be found. The easiest approach to this is simple string matching. However, string matching usually does not find all concepts that should be found. A new methodology, based on the notion of piecewise synonyms, for enhancing the process of concept discovery in the UMLS is presented. This methodology is supported by first creating a general synonym dictionary based on the UMLS. Each multi-word source term is decomposed into its component words, allowing for the generation of separate synonyms for each word from the general synonym dictionary. The recombination of these synonyms into new terms creates an expanded pool of matching candidates for terms from the source. The methodology is demonstrated with respect to an existing UMLS source. It shows a 34% improvement over simple string matching.
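
    A minimal sketch of the piecewise-synonym expansion is shown below, assuming a tiny hand-made synonym dictionary in place of the UMLS-derived general synonym dictionary; the terms and concepts are hypothetical.

      # Hedged sketch: decompose a multi-word term, substitute per-word synonyms,
      # and recombine them into an expanded pool of matching candidates.
      from itertools import product

      synonyms = {
          "kidney": ["kidney", "renal"],
          "failure": ["failure", "insufficiency"],
          "acute": ["acute"],
      }

      def piecewise_candidates(term: str) -> set[str]:
          words = term.lower().split()
          options = [synonyms.get(w, [w]) for w in words]   # fall back to the word itself
          return {" ".join(combo) for combo in product(*options)}

      existing_concepts = {"acute renal insufficiency", "chronic kidney disease"}
      candidates = piecewise_candidates("acute kidney failure")
      print(candidates & existing_concepts)   # a match simple string matching would miss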

  5. Evaluation of a Change Detection Methodology by Means of Binary Thresholding Algorithms and Informational Fusion Processes

    PubMed Central

    Molina, Iñigo; Martinez, Estibaliz; Arquero, Agueda; Pajares, Gonzalo; Sanchez, Javier

    2012-01-01

    Landcover is subject to continuous changes on a wide variety of temporal and spatial scales. Those changes produce significant effects in human and natural activities. Maintaining an updated spatial database with the occurred changes allows a better monitoring of the Earth’s resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery, aerial photographs, etc., have proven to be suitable and secure data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are processed, then different thresholding algorithms for change/no_change are applied to these indices in order to better estimate the statistical parameters of these categories, finally the indices are integrated into a change detection multisource fusion process, which allows generating a single CD result from several combination of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. Then, the obtained results are evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also been proved efficiently for identifying the change detection index with the higher contribution. PMID:22737023

  6. Evaluation of a change detection methodology by means of binary thresholding algorithms and informational fusion processes.

    PubMed

    Molina, Iñigo; Martinez, Estibaliz; Arquero, Agueda; Pajares, Gonzalo; Sanchez, Javier

    2012-01-01

    Landcover is subject to continuous changes on a wide variety of temporal and spatial scales. Those changes produce significant effects in human and natural activities. Maintaining an updated spatial database with the occurred changes allows a better monitoring of the Earth's resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery, aerial photographs, etc., have proven to be suitable and secure data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are processed, then different thresholding algorithms for change/no_change are applied to these indices in order to better estimate the statistical parameters of these categories, finally the indices are integrated into a change detection multisource fusion process, which allows generating a single CD result from several combination of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. Then, the obtained results are evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also been proved efficiently for identifying the change detection index with the higher contribution.

  7. Does Metformin Reduce Cancer Risks? Methodologic Considerations.

    PubMed

    Golozar, Asieh; Liu, Shuiqing; Lin, Joeseph A; Peairs, Kimberly; Yeh, Hsin-Chieh

    2016-01-01

    The substantial burden of cancer and diabetes and the association between the two conditions has been a motivation for researchers to look for targeted strategies that can simultaneously affect both diseases and reduce their overlapping burden. In the absence of randomized clinical trials, researchers have taken advantage of the availability and richness of administrative databases and electronic medical records to investigate the effects of drugs on cancer risk among diabetic individuals. The majority of these studies suggest that metformin could potentially reduce cancer risk. However, the validity of this purported reduction in cancer risk is limited by several methodological flaws either in the study design or in the analysis. Whether metformin use decreases cancer risk relies heavily on the availability of valid data sources with complete information on confounders, accurate assessment of drug use, appropriate study design, and robust analytical techniques. The majority of the observational studies assessing the association between metformin and cancer risk suffer from methodological shortcomings and efforts to address these issues have been incomplete. Future investigations on the association between metformin and cancer risk should clearly address the methodological issues due to confounding by indication, prevalent user bias, and time-related biases. Although the proposed strategies do not guarantee a bias-free estimate for the association between metformin and cancer, they will reduce synthesis of and reporting of erroneous results.

  8. Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Dȩbski, Wojciech

    2008-07-01

    Many aspects of earthquake source dynamics like dynamic stress drop, rupture velocity and directivity, etc. are currently inferred from the source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on the sought functions. Such parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to the Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of magnitude ML ≈ 3.1 that occurred at Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process with double pulses of energy release. However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
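
    The sketch below illustrates the sampling idea on a toy problem: a nonnegative source time function is parameterized through an exponential transform, the posterior combines a Gaussian waveform misfit with a smooth prior, and a plain Metropolis random walk samples it to give pointwise means and a posteriori standard deviations. The Green's function, noise level, and parameterization are invented and far cruder than the pseudo-spectral scheme proposed in the paper.

      # Hedged sketch: Metropolis sampling of a positivity-constrained source time
      # function given data = STF * Green's function + noise.
      import numpy as np

      rng = np.random.default_rng(1)
      n, dt = 64, 0.01
      t = np.arange(n) * dt
      green = np.exp(-t / 0.05) * np.sin(2 * np.pi * 20 * t)      # toy Green's function

      true_stf = np.exp(-0.5 * ((t - 0.1) / 0.02) ** 2)            # "unknown" STF
      data = np.convolve(true_stf, green)[:n] + 0.05 * rng.standard_normal(n)

      def log_post(params):
          stf = np.exp(params)                                     # positivity constraint
          resid = data - np.convolve(stf, green)[:n]
          return -0.5 * np.sum((resid / 0.05) ** 2) - 0.5 * np.sum(params ** 2) / 25.0

      x = np.zeros(n)                                              # log-STF parameters
      lp = log_post(x)
      samples = []
      for k in range(20000):                                       # Metropolis random walk
          prop = x + 0.02 * rng.standard_normal(n)
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              x, lp = prop, lp_prop
          if k % 20 == 0:
              samples.append(np.exp(x))

      stf_mean = np.mean(samples, axis=0)
      stf_sd = np.std(samples, axis=0)                             # a posteriori uncertainty
      print("peak STF estimate:", round(stf_mean.max(), 2), "+/-", round(stf_sd[stf_mean.argmax()], 2))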

  9. Exposure assessment in investigations of waterborne illness: a quantitative estimate of measurement error

    PubMed Central

    Jones, Andria Q; Dewey, Catherine E; Doré, Kathryn; Majowicz, Shannon E; McEwen, Scott A; Waltner-Toews, David

    2006-01-01

    Background Exposure assessment is typically the greatest weakness of epidemiologic studies of disinfection by-products (DBPs) in drinking water, which largely stems from the difficulty in obtaining accurate data on individual-level water consumption patterns and activity. Thus, surrogate measures for such waterborne exposures are commonly used. Little attention, however, has been directed towards formal validation of these measures. Methods We conducted a study in the City of Hamilton, Ontario (Canada) in 2001–2002, to assess the accuracy of two surrogate measures of home water source: (a) urban/rural status as assigned using residential postal codes, and (b) mapping of residential postal codes to municipal water systems within a Geographic Information System (GIS). We then assessed the accuracy of a commonly-used surrogate measure of an individual's actual drinking water source, namely, their home water source. Results The surrogates for home water source provided good classification of residents served by municipal water systems (approximately 98% predictive value), but did not perform well in classifying those served by private water systems (average: 63.5% predictive value). More importantly, we found that home water source was a poor surrogate measure of the individuals' actual drinking water source(s), being associated with high misclassification errors. Conclusion This study demonstrated substantial misclassification errors associated with a surrogate measure commonly used in studies of drinking water disinfection byproducts. Further, the limited accuracy of two surrogate measures of an individual's home water source warrants caution in their use in exposure classification methodology. While these surrogates are inexpensive and convenient, they should not be substituted for direct collection of accurate data pertaining to the subjects' waterborne disease exposure. In instances where such surrogates must be used, estimation of the misclassification and its subsequent effects is recommended for the interpretation and communication of results. Our results also lend support for further investigation into the quantification of the exposure misclassification associated with these surrogate measures, which would provide useful estimates for consideration in interpretation of waterborne disease studies. PMID:16729887

  10. From field data to volumes: constraining uncertainties in pyroclastic eruption parameters

    NASA Astrophysics Data System (ADS)

    Klawonn, Malin; Houghton, Bruce F.; Swanson, Donald A.; Fagents, Sarah A.; Wessel, Paul; Wolfe, Cecily J.

    2014-07-01

    In this study, we aim to understand the variability in eruption volume estimates derived from field studies of pyroclastic deposits. We distributed paper maps of the 1959 Kīlauea Iki tephra to 101 volcanologists worldwide, who produced hand-drawn isopachs. Across the returned maps, uncertainty in isopach areas is 7% across the well-sampled deposit but increases to over 30% for isopachs that are governed by the largest and smallest thickness measurements. We fit the exponential, power-law, and Weibull functions through the isopach thickness versus area½ values and find volume estimate variations up to a factor of 4.9 for a single map. Across all maps and methodologies, we find an average standard deviation for a total volume of s = 29%. The volume uncertainties are largest for the most proximal (s = 62%) and distal field (s = 53%) and small for the densely sampled intermediate deposit (s = 8%). For the Kīlauea Iki 1959 eruption, we find that the deposit beyond the 5-cm isopach contains only 2% of the total erupted volume, whereas the near-source deposit contains 48% and the intermediate deposit 50% of the total volume. Thus, the relative uncertainty within each zone impacts the total volume estimates differently. The observed uncertainties for the different deposit regions in this study illustrate a fundamental problem of estimating eruption volumes: while some methodologies may provide better fits to the isopach data or rely on fewer free parameters, the main issue remains the predictive capabilities of the empirical functions for the regions where measurements are missing.
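
    As a worked example of one of the fitting choices above, the sketch fits an exponential thinning law T(x) = T0·exp(-k·x), with x = (isopach area)½, and converts it to a volume using V = 2·T0/k² (a Pyle-type relation for exponential thinning). The isopach values are invented for illustration, not the Kīlauea Iki data.

      # Hedged sketch: exponential thinning fit and volume integration.
      import numpy as np
      from scipy.optimize import curve_fit

      thickness_cm = np.array([200.0, 100.0, 50.0, 20.0, 10.0, 5.0])    # isopach values
      area_km2     = np.array([0.3, 1.2, 3.5, 9.0, 18.0, 35.0])         # enclosed areas

      x = np.sqrt(area_km2)                    # km
      T = thickness_cm / 1e5                   # km

      def exp_thinning(x, T0, k):
          return T0 * np.exp(-k * x)

      (T0, k), _ = curve_fit(exp_thinning, x, T, p0=(T.max(), 1.0))
      volume_km3 = 2.0 * T0 / k**2             # integral of T0*exp(-k*sqrt(A)) dA
      print(f"T0 = {T0 * 1e5:.0f} cm, k = {k:.2f} 1/km, volume ~ {volume_km3:.4f} km^3")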

  11. Mapping surface heat fluxes by assimilating GOES land surface temperature and SMAP products

    NASA Astrophysics Data System (ADS)

    Lu, Y.; Steele-Dunne, S. C.; Van De Giesen, N.

    2017-12-01

    Surface heat fluxes significantly affect the land-atmosphere interaction, but their modelling is often hindered by the lack of in-situ measurements and by high spatial heterogeneity. Here, we propose a hybrid particle assimilation strategy to estimate surface heat fluxes by assimilating GOES land surface temperature (LST) data and SMAP products into a simple dual-source surface energy balance model, in which the requirement for in-situ data is minimized. The study aims to estimate two key parameters: a neutral bulk heat transfer coefficient (CHN) and an evaporative fraction (EF). CHN scales the sum of surface energy fluxes, and EF represents the partitioning between flux components. To bridge the huge resolution gap between GOES and SMAP data, SMAP data are assimilated using a particle filter to update soil moisture which constrains EF, and GOES data are assimilated with an adaptive particle batch smoother to update CHN. The methodology is applied to an area in the US Southern Great Plains with forcing data from NLDAS-2 and the GPM mission. Assessment against in-situ observations suggests that the sensible and latent heat flux estimates are greatly improved at both daytime and 30-min scales after assimilation, particularly for latent heat fluxes. Comparison against an LST-only assimilation case demonstrates that despite the coarse resolution, assimilating SMAP data is not only beneficial but also crucial for successful and robust flux estimation, particularly when the modelling uncertainties are large. Since the methodology is independent of in-situ data, it can be easily applied to other areas.

  12. Development of methane emission factors for enteric fermentation in cattle from Benin using IPCC Tier 2 methodology.

    PubMed

    Kouazounde, J B; Gbenou, J D; Babatounde, S; Srivastava, N; Eggleston, S H; Antwi, C; Baah, J; McAllister, T A

    2015-03-01

    The objective of this study was to develop emission factors (EF) for methane (CH4) emissions from enteric fermentation in cattle native to Benin. Information on livestock characteristics and diet practices specific to the Benin cattle population were gathered from a variety of sources and used to estimate EF according to Tier 2 methodology of the 2006 Intergovernmental Panel on Climate Change (IPCC) Guidelines for National Greenhouse Gas Inventories. Most cattle from Benin are Bos taurus represented by Borgou, Somba and Lagune breeds. They are mainly multi-purpose, being used for production of meat, milk, hides and draft power and grazed in open pastures and crop lands comprising tropical forages and crops. Estimated enteric CH4 EFs varied among cattle breeds and subcategory owing to differences in proportions of gross energy intake expended to meet maintenance, production and activity. EFs ranged from 15.0 to 43.6, 16.9 to 46.3 and 24.7 to 64.9 kg CH4/head per year for subcategories of Lagune, Somba and Borgou cattle, respectively. Average EFs for cattle breeds were 24.8, 29.5 and 40.2 kg CH4/head per year for Lagune, Somba and Borgou cattle, respectively. The national EF for cattle from Benin was 39.5 kg CH4/head per year. This estimated EF was 27.4% higher than the default EF suggested by IPCC for African cattle with the exception of dairy cattle. The outcome of the study underscores the importance of obtaining country-specific EF to estimate global enteric CH4 emissions.
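
    For reference, the Tier 2 emission factor reduces to a one-line formula once gross energy intake (GE, MJ/head/day) and the methane conversion factor Ym (% of GE) are fixed, using the energy content of methane (55.65 MJ/kg CH4). The GE and Ym values in the example below are placeholders, not the Benin-specific inputs derived in the study.

      # Hedged sketch of the IPCC 2006 Tier 2 enteric-fermentation emission factor.
      def enteric_ch4_ef(gross_energy_mj_per_day: float, ym_percent: float) -> float:
          """EF (kg CH4/head/yr) = GE * (Ym/100) * 365 / 55.65."""
          return gross_energy_mj_per_day * (ym_percent / 100.0) * 365.0 / 55.65

      # Hypothetical multi-purpose grazing animal
      print(round(enteric_ch4_ef(gross_energy_mj_per_day=95.0, ym_percent=6.5), 1), "kg CH4/head/yr")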

  13. Methodology of Estimation of Methane Emissions from Coal Mines in Poland

    NASA Astrophysics Data System (ADS)

    Patyńska, Renata

    2014-03-01

    A literature review concerning methane emissions in Poland showed that the National Greenhouse Inventory 2007 [13] was published in 2009. It was prepared firstly to meet Poland's obligations resulting from point 3.1 of Decision no. 280/2004/WE of the European Parliament and of the Council of 11 February 2004, concerning a mechanism for monitoring community greenhouse gas emissions and for implementing the Kyoto Protocol, and secondly, for the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol. The National Greenhouse Inventory states that there are no detailed data concerning methane emissions in collieries in the Polish mining industry. That is why the methane emission in the methane coal mines of Górnośląskie Zagłębie Węglowe - GZW (Upper Silesian Coal Basin - USCB) in Poland was meticulously studied and evaluated. The methodology applied for estimating methane emission from the GZW coal mining system was used for the four basic sources of emission, including methane emission during the mining and post-mining processes. This approach follows the IPCC guidelines of 2006 [10]. The update of the proposed methods (IPCC 2006) for estimating the methane emissions of hard coal mines (active and abandoned) in Poland assumes that the methane emission factor (EF) is calculated based on the methane coal mine output and actual values of absolute methane content. The result of verifying the method of estimating methane emission during the mining process for Polish coal mines is an equation for the methane emission factor EF.

  14. Estimation of uncertainty for contour method residual stress measurements

    DOE PAGES

    Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; ...

    2014-12-03

    This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).

  15. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    NASA Astrophysics Data System (ADS)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, multivariate data collected at volcano observatories and robust physical models are becoming crucial for providing effective volcano monitoring. Nevertheless, forecasting volcanic eruptions is notoriously difficult. Within this framework, one of the most promising methods to evaluate volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although it is still very time consuming to solve the inverse problem for near-real time interpretations. Here, we present a method that can be efficiently used to estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted for an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose an FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observational points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool for magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up source parameter estimation for a given volcano showing signs of unrest.

  16. An inventory of nitrous oxide emissions from agriculture in the UK using the IPCC methodology: emission estimate, uncertainty and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Brown, L.; Armstrong Brown, S.; Jarvis, S. C.; Syed, B.; Goulding, K. W. T.; Phillips, V. R.; Sneath, R. W.; Pain, B. F.

    Nitrous oxide emission from UK agriculture was estimated, using the IPCC default values of all emission factors and parameters, to be 87 Gg N2O-N in both 1990 and 1995. This estimate was shown, however, to have an overall uncertainty of 62%. The largest component of the emission (54%) was from the direct (soil) sector. Two of the three emission factors applied within the soil sector, EF1 (direct emission from soil) and EF3PRP (emission from pasture, range and paddock), were amongst the most influential on the total estimate, producing ±31% and +11% to -17% changes in emissions, respectively, when varied through the IPCC range from the default value. The indirect sector (from leached N and deposited ammonia) contributed 29% of the total emission, and had the largest uncertainty (126%). The factors determining the fraction of N leached (FracLEACH) and emissions from it (EF5) were the two most influential. These parameters are poorly specified and there is great potential to improve the emission estimate for this component. Use of mathematical models (NCYCLE and SUNDIAL) to predict FracLEACH suggested that the IPCC default value for this parameter may be too high for most situations in the UK. Comparison with other UK-derived inventories suggests that the IPCC methodology may overestimate emission. Although the IPCC approach includes additional components relative to the other inventories (most notably emission from indirect sources), estimates for the common components (i.e. fertiliser and animals), and emission factors used, are higher than those of other inventories. Whilst it is recognised that the IPCC approach is generalised in order to allow widespread applicability, sufficient data are available to specify at least two of the most influential parameters, i.e. EF1 and FracLEACH, more accurately, and so provide an improved estimate of nitrous oxide emissions from UK agriculture.
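
    A schematic of the IPCC source structure discussed above (direct soil emission via EF1, pasture/range/paddock excreta via EF3PRP, and an indirect leaching term via FracLEACH and EF5) is sketched below, with all activity data and factor values entered as placeholders rather than the UK 1990/1995 inputs or official defaults.

      # Hedged sketch of an IPCC-style N2O inventory aggregation.
      def n2o_inventory_gg(fert_n_gg, excreta_n_gg, ef1, ef3_prp, frac_leach, ef5):
          direct   = fert_n_gg * ef1                                 # direct soil emission
          grazing  = excreta_n_gg * ef3_prp                          # pasture, range and paddock
          indirect = (fert_n_gg + excreta_n_gg) * frac_leach * ef5   # leached N only
          return direct + grazing + indirect                         # Gg N2O-N per year

      # Hypothetical inputs (Gg N applied/excreted and unitless factors)
      total = n2o_inventory_gg(fert_n_gg=1500.0, excreta_n_gg=1000.0,
                               ef1=0.0125, ef3_prp=0.02, frac_leach=0.3, ef5=0.025)
      print(round(total, 1), "Gg N2O-N/yr")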

  17. The public health benefits of insulation retrofits in existing housing in the United States

    PubMed Central

    Levy, Jonathan I; Nishioka, Yurika; Spengler, John D

    2003-01-01

    Background Methodological limitations make it difficult to quantify the public health benefits of energy efficiency programs. To address this issue, we developed a risk-based model to estimate the health benefits associated with marginal energy usage reductions and applied the model to a hypothetical case study of insulation retrofits in single-family homes in the United States. Methods We modeled energy savings with a regression model that extrapolated findings from an energy simulation program. Reductions of fine particulate matter (PM2.5) emissions and particle precursors (SO2 and NOx) were quantified using fuel-specific emission factors and marginal electricity analyses. Estimates of population exposure per unit emissions, varying by location and source type, were extrapolated from past dispersion model runs. Concentration-response functions for morbidity and mortality from PM2.5 were derived from the epidemiological literature, and economic values were assigned to health outcomes based on willingness to pay studies. Results In total, the insulation retrofits would save 800 TBTU (8 × 10¹⁴ British Thermal Units) per year across 46 million homes, resulting in 3,100 fewer tons of PM2.5, 100,000 fewer tons of NOx, and 190,000 fewer tons of SO2 per year. These emission reductions are associated with outcomes including 240 fewer deaths, 6,500 fewer asthma attacks, and 110,000 fewer restricted activity days per year. At a state level, the health benefits per unit energy savings vary by an order of magnitude, illustrating that multiple factors (including population patterns and energy sources) influence health benefit estimates. The health benefits correspond to $1.3 billion per year in externalities averted, compared with $5.9 billion per year in economic savings. Conclusion In spite of significant uncertainties related to the interpretation of PM2.5 health effects and other dimensions of the model, our analysis demonstrates that a risk-based methodology is viable for national-level energy efficiency programs. PMID:12740041
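
    The impact-pathway chain described above can be written as a short sequence of multiplications: avoided emissions, health impact per ton, and monetary value per outcome. In the sketch below only the avoided-emission tonnages are taken from the abstract; the per-ton mortality factors and the willingness-to-pay value are hypothetical placeholders used solely to show the structure of the calculation.

      # Hedged sketch of an emissions -> health -> valuation chain.
      emissions_avoided_tons = {"PM2.5": 3100.0, "NOx": 100000.0, "SO2": 190000.0}  # from the abstract
      deaths_per_ton = {"PM2.5": 0.06, "NOx": 0.0005, "SO2": 0.0008}  # hypothetical C-R-derived factors
      value_per_death_usd = 6.0e6                                     # hypothetical WTP value

      deaths = sum(emissions_avoided_tons[p] * deaths_per_ton[p] for p in emissions_avoided_tons)
      benefit_usd = deaths * value_per_death_usd
      print(f"~{deaths:.0f} deaths avoided per year, ~${benefit_usd / 1e9:.1f}B/yr in averted externalities")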

  18. A time-frequency analysis of the dynamics of cortical networks of sleep spindles from MEG-EEG recordings

    PubMed Central

    Zerouali, Younes; Lina, Jean-Marc; Sekerovic, Zoran; Godbout, Jonathan; Dube, Jonathan; Jolicoeur, Pierre; Carrier, Julie

    2014-01-01

    Sleep spindles are a hallmark of NREM sleep. They result from a widespread thalamo-cortical loop and involve synchronous cortical networks that are still poorly understood. We investigated whether brain activity during spindles can be characterized by specific patterns of functional connectivity among cortical generators. For that purpose, we developed a wavelet-based approach aimed at imaging the synchronous oscillatory cortical networks from simultaneous MEG-EEG recordings. First, we detected spindles on the EEG and extracted the corresponding frequency-locked MEG activity under the form of an analytic ridge signal in the time-frequency plane (Zerouali et al., 2013). Secondly, we performed source reconstruction of the ridge signal within the Maximum Entropy on the Mean framework (Amblard et al., 2004), yielding a robust estimate of the cortical sources producing observed oscillations. Lastly, we quantified functional connectivity among cortical sources using phase-locking values. The main innovations of this methodology are (1) to reveal the dynamic behavior of functional networks resolved in the time-frequency plane and (2) to characterize functional connectivity among MEG sources through phase interactions. We showed, for the first time, that the switch from fast to slow oscillatory mode during sleep spindles is required for the emergence of specific patterns of connectivity. Moreover, we show that earlier synchrony during spindles was associated with mainly intra-hemispheric connectivity whereas later synchrony was associated with global long-range connectivity. We propose that our methodology can be a valuable tool for studying the connectivity underlying neural processes involving sleep spindles, such as memory, plasticity or aging. PMID:25389381

  19. Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion in the West Coast ShakeAlert System

    NASA Astrophysics Data System (ADS)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Murray, J. R.

    2016-12-01

    Finite-fault source algorithms can greatly benefit earthquake early warning (EEW) systems. Estimates of finite-fault parameters provide spatial information, which can significantly improve real-time shaking calculations and help with disaster response. In this project, we have focused on integrating a finite-fault seismic-geodetic algorithm into the West Coast ShakeAlert framework. The seismic part is FinDer 2, a C++ version of the algorithm developed by Böse et al. (2012). It interpolates peak ground accelerations and calculates the best fault length and strike from template matching. The geodetic part is a C++ version of BEFORES, the algorithm developed by Minson et al. (2014) that uses a Bayesian methodology to search for the most probable slip distribution on a fault of unknown orientation. Ultimately, these two will be used together where FinDer generates a Bayesian prior for BEFORES via the methodology of Minson et al. (2015), and the joint solution will generate estimates of finite-fault extent, strike, dip, best slip distribution, and magnitude. We have created C++ versions of both FinDer and BEFORES using open source libraries and have developed a C++ Application Protocol Interface (API) for them both. Their APIs allow FinDer and BEFORES to contribute to the ShakeAlert system via an open source messaging system, ActiveMQ. FinDer has been receiving real-time data, detecting earthquakes, and reporting messages on the development system for several months. We are also testing FinDer extensively with Earthworm tankplayer files. BEFORES has been tested with ActiveMQ messaging in the ShakeAlert framework, and works off a FinDer trigger. We are finishing the FinDer-BEFORES connections in this framework, and testing this system via seismic-geodetic tankplayer files. This will include actual and simulated data.

  20. Improving bioaerosol exposure assessments of composting facilities — Comparative modelling of emissions from different compost ages and processing activities

    NASA Astrophysics Data System (ADS)

    Taha, M. P. M.; Drew, G. H.; Tamer, A.; Hewings, G.; Jordinson, G. M.; Longhurst, P. J.; Pollard, S. J. T.

    We present bioaerosol source term concentrations from passive and active composting sources and compare emissions from green waste compost aged 1, 2, 4, 6, 8, 12 and 16 weeks. Results reveal that the age of compost has little effect on the bioaerosol concentrations emitted for passive windrow sources. However, emissions from turning compost during the early stages may be higher than during the later stages of the composting process. The bioaerosol emissions from passive sources were in the range of 10³-10⁴ cfu m⁻³, with releases from active sources typically 1-log higher. We propose improvements to current risk assessment methodologies by examining emission rates and the differences between two air dispersion models for the prediction of downwind bioaerosol concentrations at off-site points of exposure. The SCREEN3 model provides a more precautionary estimate of the source depletion curves of bioaerosol emissions in comparison to ADMS 3.3. The results from both models predict that bioaerosol concentrations decrease to below typical background concentrations before 250 m, the distance at which the regulator in England and Wales may require a risk assessment to be completed.

  1. National Health Accounts development: lessons from Thailand.

    PubMed

    Tangcharoensathien, V; Laixuthai, A; Vasavit, J; Tantigate, N A; Prajuabmoh-Ruffolo, W; Vimolkit, D; Lertiendumrong, J

    1999-12-01

    National Health Accounts (NHA) are an important tool to demonstrate how a country's health resources are spent, on what services, and who pays for them. NHA are used by policy-makers for monitoring health expenditure patterns; policy instruments to re-orientate the pattern can then be further introduced. The National Economic and Social Development Board (NESDB) of Thailand produces aggregate health expenditure data but its estimation methods have several limitations. This has led to the research and development of an NHA prototype in 1994, through an agreed definition of health expenditure and methodology, in consultation with peer and other stakeholders. This is an initiative by local researchers without external support, with an emphasis on putting the system into place. It involves two steps: firstly, the flow of funds from ultimate sources of finance to financing agencies; and secondly, the use of funds by financing agencies. Five ultimate sources and 12 financing agencies (seven public and five private) were identified. Use of consumption expenditures was listed under four main categories and 32 sub-categories. Using 1994 figures, we estimated a total health expenditure of 128,305.11 million Baht; 84.07% consumption and 15.93% capital formation. Of total consumption expenditure, 36.14% was spent on purchasing care from public providers, with 32.35% on private providers, 5.93% on administration and 9.65% on all other public health programmes. Public sources of finance were responsible for 48.79% and private 51.21% of the total 1994 health expenditure. Total health expenditure accounted for 3.56% of GDP (consumption expenditure at 3.00% of GDP and capital formation at 0.57% of GDP). The NESDB consumption expenditure estimate in 1994 was 180,516 million Baht or 5.01% of GDP, of which private sources were dominant (82.17%) and public sources played a minor role (17.83%). The discrepancy of consumption expenditure between the two estimates is 2.01% of GDP. There is also a large difference in the public and private proportion of consumption expenses, at 46:54 in NHA and 18:82 in NESDB. Future NHA sustainable development is proposed. Firstly, we need more accurate aggregate and disaggregated data, especially from households, who take the lion's share of total expenditure, based on amended questionnaires in the National Statistical Office Household Socio-Economic Survey. Secondly, partnership building with NESDB and other financing agencies is needed in the further development of the financial information system to suit the biennial NHA report. Thirdly, expenditures need breaking down into ambulatory and inpatient care for monitoring and the proper introduction of policy instruments. We also suggest that in a pluralistic health care system, the breakdown of spending on public and private providers is important. Finally, a sustainable NHA development and utilization of NHA for planning and policy development is the prime objective. International comparisons through collaborative efforts in standardizing definition and methodology will be a useful by-product when developing countries are able to sustain their NHA reports.

  2. Benefit-Cost Analysis of Integrated Paratransit Systems : Volume 6. Technical Appendices.

    DOT National Transportation Integrated Search

    1979-09-01

    This last volume, includes five technical appendices which document the methodologies used in the benefit-cost analysis. They are the following: Scenario analysis methodology; Impact estimation; Example of impact estimation; Sensitivity analysis; Agg...

  3. Debiased estimates for NEO orbits, absolute magnitudes, and source regions

    NASA Astrophysics Data System (ADS)

    Granvik, Mikael; Morbidelli, Alessandro; Jedicke, Robert; Bolin, Bryce T.; Bottke, William; Beshore, Edward C.; Vokrouhlicky, David; Nesvorny, David; Michel, Patrick

    2017-10-01

    The debiased absolute-magnitude and orbit distributions as well as source regions for near-Earth objects (NEOs) provide a fundamental frame of reference for studies on individual NEOs as well as on more complex population-level questions. We present a new four-dimensional model of the NEO population that describes debiased steady-state distributions of semimajor axis (a), eccentricity (e), inclination (i), and absolute magnitude (H). We calibrate the model using NEO detections by the 703 and G96 stations of the Catalina Sky Survey (CSS) during 2005-2012 corresponding to objects with 17

  4. The Freight Analysis Framework Verson 4 (FAF4) - Building the FAF4 Regional Database: Data Sources and Estimation Methodologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Ho-Ling; Hargrove, Stephanie; Chin, Shih-Miao

    The Freight Analysis Framework (FAF) integrates data from a variety of sources to create a comprehensive national picture of freight movements among states and major metropolitan areas by all modes of transportation. It provides a national picture of current freight flows to, from, and within the United States, assigns the flows to the transportation network, and projects freight flow patterns into the future. The FAF4 is the fourth database of its kind: FAF1 provided estimates for truck, rail, and water tonnage for calendar year 1998, FAF2 provided a more complete picture based on the 2002 Commodity Flow Survey (CFS), and FAF3 made further improvements building on the 2007 CFS. Since the first FAF effort, a number of changes in both data sources and products have taken place. The FAF4 flow matrix described in this report is used as the base-year data to forecast future freight activities, projecting shipment weights and values from year 2020 to 2045 in five-year intervals. It also provides the basis for annual estimates to the FAF4 flow matrix, aiming to provide users with the timeliest data. Furthermore, FAF4 truck freight is routed on the national highway network to produce the FAF4 network database and flow assignments for truck. This report details the data sources and methodologies applied to develop the base year 2012 FAF4 database. An overview of the FAF4 components is briefly discussed in Section 2. Effects on FAF4 from the changes in the 2012 CFS are highlighted in Section 3. Section 4 provides a general discussion on the process used in filling data gaps within the domestic CFS matrix, specifically on the estimation of CFS suppressed/unpublished cells. Over a dozen CFS OOS components of FAF4 are then addressed in Section 5 through Section 11 of this report. This includes discussions of farm-based agricultural shipments in Section 5 and shipments from fishery and logging sectors in Section 6. Shipments of municipal solid wastes and debris from construction and demolition activities are covered in Section 7. Movements involving OOS industry sectors on Retail, Services, and Household/Business Moves are addressed in Section 8. Flows of OOS commodity on crude petroleum and natural gas are presented in Sections 9 and 10, respectively. Shipments of foreign trade, including trade with Canada/Mexico, international airfreight, and waterborne foreign trade, are then discussed in Section 11. Several appendices are also provided at the end of this report to offer additional information.

  5. Methodology to estimate particulate matter emissions from certified commercial aircraft engines.

    PubMed

    Wayson, Roger L; Fleming, Gregg G; Lovinelli, Ralph

    2009-01-01

    Today, about one-fourth of U.S. commercial service airports, including 41 of the busiest 50, are either in nonattainment or maintenance areas per the National Ambient Air Quality Standards. U.S. aviation activity is forecasted to triple by 2025, while at the same time, the U.S. Environmental Protection Agency (EPA) is evaluating stricter particulate matter (PM) standards on the basis of documented human health and welfare impacts. Stricter federal standards are expected to impede capacity and limit aviation growth if regulatory mandated emission reductions occur as for other non-aviation sources (i.e., automobiles, power plants, etc.). In addition, strong interest exists as to the role aviation emissions play in air quality and climate change issues. These reasons underpin the need to quantify and understand PM emissions from certified commercial aircraft engines, which has led to the need for a methodology to predict these emissions. Standardized sampling techniques to measure volatile and nonvolatile PM emissions from aircraft engines do not exist. As such, a first-order approximation (FOA) was derived to fill this need based on available information. FOA1.0 only allowed prediction of nonvolatile PM. FOA2.0 was a change to include volatile PM emissions on the basis of the ratio of nonvolatile to volatile emissions. Recent collaborative efforts by industry (manufacturers and airlines), research establishments, and regulators have begun to provide further insight into the estimation of the PM emissions. The resultant PM measurement datasets are being analyzed to refine sampling techniques and progress towards standardized PM measurements. These preliminary measurement datasets also support the continued refinement of the FOA methodology. FOA3.0 disaggregated the prediction techniques to allow for independent prediction of nonvolatile and volatile emissions on a more theoretical basis. The Committee for Aviation Environmental Protection of the International Civil Aviation Organization endorsed the use of FOA3.0 in February 2007. Further commitment was made to improve the FOA as new data become available, until such time the methodology is rendered obsolete by a fully validated database of PM emission indices for today's certified commercial fleet. This paper discusses related assumptions and derived equations for the FOA3.0 methodology used worldwide to estimate PM emissions from certified commercial aircraft engines within the vicinity of airports.

  6. Development of regional stump-to-mill logging cost estimators

    Treesearch

    Chris B. LeDoux; John E. Baumgras

    1989-01-01

    Planning logging operations requires estimating the logging costs for the sale or tract being harvested. Decisions need to be made on equipment selection and its application to terrain. In this paper a methodology is described that has been developed and implemented to solve the problem of accurately estimating logging costs by region. The methodology blends field time...

  7. Design Science Methodology Applied to a Chemical Surveillance Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhuanyi; Han, Kyungsik; Charles-Smith, Lauren E.

    Public health surveillance systems gain significant benefits from integrating existing early incident detection systems, supported by closed data sources, with open source data. However, identifying potential alerting incidents relies on finding accurate, reliable sources and presenting the high volume of data in a way that increases analysts' work efficiency; a challenge for any system that leverages open source data. In this paper, we present the design concept and the applied design science research methodology of ChemVeillance, a chemical analyst surveillance system. Our work portrays a system design and approach that translates theoretical methodology into practice, creating a powerful surveillance system built for specific use cases. Researchers, designers, developers, and related professionals in the health surveillance community can build upon the principles and methodology described here to enhance and broaden current surveillance systems, leading to improved situational awareness based on a robust integrated early warning system.

  8. A Radial Basis Function Approach to Financial Time Series Analysis

    DTIC Science & Technology

    1993-12-01

    including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data... collection of practical techniques to address these issues for a modeling methodology, Radial Basis Function networks. These techniques include efficient... methodology often then amounts to a careful consideration of the interplay between model complexity and reliability. These will be recurrent themes

  9. Tunnel and Station Cost Methodology : Mined Tunnels

    DOT National Transportation Integrated Search

    1983-01-01

    The main objective of this study was to develop a model for estimating the cost of subway station and tunnel construction. This report describes a cost estimating methodology for subway tunnels that can be used by planners, designers, owners, and gov...

  10. Tunnel and Station Cost Methodology Volume II: Stations

    DOT National Transportation Integrated Search

    1981-01-01

    The main objective of this study was to develop a model for estimating the cost of subway station and tunnel construction. This report describes a cost estimating methodology for subway tunnels that can be used by planners, designers, owners, and gov...

  11. Economic Effects of Increased Control Zone Sizes in Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Datta, Koushik

    1998-01-01

    A methodology for estimating the economic effects of different control zone sizes used in conflict resolutions between aircraft is presented in this paper. The methodology is based on estimating the difference in flight times of aircraft with and without the control zone, and converting the difference into a direct operating cost. Using this methodology the effects of increased lateral and vertical control zone sizes are evaluated.
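
    As a rough illustration of the cost-conversion step described above, the sketch below multiplies the flight-time difference by a direct operating cost rate; the per-minute rate and flight times are hypothetical, not values from the paper.

        # Illustrative only: convert the extra flight time caused by a control
        # zone into a direct operating cost.
        def delta_operating_cost(time_with_zone_min, time_without_zone_min,
                                 cost_per_min_usd):
            extra_minutes = time_with_zone_min - time_without_zone_min
            return extra_minutes * cost_per_min_usd

        # A resolution manoeuvre that adds 1.8 minutes at an assumed 45 USD/min
        print(delta_operating_cost(62.3, 60.5, 45.0))  # about 81 USD per flight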

  12. Predicting Vessel Trajectories from Ais Data Using R

    DTIC Science & Technology

    2017-06-01

    future position at the expectation level set by the user, therefore producing a valid methodology for both estimating the future vessel location and for assessing anomalous vessel behavior. ... methodology that brings them one step closer to attaining these goals. A key idea in the current literature is that the series of vessel locations

  13. Recommendations for benefit-risk assessment methodologies and visual representations.

    PubMed

    Hughes, Diana; Waddingham, Ed; Mt-Isa, Shahrul; Goginsky, Alesia; Chan, Edmond; Downey, Gerald F; Hallgreen, Christine E; Hockley, Kimberley S; Juhaeri, Juhaeri; Lieftucht, Alfons; Metcalf, Marilyn A; Noel, Rebecca A; Phillips, Lawrence D; Ashby, Deborah; Micaleff, Alain

    2016-03-01

    The purpose of this study is to draw on the practical experience from the PROTECT BR case studies and make recommendations regarding the application of a number of methodologies and visual representations for benefit-risk assessment. Eight case studies based on the benefit-risk balance of real medicines were used to test various methodologies that had been identified from the literature as having potential applications in benefit-risk assessment. Recommendations were drawn up based on the results of the case studies. A general pathway through the case studies was evident, with various classes of methodologies having roles to play at different stages. Descriptive and quantitative frameworks were widely used throughout to structure problems, with other methods such as metrics, estimation techniques and elicitation techniques providing ways to incorporate technical or numerical data from various sources. Similarly, tree diagrams and effects tables were universally adopted, with other visualisations available to suit specific methodologies or tasks as required. Every assessment was found to follow five broad stages: (i) Planning, (ii) Evidence gathering and data preparation, (iii) Analysis, (iv) Exploration and (v) Conclusion and dissemination. Adopting formal, structured approaches to benefit-risk assessment was feasible in real-world problems and facilitated clear, transparent decision-making. Prior to this work, no extensive practical application and appraisal of methodologies had been conducted using real-world case examples, leaving users with limited knowledge of their usefulness in the real world. The practical guidance provided here takes us one step closer to a harmonised approach to benefit-risk assessment from multiple perspectives. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Performance-based methodology for assessing seismic vulnerability and capacity of buildings

    NASA Astrophysics Data System (ADS)

    Shibin, Lin; Lili, Xie; Maosheng, Gong; Ming, Li

    2010-06-01

    This paper presents a performance-based methodology for the assessment of seismic vulnerability and capacity of buildings. The vulnerability assessment methodology is based on the HAZUS methodology and the improved capacity-demand-diagram method. The spectral displacement (Sd) of performance points on a capacity curve is used to estimate the damage level of a building. The relationship between Sd and peak ground acceleration (PGA) is established, and then a new vulnerability function is expressed in terms of PGA. Furthermore, the expected value of the seismic capacity index (SCev) is provided to estimate the seismic capacity of buildings based on the probability distribution of damage levels and the corresponding seismic capacity index. The results indicate that the proposed vulnerability methodology is able to assess seismic damage of a large building stock directly and quickly following an earthquake. The SCev provides an effective index to measure the seismic capacity of buildings and illustrates the relationship between the seismic capacity of buildings and seismic action. The estimated result is compared with damage surveys of the cities of Dujiangyan and Jiangyou in the M8.0 Wenchuan earthquake, revealing that the methodology is acceptable for seismic risk assessment and decision making. The primary reasons for discrepancies between the estimated results and the damage surveys are discussed.

  15. Remote sensing as a tool for estimating soil erosion potential

    NASA Technical Reports Server (NTRS)

    Morris-Jones, D. R.; Morgan, K. M.; Kiefer, R. W.

    1979-01-01

    The Universal Soil Loss Equation is a frequently used methodology for estimating soil erosion potential. The Universal Soil Loss Equation requires a variety of types of geographic information (e.g. topographic slope, soil erodibility, land use, crop type, and soil conservation practice) in order to function. This information is traditionally gathered from topographic maps, soil surveys, field surveys, and interviews with farmers. Remote sensing data sources and interpretation techniques provide an alternative method for collecting information regarding land use, crop type, and soil conservation practice. Airphoto interpretation techniques and medium altitude, multi-date color and color infrared positive transparencies (70mm) were utilized in this study to determine their effectiveness for gathering the desired land use/land cover data. Successful results were obtained within the test site, a 6136 hectare watershed in Dane County, Wisconsin.
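
    For reference, the Universal Soil Loss Equation mentioned above has the standard multiplicative form (textbook notation, not reproduced from the paper):

        A = R \cdot K \cdot (LS) \cdot C \cdot P

    where A is the predicted average annual soil loss, R the rainfall erosivity factor, K the soil erodibility factor, LS the combined slope length and steepness factor, C the cover-management (crop) factor, and P the support practice factor. The remote sensing interpretation described above supplies the land use, crop type, and conservation practice inputs needed for C and P.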

  16. Final Report: Seismic Hazard Assessment at the PGDP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhinmeng

    2007-06-01

    Selecting a level of seismic hazard at the Paducah Gaseous Diffusion Plant for policy considerations and engineering design is not an easy task because it depends not only on seismic hazard, but also on seismic risk and other related environmental, social, and economic issues. Seismic hazard is the main focus. There is no question that there are seismic hazards at the Paducah Gaseous Diffusion Plant because of its proximity to several known seismic zones, particularly the New Madrid Seismic Zone. The issues in estimating seismic hazard are (1) the methods being used and (2) the difficulty in characterizing the uncertainties of seismic sources, earthquake occurrence frequencies, and ground-motion attenuation relationships. This report summarizes how input data were derived, which methodologies were used, and what the hazard estimates at the Paducah Gaseous Diffusion Plant are.

  17. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.

  18. Quantification of groundwater infiltration and surface water inflows in urban sewer networks based on a multiple model approach.

    PubMed

    Karpf, Christian; Krebs, Peter

    2011-05-01

    The management of sewer systems requires information about discharge and variability of typical wastewater sources in urban catchments. Especially the infiltration of groundwater and the inflow of surface water (I/I) are important for making decisions about the rehabilitation and operation of sewer networks. This paper presents a methodology to identify I/I and estimate its quantity. For each flow fraction in sewer networks, an individual model approach is formulated whose parameters are optimised by the method of least squares. This method was applied to estimate the contributions to the wastewater flow in the sewer system of the City of Dresden (Germany), where data availability is good. Absolute flows of I/I and their temporal variations are estimated. Further information on the characteristics of infiltration is gained by clustering and grouping sewer pipes according to the attributes construction year and groundwater influence and relating these resulting classes to infiltration behaviour. Further, it is shown that condition classes based on CCTV-data can be used to estimate the infiltration potential of sewer pipes. Copyright © 2011 Elsevier Ltd. All rights reserved.
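
    A minimal sketch of the flow-fraction separation idea described above, assuming a simple parameterisation in which dry-weather foul flow follows a fixed diurnal pattern, groundwater infiltration is a slowly varying constant, and surface-water inflow scales with rainfall; the model forms and variable names are illustrative assumptions, not the authors' formulations.

        import numpy as np

        def separate_fractions(q_obs, diurnal_pattern, rain):
            """Least-squares split of measured sewer flow into foul flow,
            groundwater infiltration and rain-driven inflow (illustrative)."""
            # Model: q = a*pattern + b + c*rain
            X = np.column_stack([diurnal_pattern, np.ones_like(q_obs), rain])
            (a, b, c), *_ = np.linalg.lstsq(X, q_obs, rcond=None)
            return {"foul_flow": a * diurnal_pattern,
                    "infiltration": b,
                    "inflow": c * rain}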

  19. Procedures for the estimation of regional scale atmospheric emissions—An example from the North West Region of England

    NASA Astrophysics Data System (ADS)

    Lindley, S. J.; Longhurst, J. W. S.; Watson, A. F. R.; Conlan, D. E.

    This paper considers the value of applying an alternative pro rata methodology to the estimation of atmospheric emissions from a given regional or local area. Such investigations into less time and resource intensive means of providing estimates in comparison to traditional methods are important due to the potential role of new methods in the development of air quality management plans. A pro rata approach is used here to estimate emissions of SO2, NOx, CO, CO2, VOCs and black smoke from all sources and Pb from transportation for the North West region of England. This method has the advantage of using readily available data as well as being an easily repeatable procedure which provides a good indication of emissions to be expected from a particular geographical region. This can then provide the impetus for further emission studies and ultimately a regional/local air quality management plan. Results suggest that between 1987 and 1991 trends in the emissions of the pollutants considered have been less favourable in the North West region than in the nation as a whole.
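
    A small illustration of the pro rata scaling step described above: a national emission total is apportioned to the region using a readily available surrogate statistic (population, fuel sales, vehicle-kilometres, and so on). The surrogate choice and numbers below are hypothetical.

        # Pro rata (top-down) emission estimate: scale a national total by the
        # region's share of a surrogate activity statistic. Values are illustrative.
        def pro_rata_emission(national_total_kt, regional_surrogate, national_surrogate):
            return national_total_kt * regional_surrogate / national_surrogate

        # Apportioning a hypothetical national NOx total by population share
        print(pro_rata_emission(2700.0, 6.9e6, 57.8e6))  # about 322 kt/yr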

  20. Sensitivity tests to define the source apportionment performance criteria in the DeltaSA tool

    NASA Astrophysics Data System (ADS)

    Pernigotti, Denise; Belis, Claudio A.

    2017-04-01

    Identification and quantification of the contribution of emission sources to a given area is a key task for the design of abatement strategies. Moreover, European member states are obliged to report this kind of information for zones where the pollution levels exceed the limit values. At present, little is known about the performance and uncertainty of the variety of methodologies used for source apportionment and the comparability between the results of studies using different approaches. The source apportionment Delta (SA Delta) is a tool developed by the EC-JRC to support particulate matter source apportionment modellers in the identification of sources (for factor analysis studies) and/or in the measurement of their performance. The source identification is performed by the tool by measuring the proximity of any user chemical profile to preloaded repository data (SPECIATE and SPECIEUROPE). The model performance criteria are based on standard statistical indexes calculated by comparing participants' source contribution estimates and their time series with preloaded reference data. Those preloaded data refer to previous European SA intercomparison exercises: the first with real-world data (22 participants), the second with synthetic data (25 participants) and the last with real-world data, which was also extended to Chemical Transport Models (38 receptor models and 4 CTMs). The references used for the model performance evaluation are 'true' (predefined by JRC) for the synthetic exercise, while they are calculated as ensemble averages of the participants' results in the real-world intercomparisons. The candidates used for each source ensemble reference calculation were selected among participants' results based on a number of consistency checks plus the similarity of their chemical profiles to the repository measured data. The estimation of the ensemble reference uncertainty is crucial in order to evaluate the users' performances against it. For this reason a sensitivity analysis on different methods to estimate the ensemble references' uncertainties was performed by re-analyzing the synthetic intercomparison dataset, the only one where 'true' reference and ensemble reference contributions were both present. The Delta SA is now available on-line and will be presented, with a critical discussion of the sensitivity analysis on the ensemble reference uncertainty. In particular, the degree of mutual agreement among participants on the presence of a certain source should be taken into account. The importance of the synthetic intercomparisons for catching receptor models' common biases will also be stressed.

  1. Accounting for Parcel-Allocation Variability in Practice: Combining Sources of Uncertainty and Choosing the Number of Allocations.

    PubMed

    Sterba, Sonya K; Rights, Jason D

    2016-01-01

    Item parceling remains widely used under conditions that can lead to parcel-allocation variability in results. Hence, researchers may be interested in quantifying and accounting for parcel-allocation variability within sample. To do so in practice, three key issues need to be addressed. First, how can we combine sources of uncertainty arising from sampling variability and parcel-allocation variability when drawing inferences about parameters in structural equation models? Second, on what basis can we choose the number of repeated item-to-parcel allocations within sample? Third, how can we diagnose and report proportions of total variability per estimate arising due to parcel-allocation variability versus sampling variability? This article addresses these three methodological issues. Developments are illustrated using simulated and empirical examples, and software for implementing them is provided.

  2. Construction of a Calibrated Probabilistic Classification Catalog: Application to 50k Variable Sources in the All-Sky Automated Survey

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien

    2012-12-01

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.

  3. A preliminary probabilistic analysis of tsunami sources of seismic and non-seismic origin applied to the city of Naples, Italy

    NASA Astrophysics Data System (ADS)

    Tonini, R.; Anita, G.

    2011-12-01

    In both worldwide and regional historical catalogues, most of the tsunamis are caused by earthquakes and a minor percentage is represented by all the other non-seismic sources. On the other hand, tsunami hazard and risk studies are often applied to very specific areas, where this global trend can be different or even inverted, depending on the kind of potential tsunamigenic sources which characterize the case study. So far, few probabilistic approaches consider the contribution of landslides and/or phenomena derived from volcanic activity, i.e. pyroclastic flows and flank collapses, as predominant in the PTHA, partly because of the difficulty of estimating the corresponding recurrence times. These considerations are valid, for example, for the city of Naples, Italy, which is surrounded by a complex active volcanic system (Vesuvio, Campi Flegrei, Ischia) that presents a significant number of potential tsunami sources of non-seismic origin compared to the seismic ones. In this work we present the preliminary results of a probabilistic multi-source tsunami hazard assessment applied to Naples. The method to estimate the uncertainties will be based on Bayesian inference. This is the first step towards a more comprehensive task which will provide a tsunami risk quantification for this town in the frame of the Italian national project ByMuR (http://bymur.bo.ingv.it). This three-year ongoing project has the final objective of developing a Bayesian multi-risk methodology to quantify the risk related to different natural hazards (volcanoes, earthquakes and tsunamis) applied to the city of Naples.

  4. CONSTRUCTION OF A CALIBRATED PROBABILISTIC CLASSIFICATION CATALOG: APPLICATION TO 50k VARIABLE SOURCES IN THE ALL-SKY AUTOMATED SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.

    2012-12-15

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.

  5. A linear least squares approach for evaluation of crack tip stress field parameters using DIC

    NASA Astrophysics Data System (ADS)

    Harilal, R.; Vyasarayani, C. P.; Ramji, M.

    2015-12-01

    In the present work, an experimental study is carried out to estimate the mixed-mode stress intensity factors (SIFs) for different cracked specimen configurations using the digital image correlation (DIC) technique. For the estimation of mixed-mode SIFs using DIC, a new algorithm is proposed for the extraction of the crack tip location and the coefficients in the multi-parameter displacement field equations. From those estimated coefficients, the SIFs can be extracted. The required displacement data surrounding the crack tip have been obtained using the 2D-DIC technique. An open source 2D DIC software, Ncorr, is used for the displacement field extraction. The presented methodology has been used to extract mixed-mode SIFs for specimen configurations such as single edge notch (SEN) and centre slant crack (CSC) specimens made of Al 2014-T6 alloy. The experimental results have been compared with the analytical values and they are found to be in good agreement, thereby confirming the accuracy of the proposed algorithm.

  6. The social cost of rheumatoid arthritis in Italy: the results of an estimation exercise.

    PubMed

    Turchetti, G; Bellelli, S; Mosca, M

    2014-03-14

    The objective of this study is to estimate the mean annual social cost per adult person and the total social cost of rheumatoid arthritis (RA) in Italy. A literature review was performed by searching primary economic studies on adults in order to collect cost data of RA in Italy in the last decade. The review results were merged with data from institutional sources for estimating, following the methodological steps of cost-of-illness analysis, the social cost of RA in Italy. The mean annual social cost of RA was €13,595 per adult patient in Italy. Affecting 259,795 persons, RA determines a social cost of about €3.5 billion in Italy. Non-medical direct costs and indirect costs represent the main cost items (48% and 31%) of the total social cost of RA in Italy. Based on these results, it appears evident that an assessment of the economic burden of RA based solely on direct medical costs gives a limited view of the phenomenon.
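
    As a quick consistency check, the total figure follows directly from the per-patient estimate and the prevalence quoted above:

        13{,}595~\text{EUR/patient} \times 259{,}795~\text{patients} \approx 3.53 \times 10^{9}~\text{EUR per year}

    which rounds to the reported €3.5 billion.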

  7. Estimation and projection of nitrous oxide (N2O) emissions from anthropogenic sources in Taiwan.

    PubMed

    Tsai, Wen-Tien; Chyan, Jih-Ming

    2006-03-01

    Taiwan is a densely populated and developed country with more than 97% of energy consumption supplied by imported fuels. Greenhouse gas emissions are thus becoming significant environmental issues in the country. Using the Intergovernmental Panel on Climate Change (IPCC) recommended methodologies, anthropogenic emissions of nitrous oxide (N2O) in Taiwan during 2000-2003 were estimated to be around 41 thousand metric tons annually. About 87% of N2O emissions come from agriculture, 7% from the energy sector, 3% from the industrial processes sector, and 3% from the waste sector. On the basis of N2O emissions in 2000, projections for the year 2010 show that emissions were estimated to decline by about 6%, mainly due to agricultural changes in response to entry into the WTO in 2002. For the year 2020, in contrast, N2O emissions were projected to grow by about 17%. This is based on the reasonable scenario that a new adipic acid/nitric acid plant will probably be started after 2010.

  8. Security Events and Vulnerability Data for Cybersecurity Risk Estimation.

    PubMed

    Allodi, Luca; Massacci, Fabio

    2017-08-01

    Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in Finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology by using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.

  9. Analytical magmatic source modelling from a joint inversion of ground deformation and focal mechanisms data

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Scandura, Danila; Palano, Mimmo; Musumeci, Carla

    2014-05-01

    Seismicity and ground deformation represent the principal geophysical methods for volcano monitoring and provide important constraints on subsurface magma movements. The occurrence of migrating seismic swarms, as observed at several volcanoes worldwide, is commonly associated with dike intrusions. In addition, on active volcanoes, (de)pressurization and/or intrusion of magmatic bodies stresses and deforms the surrounding crustal rocks, often causing earthquakes randomly distributed in time within a volume extending about 5-10 km from the wall of the magmatic bodies. Although advances in space-based, geodetic and seismic networks have significantly improved volcano monitoring in the last decades on an increasing number of volcanoes worldwide, quantitative models relating deformation and seismicity are not common. The observation of several episodes of volcanic unrest throughout the world, where the movement of magma through the shallow crust was able to produce local rotation of the ambient stress field, introduces an opportunity to improve the estimate of the parameters of a deformation source. In particular, during these episodes of volcanic unrest a radial pattern of P-axes of the focal mechanism solutions, similar to that of ground deformation, has been observed. Therefore, taking into account additional information from focal mechanisms data, we propose a novel approach to volcanic source modeling based on the joint inversion of deformation and focal plane solutions, assuming that both observations are due to the same source. The methodology is first verified against a synthetic dataset of surface deformation and strain within the medium, and then applied to real data from an unrest episode that occurred before the May 13th 2008 eruption at Mt. Etna (Italy). The main results clearly indicate that the joint inversion improves the accuracy of the estimated source parameters by about 70%. The statistical tests indicate that the source depth is the parameter with the highest increase in accuracy. In addition, a sensitivity analysis confirms that displacement data are more useful for constraining the pressure and the horizontal location of the source than its depth, while the P-axes better constrain the depth estimate.

  10. Markov Logic Networks for Adverse Drug Event Extraction from Text.

    PubMed

    Natarajan, Sriraam; Bangera, Vishal; Khot, Tushar; Picado, Jose; Wazalwar, Anurag; Costa, Vitor Santos; Page, David; Caldwell, Michael

    2017-05-01

    Adverse drug events (ADEs) are a major concern and point of emphasis for the medical profession, government, and society. A diverse set of techniques from epidemiology, statistics, and computer science are being proposed and studied for ADE discovery from observational health data (e.g., EHR and claims data), social network data (e.g., Google and Twitter posts), and other information sources. Methodologies are needed for evaluating, quantitatively measuring, and comparing the ability of these various approaches to accurately discover ADEs. This work is motivated by the observation that text sources such as the Medline/Medinfo library provide a wealth of information on human health. Unfortunately, ADEs often result from unexpected interactions, and the connection between conditions and drugs is not explicit in these sources. Thus, in this work we address the question of whether we can quantitatively estimate relationships between drugs and conditions from the medical literature. This paper proposes and studies a state-of-the-art NLP-based extraction of ADEs from text.

  11. Creating an anthropomorphic digital MR phantom—an extensible tool for comparing and evaluating quantitative imaging algorithms

    NASA Astrophysics Data System (ADS)

    Bosca, Ryan J.; Jackson, Edward F.

    2016-01-01

    Assessing and mitigating the various sources of bias and variance associated with image quantification algorithms is essential to the use of such algorithms in clinical research and practice. Assessment is usually accomplished with grid-based digital reference objects (DRO) or, more recently, digital anthropomorphic phantoms based on normal human anatomy. Publicly available digital anthropomorphic phantoms can provide a basis for generating realistic model-based DROs that incorporate the heterogeneity commonly found in pathology. Using a publicly available vascular input function (VIF) and digital anthropomorphic phantom of a normal human brain, a methodology was developed to generate a DRO based on the general kinetic model (GKM) that represented realistic and heterogeneously enhancing pathology. GKM parameters were estimated from a deidentified clinical dynamic contrast-enhanced (DCE) MRI exam. This clinical imaging volume was co-registered with a discrete tissue model, and model parameters estimated from clinical images were used to synthesize a DCE-MRI exam that consisted of normal brain tissues and a heterogeneously enhancing brain tumor. An example application of spatial smoothing was used to illustrate potential applications in assessing quantitative imaging algorithms. A voxel-wise Bland-Altman analysis demonstrated negligible differences between the parameters estimated with and without spatial smoothing (using a small radius Gaussian kernel). In this work, we reported an extensible methodology for generating model-based anthropomorphic DROs containing normal and pathological tissue that can be used to assess quantitative imaging algorithms.
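
    For reference, the general kinetic model referred to above is usually written in its standard Tofts form (shown here without a plasma-volume term; this is the textbook expression, not necessarily the exact variant used by the authors):

        C_t(t) = K^{trans} \int_0^t C_p(\tau)\, e^{-k_{ep}(t - \tau)}\, d\tau, \qquad k_{ep} = K^{trans} / v_e

    where C_t is the tissue contrast-agent concentration, C_p the vascular input function, K^{trans} the volume transfer constant, and v_e the extravascular extracellular volume fraction.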

  12. Grey literature in meta-analyses.

    PubMed

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.

  13. Estimating direction in brain-behavior interactions: Proactive and reactive brain states in driving.

    PubMed

    Garcia, Javier O; Brooks, Justin; Kerick, Scott; Johnson, Tony; Mullen, Tim R; Vettel, Jean M

    2017-04-15

    Conventional neuroimaging analyses have ascribed function to particular brain regions, exploiting the power of the subtraction technique in fMRI and event-related potential analyses in EEG. Moving beyond this convention, many researchers have begun exploring network-based neurodynamics and coordination between brain regions as a function of behavioral parameters or environmental statistics; however, most approaches average evoked activity across the experimental session to study task-dependent networks. Here, we examined on-going oscillatory activity as measured with EEG and use a methodology to estimate directionality in brain-behavior interactions. After source reconstruction, activity within specific frequency bands (delta: 2-3Hz; theta: 4-7Hz; alpha: 8-12Hz; beta: 13-25Hz) in a priori regions of interest was linked to continuous behavioral measurements, and we used a predictive filtering scheme to estimate the asymmetry between brain-to-behavior and behavior-to-brain prediction using a variant of Granger causality. We applied this approach to a simulated driving task and examined directed relationships between brain activity and continuous driving performance (steering behavior or vehicle heading error). Our results indicated that two neuro-behavioral states may be explored with this methodology: a Proactive brain state that actively plans the response to the sensory information and is characterized by delta-beta activity, and a Reactive brain state that processes incoming information and reacts to environmental statistics primarily within the alpha band. Published by Elsevier Inc.

  14. Assessing Satellite-Based Fire Data for use in the National Emissions Inventory

    NASA Technical Reports Server (NTRS)

    Soja, Amber J.; Al-Saadi, Jassim; Giglio, Louis; Randall, Dave; Kittaka, Chieko; Pouliot, George; Kordzi, Joseph J.; Raffuse, Sean; Pace, Thompson G.; Pierce, Thomas E.; hide

    2009-01-01

    Biomass burning is significant to emission estimates because: (1) it can be a major contributor of particulate matter and other pollutants; (2) it is one of the most poorly documented of all sources; (3) it can adversely affect human health; and (4) it has been identified as a significant contributor to climate change through feedbacks with the radiation budget. Additionally, biomass burning can be a significant contributor to a region's inability to achieve the National Ambient Air Quality Standards for PM2.5 and ozone, particularly on the top 20% worst air quality days. The United States does not have a standard methodology to track fire occurrence or area burned, which are essential components for estimating fire emissions. Satellite imagery is available almost instantaneously and has great potential to enhance emission estimates and their timeliness. This investigation compares satellite-derived fire data to ground-based data to assign statistical error and helps provide confidence in these data. The largest fires are identified by all satellites and their spatial domain is accurately sensed. MODIS provides enhanced spatial and temporal information, and GOES ABBA data are able to capture more small agricultural fires. A methodology is presented that combines these satellite data in Near-Real-Time to produce a product that captures 81 to 92% of the total area burned by wildfire, prescribed, agricultural and rangeland burning. Each satellite possesses distinct temporal and spatial capabilities that permit the detection of unique fires that could be omitted if using data from only one satellite.

  15. Common sources and estimated intake of plant sterols in the Spanish diet.

    PubMed

    Jiménez-Escrig, Antonio; Santos-Hidalgo, Ana B; Saura-Calixto, Fulgencio

    2006-05-03

    Plant sterols (PS) are minor lipid components of plants, which may have potential health benefits, mainly based in their cholesterol-lowering effect. The aim of this study was to determine the composition and content of PS in plant-based foods commonly consumed in Spain and to estimate the PS intake in the Spanish diet. For this purpose, the determination of PS content, using a modern methodology to measure free, esterified, and glycosidic sterol forms, was done. Second, an estimation of the intake of PS, using the Spanish National Food Consumption data, was made. The daily intake per person of PS--campesterol, beta-sitosterol, stigmasterol, and stigmastanol--in the Spanish diet was estimated at 276 mg, the largest component being beta-sitosterol (79.7%). Other unknown compounds, tentatively identified as PS, may constitute a considerable potential intake (99 mg). When the daily PS intake among European diets was compared in terms of campesterol, beta-sitosterol, stigmasterol, and stigmastanol, the PS intake in the Spanish diet was in the same range of other countries such as Finland (15.7% higher) or The Netherlands (equal). However, some qualitative differences in the PS sources were detected, that is, the predominant brown bread and vegetable fat consumption in the northern diets versus the white bread and vegetable oil consumption in the Spanish diet. These differences may help to provide a link between the consumption of PS and healthy effects of the diet.

  16. Estimating Domestic Values for EQ-5D Health States Using Survey Data From External Sources.

    PubMed

    Chuang, Ling-Hsiang; Zarate, Victor; Kind, Paul

    2009-02-01

    Health status measures used to quantify outcomes for economic evaluation must be capable of representing health gain in a single index, usually calibrated in terms of the social preferences elicited from "the relevant population." The general problem faced in the majority of countries where social preferences are required for cost-effectiveness analysis is the absence of a value set based on domestic data sources. This article establishes a methodology for estimating domestic visual analog scale (VAS)-based values for EQ-5D health states by adjusting data sets from countries where valuation studies have been carried out. Building upon the relationship between the values for respondents' real health states and hypothetical health states, 2 models are investigated. One assumes that the link between VAS scores for real and hypothetical health states is constant across 2 countries (R1), whereas the other adopts the assumption that the relationship of VAS scores for hypothetical health states between 2 countries functionally corresponds to variation in scores for real health states (R2). Data from national UK and US population surveys were selected to test both methods. The R2 model performed better in generating estimated scores that were closer to observed values. The R2 model seems to offer a viable method for estimating domestic values of health. Such a method could help to bridge the gap between countries as well as regions within a country.

  17. Quantifying Impacts of Food Trade on Water Availability Considering Water Sources

    NASA Astrophysics Data System (ADS)

    Oki, T.; Yano, S.; Hanasaki, N.

    2012-12-01

    Food production requires a lot of water, and traded food potentially has external impacts on the environment through reducing the water availability in the producing region. Water footprint is supposed to be an indicator to reflect the impacts of water use. However, the impacts of water use on environment, resources, and sustainability differ in place and time, and according to the sources of water withdrawals. Therefore it is preferable to characterize the water withdrawals or consumptions rather than just accumulate the total amount of water use when estimating the water footprint. In this study, a new methodology, the global green-water equivalent method, is proposed in which regional characterization factors are determined based on estimates of natural hydrological cycles, such as precipitation, total runoff, and sub-surface runoff, and applied to green-water, river (+reservoir) water, and non-renewable groundwater uses. The water footprint of the world associated with the production of 19 major crops was estimated using an integrated hydrological and water resources modeling system (H08), with atmospheric forcing data for 1991-2000 at a spatial resolution of 0.5 by 0.5 longitudinal and latitudinal degrees. The impact is estimated to be 6 times larger than the simple summation of green and blue water uses, and reflects the climatological water scarcity conditions geographically. The results can be used to compare the possible impacts of food trade associated with various crops from various regions on the environment through reducing the availability of water resources in the cropping area.
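
    One plausible reading of the characterization step described above is a weighted sum over water sources, with the weights derived from the simulated hydrological fluxes; the notation below is illustrative, not taken from the paper:

        WF_{char} = \sum_{s} CF_s \cdot V_s

    where V_s is the volume of water consumed from source s (green water, river and reservoir water, non-renewable groundwater) and CF_s is the regional characterization factor expressing that volume in green-water equivalents.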

  18. Continuous EEG source imaging enhances analysis of EEG-fMRI in focal epilepsy.

    PubMed

    Vulliemoz, S; Rodionov, R; Carmichael, D W; Thornton, R; Guye, M; Lhatoo, S D; Michel, C M; Duncan, J S; Lemieux, L

    2010-02-15

    EEG-correlated fMRI (EEG-fMRI) studies can reveal haemodynamic changes associated with Interictal Epileptic Discharges (IED). Methodological improvements are needed to increase sensitivity and specificity for localising the epileptogenic zone. We investigated whether the estimated EEG source activity improved models of the BOLD changes in EEG-fMRI data, compared to conventional 'event-related' designs based solely on the visual identification of IED. Ten patients with pharmaco-resistant focal epilepsy underwent EEG-fMRI. EEG Source Imaging (ESI) was performed on intra-fMRI averaged IED to identify the irritative zone. The continuous activity of this estimated IED source (cESI) over the entire recording was used for fMRI analysis (cESI model). The maps of BOLD signal changes explained by cESI were compared to results of the conventional IED-related model. ESI was concordant with non-invasive data in 13/15 different types of IED. The cESI model explained significant additional BOLD variance in regions concordant with video-EEG, structural MRI or, when available, intracranial EEG in 10/15 IED. The cESI model allowed better detection of the BOLD cluster, concordant with intracranial EEG in 4/7 IED, compared to the IED model. In 4 IED types, cESI-related BOLD signal changes were diffuse with a pattern suggestive of contamination of the source signal by artefacts, notably incompletely corrected motion and pulse artefact. In one IED type, there was no significant BOLD change with either model. Continuous EEG source imaging can improve the modelling of BOLD changes related to interictal epileptic activity and this may enhance the localisation of the irritative zone. Copyright 2009 Elsevier Inc. All rights reserved.

  19. Quantification of Methane and VOC Emissions from Natural Gas Production in Two Basins with High Ozone Events

    NASA Astrophysics Data System (ADS)

    Edie, R.; Robertson, A.; Snare, D.; Soltis, J.; Field, R. A.; Murphy, S. M.

    2015-12-01

    Since 2005, the Uintah Basin of Utah and the Upper Green River Basin of Wyoming have frequently exceeded the EPA 8-hour allowable ozone level of 75 ppb, spurring interest in volatile organic compounds (VOCs) emitted during oil and gas production. Debate continues over which stage of production (drilling, flowback, normal production, transmission, etc.) is the most prevalent VOC source. In this study, we quantify emissions from normal production on well pads by using the EPA-developed Other Test Method 33a. This methodology combines ground-based measurements of fugitive emissions with 3-D wind data to calculate the methane and VOC emission fluxes from a point source. VOC fluxes are traditionally estimated by gathering a canister of air during a methane flux measurement. The methane:VOC ratio of this canister is determined at a later time in the laboratory, and applied to the known methane flux. The University of Wyoming Mobile Laboratory platform is equipped with a Picarro methane analyzer and an Ionicon Proton Transfer Reaction-Time of Flight-Mass Spectrometer, which provide real-time methane and VOC data for each well pad. This independent measurement of methane and VOCs in situ reveals multiple emission sources on one well pad, with varying methane:VOC ratios. Well pad emission estimates of methane, benzene, toluene and xylene for the two basins will be presented. The different emission source VOC profiles and the limitations of real-time and traditional VOC measurement methods will also be discussed.
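
    A minimal sketch of the traditional ratio-based estimate that the abstract contrasts with the in-situ measurements: a canister-derived VOC:CH4 mass ratio is applied to the measured methane flux. The numbers are hypothetical.

        # Traditional approach (illustrative values): apply a canister-derived
        # VOC:CH4 mass ratio to a methane flux measured with OTM 33a.
        def voc_flux_from_ratio(ch4_flux_kg_per_h, voc_to_ch4_mass_ratio):
            return ch4_flux_kg_per_h * voc_to_ch4_mass_ratio

        print(voc_flux_from_ratio(1.4, 0.35))  # about 0.49 kg VOC per hour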

  20. Seismic Hazard Estimates Using Ill-defined Macroseismic Data at Site

    NASA Astrophysics Data System (ADS)

    Albarello, D.; Mucciarelli, M.

    - A new approach is proposed to the seismic hazard estimate based on documentary data concerning local history of seismic effects. The adopted methodology allows for the use of "poor" data, such as the macroseismic ones, within a formally coherent approach that permits overcoming a number of problems connected to the forcing of available information in the frame of "standard" methodologies calibrated on the use of instrumental data. The use of the proposed methodology allows full exploitation of all the available information (that for many towns in Italy covers several centuries) making possible a correct use of macroseismic data characterized by different levels of completeness and reliability. As an application of the proposed methodology, seismic hazard estimates are presented for two towns located in Northern Italy: Bologna and Carpi.

  1. Optimization of diesel oil biodegradation in seawater using statistical experimental methodology.

    PubMed

    Xia, Wenxiang; Li, Jincheng; Xia, Yan; Song, Zhiwen; Zhou, Jihong

    2012-01-01

    Petroleum hydrocarbons released into the environment can be harmful to higher organisms, but they can be utilized by microorganisms as the sole source of energy for metabolism. To investigate the optimal conditions of diesel oil biodegradation, the Plackett-Burman (PB) design was used for the optimization in the first step, and N source (NaNO₃), P source (KH₂PO₄) and pH were found to be significant factors affecting oil degradation. Then the response surface methodology (RSM) using a central composite design (CCD) was adopted for the augmentation of diesel oil biodegradation and a fitted quadratic model was obtained. The model F-value of 27.25 and the low probability value (<0.0001) indicate that the model is significant and that the concentrations of NaNO₃ and KH₂PO₄ and the pH had significant effects on oil removal during the study. Three-dimensional response surface plots were constructed by plotting the response (oil degradation efficiency) on the z-axis against any two independent variables, and the optimal biodegradation conditions of diesel oil (original total petroleum hydrocarbons 125 mg/L) were determined as follows: NaNO₃ 0.143 g, KH₂PO₄ 0.022 g and pH 7.4. These results fit quite well with the C, N and P ratio in biological cells. Results from the present study might provide a new method to estimate the optimal nitrogen and phosphorus concentration in advance for oil biodegradation according to the composition of petroleum.
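
    A minimal sketch of fitting a quadratic response-surface model of the kind used above, with two coded factors shown for brevity; the design points, responses, and factor names are illustrative, not the study's data.

        import numpy as np

        def fit_quadratic_surface(x1, x2, y):
            """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
            X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta

        # Illustrative two-factor central composite design (coded units) and responses
        a = 1.414
        x1 = np.array([-1, 1, -1, 1, -a, a, 0, 0, 0, 0, 0])
        x2 = np.array([-1, -1, 1, 1, 0, 0, -a, a, 0, 0, 0])
        y  = np.array([61.0, 72.0, 68.0, 83.0, 63.0, 80.0, 66.0, 78.0, 90.0, 89.0, 91.0])
        print(fit_quadratic_surface(x1, x2, y))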

  2. Ambient Vibration Testing for Story Stiffness Estimation of a Heritage Timber Building

    PubMed Central

    Min, Kyung-Won; Kim, Junhee; Park, Sung-Ah; Park, Chan-Soo

    2013-01-01

    This paper investigates dynamic characteristics of a historic wooden structure by ambient vibration testing, presenting a novel story stiffness estimation methodology for the purpose of vibration-based structural health monitoring. As for the ambient vibration testing, measured structural responses are analyzed by two output-only system identification methods (i.e., frequency domain decomposition and stochastic subspace identification) to estimate modal parameters. The proposed story stiffness estimation methodology is based on an eigenvalue problem derived from a vibratory rigid body model. Using the identified natural frequencies, the eigenvalue problem is efficiently solved and uniquely yields the story stiffness. It is noteworthy that application of the proposed methodology is not necessarily confined to the wooden structure exemplified in the paper. PMID:24227999
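
    A minimal sketch of the inverse eigenvalue idea for a two-storey shear-building idealisation: given assumed storey masses and two identified natural frequencies, storey stiffnesses are found so that the model reproduces those frequencies. The lumped-mass shear model and all numerical values are illustrative assumptions, not the authors' rigid body formulation.

        import numpy as np
        from scipy.optimize import fsolve

        m1, m2 = 2.0e4, 1.5e4                 # storey masses (kg), hypothetical
        f_identified = np.array([2.1, 5.6])   # identified natural frequencies (Hz)

        def freq_residual(k):
            k1, k2 = k
            K = np.array([[k1 + k2, -k2], [-k2, k2]])
            M = np.diag([m1, m2])
            w2 = np.sort(np.abs(np.linalg.eigvals(np.linalg.solve(M, K))))
            return np.sqrt(w2) / (2.0 * np.pi) - f_identified

        k_est = fsolve(freq_residual, x0=[1.0e7, 1.0e7])
        print(k_est)  # estimated storey stiffnesses (N/m)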

  3. Fracture mechanics approach to estimate rail wear limits

    DOT National Transportation Integrated Search

    2009-10-01

    This paper describes a systematic methodology to estimate allowable limits for rail head wear in terms of vertical head-height loss, gage-face side wear, and/or the combination of the two. This methodology is based on the principles of engineering fr...

  4. U.S. DOE methodology for the development of geologic storage potential for carbon dioxide at the national and regional scale

    USGS Publications Warehouse

    Goodman, Angela; Hakala, J. Alexandra; Bromhal, Grant; Deel, Dawn; Rodosta, Traci; Frailey, Scott; Small, Michael; Allen, Doug; Romanov, Vyacheslav; Fazio, Jim; Huerta, Nicolas; McIntyre, Dustin; Kutchko, Barbara; Guthrie, George

    2011-01-01

    A detailed description of the United States Department of Energy (US-DOE) methodology for estimating CO2 storage potential for oil and gas reservoirs, saline formations, and unmineable coal seams is provided. The oil and gas reservoirs are assessed at the field level, while saline formations and unmineable coal seams are assessed at the basin level. The US-DOE methodology is intended for external users such as the Regional Carbon Sequestration Partnerships (RCSPs), future project developers, and governmental entities to produce high-level CO2 resource assessments of potential CO2 storage reservoirs in the United States and Canada at the regional and national scale; however, this methodology is general enough that it could be applied globally. The purpose of the US-DOE CO2 storage methodology, definitions of storage terms, and a CO2 storage classification are provided. Methodology for CO2 storage resource estimate calculation is outlined. The Log Odds Method when applied with Monte Carlo Sampling is presented in detail for estimation of the CO2 storage efficiency needed for CO2 storage resource estimates at the regional and national scale. CO2 storage potential estimates reported in the US-DOE's assessment are intended to be distributed online through a geographic information system in NatCarb and made available in hard copy in the Carbon Sequestration Atlas of the United States and Canada. US-DOE's methodology will be continuously refined, incorporating results of the Development Phase projects conducted by the RCSPs from 2008 to 2018. Estimates will be formally updated every two years in subsequent versions of the Carbon Sequestration Atlas of the United States and Canada.
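
    A minimal sketch of the volumetric storage-resource calculation with a Monte Carlo treatment of the efficiency factor, in the spirit of the approach described above; sampling the efficiency in log-odds space keeps the draws between 0 and 1. The distribution parameters and formation properties are hypothetical, not the published DOE values.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical saline-formation properties
        area_m2     = 5.0e9     # formation area (m2)
        thickness_m = 80.0      # gross thickness (m)
        porosity    = 0.18      # total porosity (-)
        rho_co2     = 650.0     # CO2 density at reservoir conditions (kg/m3)

        # Sample the storage efficiency E in log-odds (logit) space
        logit_mean, logit_sd = np.log(0.02 / 0.98), 0.5   # ~2% median efficiency
        z = rng.normal(logit_mean, logit_sd, size=10_000)
        efficiency = 1.0 / (1.0 + np.exp(-z))

        # Volumetric estimate: G = A * h * phi * rho * E
        mass_mt = area_m2 * thickness_m * porosity * rho_co2 * efficiency / 1e9
        print(np.percentile(mass_mt, [10, 50, 90]))   # P10/P50/P90 in megatonnes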

  5. A Methodology to Monitor Airborne PM10 Dust Particles Using a Small Unmanned Aerial Vehicle

    PubMed Central

    Alvarado, Miguel; Gonzalez, Felipe; Erskine, Peter; Cliff, David; Heuff, Darlene

    2017-01-01

    Throughout the process of coal extraction from surface mines, gases and particles are emitted in the form of fugitive emissions by activities such as hauling, blasting and transportation. As these emissions are diffuse in nature, estimations based upon emission factors and dispersion/advection equations need to be measured directly from the atmosphere. This paper expands upon previous research undertaken to develop a relative methodology to monitor PM10 dust particles produced by mining activities making use of small unmanned aerial vehicles (UAVs). A module sensor using a laser particle counter (OPC-N2 from Alphasense, Great Notley, Essex, UK) was tested. An aerodynamic flow experiment was undertaken to determine the position and length of a sampling probe of the sensing module. Flight tests were conducted in order to demonstrate that the sensor provided data which could be used to calculate the emission rate of a source. Emission rates are a critical variable for further predictive dispersion estimates. First, data collected by the airborne module were verified using a 5.0 m tower in which a TSI DRX 8533 (reference dust monitoring device, TSI, Shoreview, MN, USA) and a duplicate of the module sensor were installed. Second, concentration values were collected by the monitoring module attached to the UAV (airborne module), obtaining a percentage error of 1.1%. Finally, emission rates from the source were calculated with airborne data, obtaining errors as low as 1.2%. These errors are low and indicate that the readings collected with the airborne module are comparable to the TSI DRX and could be used to obtain specific emission factors from fugitive emissions for industrial activities. PMID:28216557

  6. Use of Numerical Groundwater Model and Analytical Empirical Orthogonal Function for Calibrating Spatiotemporal pattern of Pumpage, Recharge and Parameter

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Hsu, F. C.; Liu, H. J.

    2016-12-01

    This study develops a novel methodology for spatiotemporal groundwater calibration of large sets of recharge and parameter values by coupling a specialized numerical model with analytical empirical orthogonal functions (EOF). The actual spatiotemporal patterns of groundwater pumpage are estimated by an originally developed back-propagation neural network-based response matrix combined with an analysis of electricity consumption. The spatiotemporal patterns of recharge from surface water and of the hydrogeological parameters (i.e., horizontal hydraulic conductivity and vertical leakance) are calibrated by EOF applied to the simulated error hydrograph of groundwater storage, in order to characterize the multiple error sources and quantify the revised volumes. The objective function of the optimization model minimizes the root mean square error (RMSE) of the simulated storage error percentage across multiple aquifers, subject to mass balance of the groundwater budget and the transient-state governing equation. The established method was applied to the groundwater system of the Chou-Shui River Alluvial Fan for a simulation period from January 2012 to December 2014. The total numbers of hydraulic conductivity, vertical leakance, and surface-water recharge values across four aquifers are 126, 96, and 1080, respectively. Results showed that the RMSE decreased dramatically during the calibration process and converged within six iterations, owing to efficient filtering of the transmission induced by the estimated error and recharge across the boundary. Moreover, the average simulated error percentage of groundwater level corresponding to the calibrated budget variables and parameters in aquifer one is as small as 0.11%. This indicates that the developed methodology not only can effectively detect the flow tendency and error sources in all aquifers to achieve an accurate spatiotemporal calibration, but also can capture the peaks and fluctuations of groundwater level in the shallow aquifer.
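
    Since the calibration relies on an empirical orthogonal function decomposition of the simulated storage-error hydrographs, the EOF step itself is easy to illustrate. The sketch below is not the authors' code; it only shows how EOF modes and their temporal amplitudes can be extracted from a matrix of error hydrographs (time by location) with a singular value decomposition, using synthetic data and hypothetical dimensions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_times, n_cells = 36, 126                    # e.g., monthly records at model cells
    error = rng.normal(size=(n_times, n_cells))   # simulated storage-error hydrographs

    # Remove the temporal mean at each location before decomposition
    anomaly = error - error.mean(axis=0, keepdims=True)

    # SVD: rows of vt are spatial EOF patterns, u*s are the temporal amplitudes (PCs)
    u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
    explained = s**2 / np.sum(s**2)

    eof1 = vt[0]               # leading spatial pattern of the error field
    pc1 = u[:, 0] * s[0]       # its temporal amplitude
    print(f"Leading EOF explains {explained[0]:.1%} of the error variance")
    ```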

  7. 75 FR 8649 - Request for Comments on Methodology for Conducting an Independent Study of the Burden of Patent...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-25

    ...] Request for Comments on Methodology for Conducting an Independent Study of the Burden of Patent-Related... methodologies for performing such a study (Methodology Report). ICF has now provided the USPTO with its Methodology Report, in which ICF recommends methodologies for addressing various topics about estimating the...

  8. Application of inverse dispersion model for estimating volatile organic compounds emitted from the offshore industrial park

    NASA Astrophysics Data System (ADS)

    Tsai, M.; Lee, C.; Yu, H.

    2013-12-01

    In the last 20 years, the Yunlin offshore industrial park has contributed significantly to the economic development of Taiwan; its annual production value reached almost 12% of Taiwan's GDP in 2012. The offshore industrial park has also helped balance urban and rural development in the area. However, it is considered the major source of air pollution for nearby counties, especially through the emission of volatile organic compounds (VOCs). Studies have found that exposure to high levels of some VOCs causes adverse health effects on both humans and ecosystems. Since both the health and ecological effects of air pollution have been the subject of numerous studies in recent years, estimating VOC emissions is a critical issue. Current emission estimation techniques usually rely on emission factors. Because that methodology aggregates equipment activities on the basis of statistical assumptions, the resulting coefficients carry considerable uncertainty. This study attempts to estimate the VOC emissions of the Yunlin offshore industrial park using an inverse atmospheric dispersion model. The inverse modeling approach combines dispersion modeling results computed for a unit emission with observations at air quality stations in Yunlin. The American Meteorological Society-Environmental Protection Agency Regulatory Model (AERMOD) is chosen as the dispersion modeling tool for the study. Observed VOC concentrations are collected by the Taiwanese Environmental Protection Administration (TW EPA). In addition, the study analyzes meteorological data including wind speed, wind direction, pressure, and temperature. VOC emission estimates from the inverse atmospheric dispersion model will be compared to the official statistics released by the Yunlin offshore industrial park. Comparing the concentrations estimated by inverse dispersion modeling with the officially reported values will give a better understanding of the uncertainty of the regulatory methodology. The model results will be discussed in the context of evaluating air pollution exposure in risk assessment.
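
    The core of the inverse step, pairing AERMOD concentrations computed for a unit emission rate with monitored concentrations, reduces to a scaling problem. The sketch below is an illustration under that reading, not the authors' implementation: it estimates a single emission rate by least squares from hypothetical unit-emission model output and station observations.

    ```python
    import numpy as np

    # Hypothetical concentrations at monitoring stations (micrograms/m^3)
    c_unit = np.array([0.8, 1.5, 0.4, 2.1, 1.0])      # AERMOD output for a 1 g/s unit emission
    c_obs = np.array([24.0, 48.0, 11.0, 66.0, 30.0])  # observed VOC enhancements

    # Least-squares estimate of the emission rate Q (g/s) such that Q * c_unit ~ c_obs
    q_hat = np.dot(c_unit, c_obs) / np.dot(c_unit, c_unit)
    residual = c_obs - q_hat * c_unit
    rms = np.sqrt(np.mean(residual**2))
    print(f"Estimated VOC emission rate: {q_hat:.1f} g/s (RMS residual {rms:.1f} ug/m^3)")
    ```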

  9. PhySIC_IST: cleaning source trees to infer more informative supertrees

    PubMed Central

    Scornavacca, Celine; Berry, Vincent; Lefort, Vincent; Douzery, Emmanuel JP; Ranwez, Vincent

    2008-01-01

    Background Supertree methods combine phylogenies with overlapping sets of taxa into a larger one. Topological conflicts frequently arise among source trees for methodological or biological reasons, such as long branch attraction, lateral gene transfers, gene duplication/loss or deep gene coalescence. When topological conflicts occur among source trees, liberal methods infer supertrees containing the most frequent alternative, while veto methods infer supertrees not contradicting any source tree, i.e. discard all conflicting resolutions. When the source trees host a significant number of topological conflicts or have a small taxon overlap, supertree methods of both kinds can propose poorly resolved, hence uninformative, supertrees. Results To overcome this problem, we propose to infer non-plenary supertrees, i.e. supertrees that do not necessarily contain all the taxa present in the source trees, discarding those whose position greatly differs among source trees or for which insufficient information is provided. We detail a variant of the PhySIC veto method called PhySIC_IST that can infer non-plenary supertrees. PhySIC_IST aims at inferring supertrees that satisfy the same appealing theoretical properties as with PhySIC, while being as informative as possible under this constraint. The informativeness of a supertree is estimated using a variation of the CIC (Cladistic Information Content) criterion, that takes into account both the presence of multifurcations and the absence of some taxa. Additionally, we propose a statistical preprocessing step called STC (Source Trees Correction) to correct the source trees prior to the supertree inference. STC is a liberal step that removes the parts of each source tree that significantly conflict with other source trees. Combining STC with a veto method allows an explicit trade-off between veto and liberal approaches, tuned by a single parameter. Performing large-scale simulations, we observe that STC+PhySIC_IST infers much more informative supertrees than PhySIC, while preserving low type I error compared to the well-known MRP method. Two biological case studies on animals confirm that the STC preprocess successfully detects anomalies in the source trees while STC+PhySIC_IST provides well-resolved supertrees agreeing with current knowledge in systematics. Conclusion The paper introduces and tests two new methodologies, PhySIC_IST and STC, that demonstrate the interest in inferring non-plenary supertrees as well as preprocessing the source trees. An implementation of the methods is available at http://www.atgc-montpellier.fr/physic_ist/. PMID:18834542

  10. PhySIC_IST: cleaning source trees to infer more informative supertrees.

    PubMed

    Scornavacca, Celine; Berry, Vincent; Lefort, Vincent; Douzery, Emmanuel J P; Ranwez, Vincent

    2008-10-04

    Supertree methods combine phylogenies with overlapping sets of taxa into a larger one. Topological conflicts frequently arise among source trees for methodological or biological reasons, such as long branch attraction, lateral gene transfers, gene duplication/loss or deep gene coalescence. When topological conflicts occur among source trees, liberal methods infer supertrees containing the most frequent alternative, while veto methods infer supertrees not contradicting any source tree, i.e. discard all conflicting resolutions. When the source trees host a significant number of topological conflicts or have a small taxon overlap, supertree methods of both kinds can propose poorly resolved, hence uninformative, supertrees. To overcome this problem, we propose to infer non-plenary supertrees, i.e. supertrees that do not necessarily contain all the taxa present in the source trees, discarding those whose position greatly differs among source trees or for which insufficient information is provided. We detail a variant of the PhySIC veto method called PhySIC_IST that can infer non-plenary supertrees. PhySIC_IST aims at inferring supertrees that satisfy the same appealing theoretical properties as with PhySIC, while being as informative as possible under this constraint. The informativeness of a supertree is estimated using a variation of the CIC (Cladistic Information Content) criterion, that takes into account both the presence of multifurcations and the absence of some taxa. Additionally, we propose a statistical preprocessing step called STC (Source Trees Correction) to correct the source trees prior to the supertree inference. STC is a liberal step that removes the parts of each source tree that significantly conflict with other source trees. Combining STC with a veto method allows an explicit trade-off between veto and liberal approaches, tuned by a single parameter.Performing large-scale simulations, we observe that STC+PhySIC_IST infers much more informative supertrees than PhySIC, while preserving low type I error compared to the well-known MRP method. Two biological case studies on animals confirm that the STC preprocess successfully detects anomalies in the source trees while STC+PhySIC_IST provides well-resolved supertrees agreeing with current knowledge in systematics. The paper introduces and tests two new methodologies, PhySIC_IST and STC, that demonstrate the interest in inferring non-plenary supertrees as well as preprocessing the source trees. An implementation of the methods is available at: http://www.atgc-montpellier.fr/physic_ist/.

  11. Historical (1750–2014) anthropogenic emissions of reactive gases and aerosols from the Community Emissions Data System (CEDS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoesly, Rachel M.; Smith, Steven J.; Feng, Leyang

    Here, we present a new data set of annual historical (1750–2014) anthropogenic chemically reactive gases (CO, CH4, NH3, NOx, SO2, NMVOCs), carbonaceous aerosols (black carbon – BC, and organic carbon – OC), and CO2 developed with the Community Emissions Data System (CEDS). We improve upon existing inventories with a more consistent and reproducible methodology applied to all emission species, updated emission factors, and recent estimates through 2014. The data system relies on existing energy consumption data sets and regional and country-specific inventories to produce trends over recent decades. All emission species are consistently estimated using the same activity data over all time periods. Emissions are provided on an annual basis at the level of country and sector and gridded with monthly seasonality. These estimates are comparable to, but generally slightly higher than, existing global inventories. Emissions over the most recent years are more uncertain, particularly in low- and middle-income regions where country-specific emission inventories are less available. Future work will involve refining and updating these emission estimates, estimating emissions' uncertainty, and publication of the system as open-source software.

  12. Historical (1750–2014) anthropogenic emissions of reactive gases and aerosols from the Community Emissions Data System (CEDS)

    DOE PAGES

    Hoesly, Rachel M.; Smith, Steven J.; Feng, Leyang; ...

    2018-01-29

    Here, we present a new data set of annual historical (1750–2014) anthropogenic chemically reactive gases (CO, CH4, NH3, NOx, SO2, NMVOCs), carbonaceous aerosols (black carbon – BC, and organic carbon – OC), and CO2 developed with the Community Emissions Data System (CEDS). We improve upon existing inventories with a more consistent and reproducible methodology applied to all emission species, updated emission factors, and recent estimates through 2014. The data system relies on existing energy consumption data sets and regional and country-specific inventories to produce trends over recent decades. All emission species are consistently estimated using the same activity data over all time periods. Emissions are provided on an annual basis at the level of country and sector and gridded with monthly seasonality. These estimates are comparable to, but generally slightly higher than, existing global inventories. Emissions over the most recent years are more uncertain, particularly in low- and middle-income regions where country-specific emission inventories are less available. Future work will involve refining and updating these emission estimates, estimating emissions' uncertainty, and publication of the system as open-source software.

  13. Historical (1750-2014) anthropogenic emissions of reactive gases and aerosols from the Community Emissions Data System (CEDS)

    NASA Astrophysics Data System (ADS)

    Hoesly, Rachel M.; Smith, Steven J.; Feng, Leyang; Klimont, Zbigniew; Janssens-Maenhout, Greet; Pitkanen, Tyler; Seibert, Jonathan J.; Vu, Linh; Andres, Robert J.; Bolt, Ryan M.; Bond, Tami C.; Dawidowski, Laura; Kholod, Nazar; Kurokawa, June-ichi; Li, Meng; Liu, Liang; Lu, Zifeng; Moura, Maria Cecilia P.; O'Rourke, Patrick R.; Zhang, Qiang

    2018-01-01

    We present a new data set of annual historical (1750-2014) anthropogenic chemically reactive gases (CO, CH4, NH3, NOx, SO2, NMVOCs), carbonaceous aerosols (black carbon - BC, and organic carbon - OC), and CO2 developed with the Community Emissions Data System (CEDS). We improve upon existing inventories with a more consistent and reproducible methodology applied to all emission species, updated emission factors, and recent estimates through 2014. The data system relies on existing energy consumption data sets and regional and country-specific inventories to produce trends over recent decades. All emission species are consistently estimated using the same activity data over all time periods. Emissions are provided on an annual basis at the level of country and sector and gridded with monthly seasonality. These estimates are comparable to, but generally slightly higher than, existing global inventories. Emissions over the most recent years are more uncertain, particularly in low- and middle-income regions where country-specific emission inventories are less available. Future work will involve refining and updating these emission estimates, estimating emissions' uncertainty, and publication of the system as open-source software.
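
    The abstract's description, activity data combined with emission factors and then reconciled with country inventories, can be reduced to a small illustration. This is not the CEDS code; it sketches a generic default-emission step and a scaling step toward a hypothetical country inventory value, with fabricated numbers.

    ```python
    # Hypothetical activity data (PJ of coal burned) and emission factor (kt SO2 per PJ)
    activity_pj = {"2000": 850.0, "2010": 1100.0, "2014": 1200.0}
    ef_kt_per_pj = 0.30

    # Default emissions = activity x emission factor
    default_kt = {yr: a * ef_kt_per_pj for yr, a in activity_pj.items()}

    # Scale default emissions so that a chosen year matches a country-reported inventory value
    inventory_2010_kt = 300.0                      # hypothetical reported total for 2010
    scale = inventory_2010_kt / default_kt["2010"]
    scaled_kt = {yr: e * scale for yr, e in default_kt.items()}

    for yr in sorted(scaled_kt):
        print(f"{yr}: default {default_kt[yr]:.0f} kt, scaled {scaled_kt[yr]:.0f} kt SO2")
    ```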

  14. An integrated study to evaluate debris flow hazard in alpine environment

    NASA Astrophysics Data System (ADS)

    Tiranti, Davide; Crema, Stefano; Cavalli, Marco; Deangeli, Chiara

    2018-05-01

    Debris flows are among the most dangerous natural processes affecting the alpine environment due to their magnitude (volume of transported material) and their long runout. The presence of structures and infrastructures on alluvial fans can lead to severe problems in terms of interactions between debris flows and human activities. Risk mitigation in these areas requires identifying the magnitude, triggers, and propagation of debris flows. Here, we propose an integrated methodology to characterize these phenomena. The methodology consists of three complementary procedures. Firstly, we adopt a classification method based on the propensity of the catchment bedrocks to produce clayey-grained material. The classification allows us to identify the most likely rheology of the process. Secondly, we calculate a sediment connectivity index to estimate the topographic control on the possible coupling between the sediment source areas and the catchment channel network. This step allows for the assessment of the debris supply, which is most likely available for the channelized processes. Finally, with the data obtained in the previous steps, we model the propagation and depositional pattern of debris flows with a 3D code based on Cellular Automata. The results of the numerical runs allow us to identify the depositional patterns and the areas potentially involved in the flow processes. This integrated methodology is applied to a test-bed catchment located in the Northwestern Alps. The results indicate that this approach can be regarded as a useful tool to estimate debris-flow-related potential hazard scenarios in an alpine environment expeditiously, without requiring exhaustive knowledge of the investigated catchment, including data on historical debris flow events.

  15. Marginal regression models for clustered count data based on zero-inflated Conway-Maxwell-Poisson distribution with applications.

    PubMed

    Choo-Wosoba, Hyoyoung; Levy, Steven M; Datta, Somnath

    2016-06-01

    Community water fluoridation is an important public health measure to prevent dental caries, but it continues to be somewhat controversial. The Iowa Fluoride Study (IFS) is a longitudinal study on a cohort of Iowa children that began in 1991. The main purposes of this study (http://www.dentistry.uiowa.edu/preventive-fluoride-study) were to quantify fluoride exposures from both dietary and nondietary sources and to associate longitudinal fluoride exposures with dental fluorosis (spots on teeth) and dental caries (cavities). We analyze a subset of the IFS data by a marginal regression model with a zero-inflated version of the Conway-Maxwell-Poisson (ZICMP) distribution for count data exhibiting excessive zeros and a wide range of dispersion patterns. We introduce two estimation methods for fitting a ZICMP marginal regression model. Finite sample behaviors of the estimators and the resulting confidence intervals are studied using extensive simulation studies. We apply our methodologies to the dental caries data. Our novel modeling incorporating zero inflation, clustering, and overdispersion sheds some new light on the effect of community water fluoridation and other factors. We also include a second application of our methodology to a genomic (next-generation sequencing) dataset that exhibits underdispersion. © 2015, The International Biometric Society.
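
    For readers unfamiliar with the zero-inflated Conway-Maxwell-Poisson (ZICMP) distribution, its probability mass function is simple to write down. The sketch below only illustrates the distribution itself, with a truncated normalizing sum evaluated in log space; it is not the authors' estimation method, and the parameter values are hypothetical.

    ```python
    import math

    def cmp_pmf(k, lam, nu, kmax=100):
        """Conway-Maxwell-Poisson pmf: P(X=k) proportional to lam^k / (k!)^nu."""
        log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(kmax + 1)]
        z = sum(math.exp(t) for t in log_terms)          # truncated normalizing constant
        return math.exp(k * math.log(lam) - nu * math.lgamma(k + 1)) / z

    def zicmp_pmf(k, pi, lam, nu):
        """Zero-inflated CMP: extra mass pi at zero, (1 - pi) times the CMP pmf elsewhere."""
        base = cmp_pmf(k, lam, nu)
        return pi + (1 - pi) * base if k == 0 else (1 - pi) * base

    # Hypothetical parameters: pi = zero-inflation weight, nu < 1 gives overdispersion
    pi, lam, nu = 0.3, 2.0, 0.7
    print([round(zicmp_pmf(k, pi, lam, nu), 4) for k in range(6)])
    ```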

  16. Soil Intake Rates Based on Arsenic in Urine Data | Science ...

    EPA Pesticide Factsheets

    The ingestion of soil is a potential source of human exposure to environmental contaminants. Several studies have been conducted to estimate the amount of soil ingested by children. The methodology used in these studies has consisted of a mass balance using measurements of certain tracer elements in the feces and urine. There are many uncertainties associated with this approach. The present study uses an innovative approach for deriving soil intake rates. The study uses data collected for children living near a copper smelter in Washington State in the town of Ruston, and from nearby Vashon and Maury Islands. The age of the children included in the study ranged from 2 to 13 years old. Distribution of soil and dust ingestion by children will be estimated based on arsenic concentrations found in urine, soil and air samples collected during three-day visits to each household in four quarters. External peer review comments did not support the use of data to predict soil intake rates due to data variability and measurement issues. The effort as originally proposed has been terminated but collected data and analysis will be used in a new project to evaluate methodologic issues associated with measurement error and variance. The purpose of this task is to conduct an analysis of soil intake rates using environmental and biological measurements of arsenic.

  17. 210Pb as a tool for establishing sediment chronologies: examples of potentials and limitations of conventional dating models.

    PubMed

    Kirchner, Gerald

    2011-05-01

    For aquatic sediments, the use of (210)Pb originating from the decay of atmospheric (222)Rn is a well-established methodology to estimate sediment ages and sedimentation rates. Traditionally, the measurement of (210)Pb in soils and sediments involved laborious and time-consuming radiochemical separation procedures. Due to the recent development of advanced planar ('n-type') semi-conductors with high efficiencies in the low-energy range which enable the gamma-spectrometric analysis of the 46.5 keV decay line of (210)Pb, sediment dating using this radionuclide has gained renewed interest. In this contribution, potentials and limitations of the (210)Pb methodology and of the models used for estimating sediment ages and sedimentation rates are discussed and illustrated by examples of freshwater and marine sediments. Comparison with the use of (137)Cs shows that the information which may be gained by these two tracers is complementary. As a consequence, both radionuclides should be used in combination for dating of recent sediments. It is shown that for various sedimentation regimes additional information from other sources (e.g. sediment lithology) may be needed to establish a reliable chronology. A strategy for sediment dating using (210)Pb is recommended. Copyright © 2010 Elsevier Ltd. All rights reserved.
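
    One of the conventional dating models alluded to here, the constant rate of supply (CRS) model, converts the unsupported (210)Pb inventory below a given depth into an age. The sketch below is a generic illustration of that relation with fabricated inventory numbers; it is not code from the paper, and the choice of the CRS model as the example is ours.

    ```python
    import math

    LAMBDA_PB210 = math.log(2) / 22.3   # 210Pb decay constant [1/yr], half-life ~22.3 yr

    def crs_age(total_inventory, inventory_below_depth):
        """Constant rate of supply (CRS) model: t = (1/lambda) * ln(A(0) / A(z))."""
        return math.log(total_inventory / inventory_below_depth) / LAMBDA_PB210

    # Hypothetical unsupported 210Pb inventories (Bq/m^2) below successive depths
    a_total = 5000.0
    for depth_cm, a_below in [(2, 4200.0), (5, 3000.0), (10, 1500.0), (20, 400.0)]:
        print(f"depth {depth_cm:>2} cm: age ~ {crs_age(a_total, a_below):.0f} yr")
    ```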

  18. Retrospective dose assessment for the population living in areas of local fallout from the Semipalatinsk Nuclear Test Site Part II: Internal exposure to thyroid.

    PubMed

    Gordeev, Konstantin; Shinkarev, Sergey; Ilyin, Leonid; Bouville, André; Hoshi, Masaharu; Luckyanov, Nickolas; Simon, Steven L

    2006-02-01

    A methodology to assess internal exposure to thyroid from radioiodines for the residents living in settlements located in the vicinity of the Semipalatinsk Nuclear Test Site is described that is the result of many years of research, primarily at the Moscow Institute of Biophysics. This methodology introduces two important concepts. First, the biologically active fraction is defined as the fraction of the total activity on fallout particles with diameter less than 50 microns. That fraction is retained by vegetation and will ultimately result in contamination of dairy products. Second, the relative distance is derived as a dimensionless quantity from information on test yield, maximum height of cloud, and average wind velocity and describes how the biologically active fraction is distributed with distance from the site of the explosion. The parameter is derived in such a way that at locations with equal values of relative distance, the biologically active fraction will be the same for any test. The estimates of internal exposure to thyroid for the residents of Dolon and Kanonerka villages, for which the external exposures were assessed in a companion paper (Gordeev et al. 2006) in this conference, are presented. The main sources of uncertainty in the estimates are identified.

  19. The complex links between governance and biodiversity.

    PubMed

    Barrett, Christopher B; Gibson, Clark C; Hoffman, Barak; McCubbins, Mathew D

    2006-10-01

    We argue that two problems weaken the claims of those who link corruption and the exploitation of natural resources. The first is conceptual and the second is methodological. Studies that use national-level indicators of corruption fail to note that corruption comes in many forms, at multiple levels, that may affect resource use quite differently: negatively, positively, or not at all. Without a clear causal model of the mechanism by which corruption affects resources, one should treat with caution any estimated relationship between corruption and the state of natural resources. Simple, atheoretical models linking corruption measures and natural resource use typically do not account for other important control variables pivotal to the relationship between humans and natural resources. By way of illustration of these two general concerns, we used statistical methods to demonstrate that the findings of a recent, well-known study that posits a link between corruption and decreases in forests and elephants are not robust to simple conceptual and methodological refinements. In particular, once we controlled for a few plausible anthropogenic and biophysical conditioning factors, estimated the effects in changes rather than levels so as not to confound cross-sectional and longitudinal variation, and incorporated additional observations from the same data sources, corruption levels no longer had any explanatory power.

  20. The XMM Cluster Survey: X-ray analysis methodology

    NASA Astrophysics Data System (ADS)

    Lloyd-Davies, E. J.; Romer, A. Kathy; Mehrtens, Nicola; Hosmer, Mark; Davidson, Michael; Sabirli, Kivanc; Mann, Robert G.; Hilton, Matt; Liddle, Andrew R.; Viana, Pedro T. P.; Campbell, Heather C.; Collins, Chris A.; Dubois, E. Naomi; Freeman, Peter; Harrison, Craig D.; Hoyle, Ben; Kay, Scott T.; Kuwertz, Emma; Miller, Christopher J.; Nichol, Robert C.; Sahlén, Martin; Stanford, S. A.; Stott, John P.

    2011-11-01

    The XMM Cluster Survey (XCS) is a serendipitous search for galaxy clusters using all publicly available data in the XMM-Newton Science Archive. Its main aims are to measure cosmological parameters and trace the evolution of X-ray scaling relations. In this paper we describe the data processing methodology applied to the 5776 XMM observations used to construct the current XCS source catalogue. A total of 3675 > 4σ cluster candidates with >50 background-subtracted X-ray counts are extracted from a total non-overlapping area suitable for cluster searching of 410 deg2. Of these, 993 candidates are detected with >300 background-subtracted X-ray photon counts, and we demonstrate that robust temperature measurements can be obtained down to this count limit. We describe in detail the automated pipelines used to perform the spectral and surface brightness fitting for these candidates, as well as to estimate redshifts from the X-ray data alone. A total of 587 (122) X-ray temperatures to a typical accuracy of <40 (<10) per cent have been measured to date. We also present the methodology adopted for determining the selection function of the survey, and show that the extended source detection algorithm is robust to a range of cluster morphologies by inserting mock clusters derived from hydrodynamical simulations into real XMM images. These tests show that a simple isothermal β-profile is sufficient to capture the essential details of the cluster population detected in the archival XMM observations. The redshift follow-up of the XCS cluster sample is presented in a companion paper, together with a first data release of 503 optically confirmed clusters.
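
    The isothermal β-profile used in the surface-brightness fitting has a simple closed form, S(r) = S0 [1 + (r/rc)^2]^(0.5 - 3β). The snippet below is only a generic illustration of that profile with hypothetical parameters; it is not the XCS pipeline code.

    ```python
    import numpy as np

    def beta_profile(r, s0, r_core, beta):
        """Isothermal beta-model surface brightness: S(r) = S0 * [1 + (r/rc)^2]^(0.5 - 3*beta)."""
        return s0 * (1.0 + (r / r_core) ** 2) ** (0.5 - 3.0 * beta)

    # Hypothetical cluster: core radius 150 kpc, canonical beta = 2/3
    r_kpc = np.linspace(0.0, 1000.0, 6)
    print(np.round(beta_profile(r_kpc, s0=1.0, r_core=150.0, beta=2.0 / 3.0), 4))
    ```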

  1. Wavelet-based localization of oscillatory sources from magnetoencephalography data.

    PubMed

    Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C

    2014-08-01

    Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features in physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem in the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spike) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy.

  2. Assessing Emergency Preparedness and Response Capacity Using Community Assessment for Public Health Emergency Response Methodology: Portsmouth, Virginia, 2013.

    PubMed

    Kurkjian, Katie M; Winz, Michelle; Yang, Jun; Corvese, Kate; Colón, Ana; Levine, Seth J; Mullen, Jessica; Ruth, Donna; Anson-Dwamena, Rexford; Bayleyegn, Tesfaye; Chang, David S

    2016-04-01

    For the past decade, emergency preparedness campaigns have encouraged households to meet preparedness metrics, such as having a household evacuation plan and emergency supplies of food, water, and medication. To estimate current household preparedness levels and to enhance disaster response planning, the Virginia Department of Health with remote technical assistance from the Centers for Disease Control and Prevention conducted a community health assessment in 2013 in Portsmouth, Virginia. Using the Community Assessment for Public Health Emergency Response (CASPER) methodology with 2-stage cluster sampling, we randomly selected 210 households for in-person interviews. Households were questioned about emergency planning and supplies, information sources during emergencies, and chronic health conditions. Interview teams completed 180 interviews (86%). Interviews revealed that 70% of households had an emergency evacuation plan, 67% had a 3-day supply of water for each member, and 77% had a first aid kit. Most households (65%) reported that the television was the primary source of information during an emergency. Heart disease (54%) and obesity (40%) were the most frequently reported chronic conditions. The Virginia Department of Health identified important gaps in local household preparedness. Data from the assessment have been used to inform community health partners, enhance disaster response planning, set community health priorities, and influence Portsmouth's Community Health Improvement Plan.
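
    The CASPER design, sample clusters with probability proportional to household counts and then a fixed number of households per cluster, can be illustrated briefly. The sketch below assumes the common 30 clusters by 7 households configuration implied by the 210-household sample; the block data are fabricated and this is not the assessment's actual sampling frame.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical census blocks and their household counts
    n_blocks = 400
    households_per_block = rng.integers(20, 200, size=n_blocks)

    # Stage 1: select 30 clusters with probability proportional to household count
    p = households_per_block / households_per_block.sum()
    clusters = rng.choice(n_blocks, size=30, replace=False, p=p)

    # Stage 2: select 7 households (by index within the block) in each selected cluster
    sample = {int(b): sorted(rng.choice(households_per_block[b], size=7, replace=False).tolist())
              for b in clusters}
    print(f"{len(sample)} clusters x 7 households = {7 * len(sample)} interviews targeted")
    ```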

  3. Off-Highway Gasoline Consumption Estimation Models Used in the Federal Highway Administration Attribution Process: 2008 Updates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Ho-Ling; Davis, Stacy Cagle

    2009-12-01

    This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate the off-highway gasoline consumption and public sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) on the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first model update was conducted during 2002-2003) after they were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information, published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent that is possible on the overall totals, to the current FHWA estimates. Because the NONROAD2005 model was designed for emission estimation purposes (i.e., not for measuring fuel consumption), it covers different equipment populations from those the FHWA models were based on. Thus, a direct comparison generally was not possible in most sectors. As a result, NONROAD2005 data were not used in the 2008 update of the FHWA off-highway models. The quality of fuel use estimates directly affects the data quality in many tables published in the Highway Statistics. Although updates have been made to the Off-Highway Gasoline Use Model and the Public Use Gasoline Model, some challenges remain due to aging model equations and discontinuation of data sources.

  4. Local deformation for soft tissue simulation

    PubMed Central

    Omar, Nadzeri; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2016-01-01

    This paper presents a new methodology to localize the deformation range to improve the computational efficiency for soft tissue simulation. This methodology identifies the local deformation range from the stress distribution in soft tissues due to an external force. A stress estimation method is used based on elastic theory to estimate the stress in soft tissues according to a depth from the contact surface. The proposed methodology can be used with both mass-spring and finite element modeling approaches for soft tissue deformation. Experimental results show that the proposed methodology can improve the computational efficiency while maintaining the modeling realism. PMID:27286482
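
    The stress-estimation step, elastic theory used to estimate stress as a function of depth from the contact surface, can be illustrated with a classical point-load relation. The sketch below assumes a Boussinesq-type solution for the vertical stress under a point force, which is our own assumption rather than the paper's stated formula; it then thresholds the stress to bound a local deformation range, with all values hypothetical.

    ```python
    import math

    def vertical_stress(force_n, r_m, z_m):
        """Boussinesq vertical stress under a point load: sigma_z = 3*F*z^3 / (2*pi*R^5)."""
        big_r = math.hypot(r_m, z_m)
        return 3.0 * force_n * z_m**3 / (2.0 * math.pi * big_r**5)

    # Hypothetical tool force and stress threshold below which nodes are left undeformed
    force, threshold_pa = 2.0, 50.0
    depth = 0.001
    while vertical_stress(force, 0.0, depth) > threshold_pa:
        depth += 0.001
    print(f"Approximate local deformation depth on the load axis: {depth * 100:.1f} cm")
    ```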

  5. Interplanetary Scintillation studies with the Murchison Wide-field Array III: Comparison of source counts and densities for radio sources and their sub-arcsecond components at 162 MHz

    NASA Astrophysics Data System (ADS)

    Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.

    2018-06-01

    We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ˜3 Jy but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find a good agreement for our counts of sources with compact structure, but significant disagreement for point source counts. Using low radio frequency SEDs from the GLEAM survey, we classify point sources as Compact Steep-Spectrum (CSS), flat spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor for compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat spectrum criterion.

  6. Modeling Nonlinear Site Response Uncertainty in Broadband Ground Motion Simulations for the Los Angeles Basin

    NASA Astrophysics Data System (ADS)

    Assimaki, D.; Li, W.; Steidl, J. M.; Schmedes, J.

    2007-12-01

    The assessment of strong motion site response is of great significance, both for mitigating seismic hazard and for performing detailed analyses of earthquake source characteristics. There currently exists, however, a large degree of uncertainty concerning the mathematical model to be employed for the computationally efficient evaluation of local site effects, and the site investigation program necessary to evaluate the nonlinear input model parameters and ensure cost-effective predictions; and while site response observations may provide critical constraints on interpretation methods, the lack of a statistically significant number of in-situ strong motion records prohibits statistical analyses to be conducted and uncertainties to be quantified based entirely on field data. In this paper, we combine downhole observations and broadband ground motion synthetics for characteristic site conditions in the Los Angeles Basin, and investigate the variability in ground motion estimation introduced by the site response assessment methodology. In particular, site-specific regional velocity and attenuation structures are initially compiled using near-surface geotechnical data collected at downhole geotechnical arrays, low-strain velocity and attenuation profiles at these sites obtained by inversion of weak-motion records, and the crustal velocity structure at the corresponding locations obtained from the Southern California Earthquake Center Community Velocity Model. Successively, broadband ground motions are simulated by means of a hybrid low/high-frequency finite source model with correlated random parameters for rupture scenarios of weak, medium, and large magnitude events (M = 3.5-7.5). Observed estimates of site response at the stations of interest are first compared to the ensemble of approximate and incremental nonlinear site response models. Parametric studies are next conducted for each fixed magnitude (fault geometry) scenario by varying the source-to-site distance and source parameters for the ensemble of site conditions. Elastic, equivalent linear and nonlinear simulations are implemented for the deterministic description of the base-model velocity and attenuation structures and nonlinear soil properties, to examine the variability in ground motion predictions as a function of ground motion amplitude and frequency content, and nonlinear site response methodology. The site response modeling uncertainty introduced in the broadband ground motion predictions is reported by means of the COV of site amplification, defined as the ratio of the predicted peak ground acceleration (PGA) and spectral acceleration (SA) at short and long periods to the corresponding intensity measure on the ground surface of a typical NEHRP BC boundary profile (Vs30 = 760 m/s), for the ensemble of approximate and incremental nonlinear models implemented. A frequency index is developed to describe the frequency content of incident ground motion. In conjunction with the rock-outcrop acceleration level, this index is used to identify the site and ground motion conditions where incremental nonlinear analyses should be employed in lieu of approximate methodologies. Finally, the effects of modeling uncertainty in ground response analysis are evaluated in the estimation of site amplification factors, which are successively compared to recently published factors of the New Generation Attenuation Relations (NGA) and the currently employed Seismic Code Provisions (NEHRP).
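
    The reported uncertainty measure, the coefficient of variation (COV) of site amplification across the elastic, equivalent-linear, and nonlinear analyses, is straightforward to compute once the amplification factors are in hand. The values below are hypothetical; the sketch only illustrates the COV calculation described in the abstract, not the study's simulation chain.

    ```python
    import numpy as np

    # Hypothetical PGA on the soil surface from three site-response models (g),
    # and PGA for the reference NEHRP B/C boundary profile (Vs30 = 760 m/s)
    pga_site = {"elastic": 0.42, "equivalent_linear": 0.35, "nonlinear": 0.30}
    pga_reference = 0.25

    amplification = np.array([v / pga_reference for v in pga_site.values()])
    cov = amplification.std(ddof=1) / amplification.mean()
    print(f"Amplification factors: {np.round(amplification, 2)}, COV = {cov:.2f}")
    ```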

  7. Estimation of aquifer radionuclide concentrations by postprocessing of conservative tracer model results

    NASA Astrophysics Data System (ADS)

    Gedeon, M.; Vandersteen, K.; Rogiers, B.

    2012-04-01

    Radionuclide concentrations in aquifers represent an important indicator in estimating the impact of a planned surface disposal for low and medium level short-lived radioactive waste in Belgium, developed by the Belgian Agency for Radioactive Waste and Enriched Fissile Materials (ONDRAF/NIRAS), who also coordinates and leads the corresponding research. Estimating aquifer concentrations for individual radionuclides represents a computational challenge because (a) different retardation values are applied to different hydrogeologic units and (b) sequential decay reactions with radionuclides of various sorption characteristics cause long computational times until a steady-state is reached. The presented work proposes a methodology that substantially reduces the computational effort by postprocessing the results of a prior non-reactive tracer simulation. These advective transport results represent the steady-state concentration-to-source-flux ratio and the break-through time at each modelling cell. These two variables are further used to estimate the individual radionuclide concentrations by (a) scaling the steady-state concentrations to the source fluxes of individual radionuclides; (b) applying the radioactive decay and ingrowth in a decay chain; (c) scaling the travel time by the retardation factor; and (d) applying linear sorption. While all steps except (b) require solving simple linear equations, applying ingrowth of individual radionuclides in decay chains requires solving the differential Bateman equation. This equation needs to be solved once for a unit radionuclide activity at all arrival times found in the numerical grid. The ratios between the parent nuclide activity and the progeny activities are then used in the postprocessing. Results are presented for discrete points and examples of radioactive plume maps are given. These results compare well to the results achieved using a full numerical simulation including the respective chemical reaction processes. Although the proposed method represents a fast way to estimate the radionuclide concentrations without performing time-consuming simulations, its applicability has some limits. The radionuclide source needs to be assumed constant during the period of achieving a steady-state in the model. Otherwise, the source variability of individual radionuclides needs to be modelled using a numerical simulation. However, such a situation only occurs in cases of source variability in a period until steady-state is reached and such a simulation takes a relatively short time. The proposed method enables an effective estimation of individual radionuclide concentrations in the frame of performance assessment of a radioactive waste disposal. Reducing the calculation time to a minimum enables performing sensitivity and uncertainty analyses, testing alternative models, etc., thus enhancing the overall quality of the modelling analysis.
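
    Step (b), applying radioactive decay and ingrowth along a chain, rests on the Bateman equation. The snippet below is a generic implementation of the classical Bateman solution for a chain starting from a unit amount of the parent, valid for distinct decay constants; it is meant only to illustrate the postprocessing step, not to reproduce the authors' code, and the decay constants are hypothetical.

    ```python
    import math

    def bateman_activity(decay_constants, t, parent_atoms=1.0):
        """Activities of each chain member at time t (Bateman solution, distinct lambdas)."""
        activities = []
        for n in range(1, len(decay_constants) + 1):
            lam = decay_constants[:n]
            coeff = parent_atoms
            for li in lam[:-1]:          # product of the preceding decay constants
                coeff *= li
            total = 0.0
            for i, li in enumerate(lam):
                denom = 1.0
                for j, lj in enumerate(lam):
                    if j != i:
                        denom *= (lj - li)
                total += math.exp(-li * t) / denom
            activities.append(lam[-1] * coeff * total)   # activity = lambda_n * N_n(t)
        return activities

    # Hypothetical 3-member chain with decay constants in 1/yr
    lambdas = [1e-3, 5e-4, 2e-4]
    print([f"{a:.3e}" for a in bateman_activity(lambdas, t=1000.0)])
    ```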

  8. From field data to volumes: constraining uncertainties in pyroclastic eruption parameters

    USGS Publications Warehouse

    Klawonn, Malin; Houghton, Bruce F.; Swanson, Don; Fagents, Sarah A.; Wessel, Paul; Wolfe, Cecily J.

    2014-01-01

    In this study, we aim to understand the variability in eruption volume estimates derived from field studies of pyroclastic deposits. We distributed paper maps of the 1959 Kīlauea Iki tephra to 101 volcanologists worldwide, who produced hand-drawn isopachs. Across the returned maps, uncertainty in isopach areas is 7 % across the well-sampled deposit but increases to over 30 % for isopachs that are governed by the largest and smallest thickness measurements. We fit the exponential, power-law, and Weibull functions through the isopach thickness versus area^(1/2) values and find volume estimate variations up to a factor of 4.9 for a single map. Across all maps and methodologies, we find an average standard deviation for a total volume of s = 29 %. The volume uncertainties are largest for the most proximal (s = 62 %) and distal field (s = 53 %) and small for the densely sampled intermediate deposit (s = 8 %). For the Kīlauea Iki 1959 eruption, we find that the deposit beyond the 5-cm isopach contains only 2 % of the total erupted volume, whereas the near-source deposit contains 48 % and the intermediate deposit 50 % of the total volume. Thus, the relative uncertainty within each zone impacts the total volume estimates differently. The observed uncertainties for the different deposit regions in this study illustrate a fundamental problem of estimating eruption volumes: while some methodologies may provide better fits to the isopach data or rely on fewer free parameters, the main issue remains the predictive capabilities of the empirical functions for the regions where measurements are missing.
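
    As a concrete example of one of the empirical functions mentioned, the exponential thinning law T = T0 exp(-k·A^(1/2)) integrates to a total volume of V = 2·T0/k^2 (the relation commonly attributed to Pyle). The sketch below fits that law to fabricated isopach data on log-linear axes; it is not the study's analysis code.

    ```python
    import numpy as np

    # Hypothetical isopach data: enclosed area (km^2) for each thickness contour (m)
    thickness_m = np.array([1.0, 0.5, 0.2, 0.1, 0.05])
    area_km2 = np.array([0.8, 2.5, 7.0, 14.0, 25.0])

    # Fit ln(T) = ln(T0) - k * sqrt(A); the slope gives k, the intercept gives T0
    sqrt_area = np.sqrt(area_km2)
    slope, intercept = np.polyfit(sqrt_area, np.log(thickness_m), 1)
    k, t0 = -slope, np.exp(intercept)

    # Exponential-thinning total volume: V = 2 * T0 / k^2, here in units of m * km^2
    volume_km3 = 2.0 * t0 / k**2 * 1.0e-3   # convert m * km^2 to km^3
    print(f"T0 = {t0:.2f} m, k = {k:.2f} /km, volume ~ {volume_km3:.4f} km^3")
    ```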

  9. Integrating risk assessment and life cycle assessment: a case study of insulation.

    PubMed

    Nishioka, Yurika; Levy, Jonathan I; Norris, Gregory A; Wilson, Andrew; Hofstetter, Patrick; Spengler, John D

    2002-10-01

    Increasing residential insulation can decrease energy consumption and provide public health benefits, given changes in emissions from fuel combustion, but also has cost implications and ancillary risks and benefits. Risk assessment or life cycle assessment can be used to calculate the net impacts and determine whether more stringent energy codes or other conservation policies would be warranted, but few analyses have combined the critical elements of both methodologies. In this article, we present the first portion of a combined analysis, with the goal of estimating the net public health impacts of increasing residential insulation for new housing from current practice to the latest International Energy Conservation Code (IECC 2000). We model state-by-state residential energy savings and evaluate particulate matter less than 2.5 µm in diameter (PM2.5), NOx, and SO2 emission reductions. We use past dispersion modeling results to estimate reductions in exposure, and we apply concentration-response functions for premature mortality and selected morbidity outcomes using current epidemiological knowledge of effects of PM2.5 (primary and secondary). We find that an insulation policy shift would save 3 × 10^14 British thermal units (3 × 10^17 J) over a 10-year period, resulting in reduced emissions of 1,000 tons of PM2.5, 30,000 tons of NOx, and 40,000 tons of SO2. These emission reductions yield an estimated 60 fewer fatalities during this period, with the geographic distribution of health benefits differing from the distribution of energy savings because of differences in energy sources, population patterns, and meteorology. We discuss the methodology to be used to integrate life cycle calculations, which can ultimately yield estimates that can be compared with costs to determine the influence of external costs on benefit-cost calculations.
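
    The health-impact step, applying a concentration-response function for PM2.5 to an exposed population, is commonly expressed as a log-linear relation. The sketch below uses that standard form with entirely hypothetical inputs; it is not the article's model and the coefficients are not its estimates.

    ```python
    import math

    def excess_deaths(baseline_rate, beta, delta_pm25, population):
        """Log-linear concentration-response: delta = y0 * (1 - exp(-beta * dC)) * pop."""
        return baseline_rate * (1.0 - math.exp(-beta * delta_pm25)) * population

    # Hypothetical inputs: annual all-cause mortality rate, C-R coefficient per ug/m^3,
    # PM2.5 reduction from lower emissions, and exposed population
    avoided = excess_deaths(baseline_rate=0.008, beta=0.006, delta_pm25=0.1, population=5.0e6)
    print(f"Avoided premature deaths per year: {avoided:.1f}")
    ```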

  10. Bigger is Better, but at What Cost? Estimating the Economic Value of Incremental Data Assets.

    PubMed

    Dalessandro, Brian; Perlich, Claudia; Raeder, Troy

    2014-06-01

    Many firms depend on third-party vendors to supply data for commercial predictive modeling applications. An issue that has received very little attention in the prior research literature is the estimation of a fair price for purchased data. In this work we present a methodology for estimating the economic value of adding incremental data to predictive modeling applications and present two case studies. The methodology starts with estimating the effect that incremental data has on model performance in terms of common classification evaluation metrics. This effect is then translated into economic units, which gives an expected economic value that the firm might realize with the acquisition of a particular data asset. With this estimate a firm can then set a data acquisition price that targets a particular return on investment. This article presents the methodology in full detail and illustrates it in the context of two marketing case studies.

  11. Reliability study on high power 638-nm triple emitter broad area laser diode

    NASA Astrophysics Data System (ADS)

    Yagi, T.; Kuramoto, K.; Kadoiwa, K.; Wakamatsu, R.; Miyashita, M.

    2016-03-01

    Reliabilities of the 638-nm triple emitter broad area laser diode (BA-LD) with the window-mirror structure were studied. A methodology to estimate the mean time to failure (MTTF) due to catastrophic optical mirror degradation (COMD) within a reasonable aging duration was newly proposed. The power at which the LD failed due to COMD (PCOMD) was measured for the aged LDs under several aging conditions. It was revealed that the PCOMD was proportional to the logarithm of the aging duration, and the MTTF due to COMD (MTTF(COMD)) could be estimated by using this relation. The MTTF(COMD) estimated by the methodology with an aging duration of approximately 2,000 hours was consistent with that estimated by the long-term aging. By using this methodology, the MTTF of the BA-LD was estimated to exceed 100,000 hours under an output of 2.5 W and a duty cycle of 30%.
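
    The proposed estimation, PCOMD falling linearly with the logarithm of aging time and extrapolated to the operating power to obtain MTTF(COMD), can be illustrated with a small fit. The data below are fabricated; the sketch only shows the extrapolation idea, not the paper's measurements.

    ```python
    import numpy as np

    # Hypothetical aged-device data: aging time (h) and power at which COMD occurred (W)
    aging_hours = np.array([100.0, 300.0, 1000.0, 2000.0])
    p_comd_w = np.array([7.8, 7.2, 6.6, 6.3])

    # Linear fit of P_COMD against ln(t):  P_COMD = a + b * ln(t)
    b, a = np.polyfit(np.log(aging_hours), p_comd_w, 1)

    # MTTF(COMD): time at which the extrapolated P_COMD drops to the operating power
    p_operating = 2.5   # W
    mttf_hours = np.exp((p_operating - a) / b)
    print(f"Estimated MTTF(COMD) at {p_operating} W: {mttf_hours:.2e} h")
    ```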

  12. Statistical Methodology for Assigning Emissions to Industries in the United States, Revised Estimates: 1970 to 1997 (2001)

    EPA Pesticide Factsheets

    This report presents the results of a study that develops a methodology to assign emissions to the manufacturing and nonmanufacturing industries that comprise the industrial sector of the EPA’s national emission estimates for 1970 to 1997.

  13. Combining tracer flux ratio methodology with low-flying aircraft measurements to estimate dairy farm CH4 emissions

    NASA Astrophysics Data System (ADS)

    Daube, C.; Conley, S.; Faloona, I. C.; Yacovitch, T. I.; Roscioli, J. R.; Morris, M.; Curry, J.; Arndt, C.; Herndon, S. C.

    2017-12-01

    Livestock activity, enteric fermentation of feed and anaerobic digestion of waste, contributes significantly to the methane budget of the United States (EPA, 2016). Studies question the reported magnitude of these methane sources (Miller et al., 2013), calling for more detailed research of agricultural animals (Hristov, 2014). Tracer flux ratio is an attractive experimental method to bring to this problem because it does not rely on estimates of atmospheric dispersion. Collection of data occurred during one week at two dairy farms in central California (June 2016). Each farm varied in size, layout, head count, and general operation. The tracer flux ratio method involves releasing ethane on-site with a known flow rate to serve as a tracer gas. Downwind mixed enhancements in ethane (from the tracer) and methane (from the dairy) were measured, and their ratio used to infer the unknown methane emission rate from the farm. An instrumented van drove transects downwind of each farm on public roads while tracer gases were released on-site, employing the tracer flux ratio methodology to assess simultaneous methane and tracer gas plumes. Flying circles around each farm, a small instrumented aircraft made measurements to perform a mass balance evaluation of methane gas. In the course of these two different methane quantification techniques, we were able to validate yet a third method: tracer flux ratio measured via aircraft. Ground-based tracer release rates were applied to the aircraft-observed methane-to-ethane ratios, yielding whole-site methane emission rates. Never before has the tracer flux ratio method been executed with aircraft measurements. Estimates from this new application closely resemble results from the standard ground-based technique to within their respective uncertainties. Incorporating this new dimension to the tracer flux ratio methodology provides additional context for local plume dynamics and validation of both ground and flight-based data.
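
    The tracer flux ratio calculation itself is compact: the unknown methane emission rate is the known ethane release rate multiplied by the downwind molar enhancement ratio, converted back to mass units. The numbers below are hypothetical and the function is only a sketch of that relation, not the study's processing code.

    ```python
    M_CH4, M_C2H6 = 16.04, 30.07   # molar masses, g/mol

    def methane_emission_rate(tracer_release_kg_h, dch4_ppb, dc2h6_ppb):
        """Tracer flux ratio: Q_CH4 = Q_C2H6 * (dCH4/dC2H6) * (M_CH4/M_C2H6)."""
        molar_ratio = dch4_ppb / dc2h6_ppb   # mole-fraction enhancements, units cancel
        return tracer_release_kg_h * molar_ratio * (M_CH4 / M_C2H6)

    # Hypothetical plume transect: integrated enhancements downwind of the dairy
    q_ch4 = methane_emission_rate(tracer_release_kg_h=2.0, dch4_ppb=450.0, dc2h6_ppb=25.0)
    print(f"Estimated whole-site CH4 emission rate: {q_ch4:.1f} kg/h")
    ```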

  14. Assessment of exposure to indoor air contaminants from combustion sources: methodology and application.

    PubMed

    Leaderer, B P; Zagraniski, R T; Berwick, M; Stolwijk, J A

    1986-08-01

    A methodology for assessing indoor air pollutant exposures is presented, with specific application to unvented combustion by-products. This paper describes the method as applied to a study of acute respiratory illness associated with the use of unvented kerosene space heaters in 333 residences in the New Haven, Connecticut, area from September 1982 to April 1983. The protocol serves as a prototype for a nested design of exposure assessment which could be applied to large-scale field studies of indoor air contaminant levels. Questionnaires, secondary records, and several methods of air monitoring offer a reliable method of estimating environmental exposures for assessing associations with health effects at a reasonable cost. Indoor to outdoor ratios of NO2 concentrations were found to be 0.58 +/- 0.31 for residences without known sources of NO2. Levels of NO2 were found to be comparable for homes with a kerosene heater only and those with a gas cooking stove only. Homes with a kerosene heater and a gas stove had average two-week NO2 levels approximately double those with only one source. Presence of tobacco smokers had a small but significant impact on indoor NO2 levels. Two-week average levels of indoor NO2 were found to be excellent predictors of total personal NO2 exposure for a small sample of adults. Residences with kerosene space heaters had SO2 levels corresponding to the number of hours of heater use and the sulfur content of the fuel. Formaldehyde levels were found to be low and not related to unvented combustion sources. NO2, SO2, and CO2 levels measured in some of the residences were found to exceed those levels specified in current national health standards.

  15. Development of an Information Fusion System for Engine Diagnostics and Health Management

    NASA Technical Reports Server (NTRS)

    Volponi, Allan J.; Brotherton, Tom; Luppold, Robert; Simon, Donald L.

    2004-01-01

    Aircraft gas-turbine engine data are available from a variety of sources including on-board sensor measurements, maintenance histories, and component models. An ultimate goal of Propulsion Health Management (PHM) is to maximize the amount of meaningful information that can be extracted from disparate data sources to obtain comprehensive diagnostic and prognostic knowledge regarding the health of the engine. Data Fusion is the integration of data or information from multiple sources, to achieve improved accuracy and more specific inferences than can be obtained from the use of a single sensor alone. The basic tenet underlying the data/information fusion concept is to leverage all available information to enhance diagnostic visibility, increase diagnostic reliability and reduce the number of diagnostic false alarms. This paper describes a basic PHM Data Fusion architecture being developed in alignment with the NASA C17 Propulsion Health Management (PHM) Flight Test program. The challenge of how to maximize the meaningful information extracted from disparate data sources to obtain enhanced diagnostic and prognostic information regarding the health and condition of the engine is the primary goal of this endeavor. To address this challenge, NASA Glenn Research Center (GRC), NASA Dryden Flight Research Center (DFRC) and Pratt & Whitney (P&W) have formed a team with several small innovative technology companies to plan and conduct a research project in the area of data fusion as applied to PHM. Methodologies being developed and evaluated have been drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation, and fuzzy logic. This paper will provide a broad overview of this work, discuss some of the methodologies employed and give some illustrative examples.

  16. Sources of particulate matter components in the Athabasca oil sands region: investigation through a comparison of trace element measurement methodologies

    NASA Astrophysics Data System (ADS)

    Phillips-Smith, Catherine; Jeong, Cheol-Heon; Healy, Robert M.; Dabek-Zlotorzynska, Ewa; Celo, Valbona; Brook, Jeffrey R.; Evans, Greg

    2017-08-01

    The province of Alberta, Canada, is home to three oil sands regions which, combined, contain the third largest deposit of oil in the world. Of these, the Athabasca oil sands region is the largest. As part of Environment and Climate Change Canada's program in support of the Joint Canada-Alberta Implementation Plan for Oil Sands Monitoring, concentrations of trace elements in PM2.5 (particulate matter smaller than 2.5 µm in diameter) were measured through two campaigns that involved different methodologies: a long-term filter campaign and a short-term intensive campaign. In the long-term campaign, 24 h filter samples were collected once every 6 days over a 2-year period (December 2010-November 2012) at three air monitoring stations in the regional municipality of Wood Buffalo. For the intensive campaign (August 2013), hourly measurements were made with an online instrument at one air monitoring station; daily filter samples were also collected. The hourly and 24 h filter data were analyzed individually using positive matrix factorization. Seven emission sources of PM2.5 trace elements were thereby identified: two types of upgrader emissions, soil, haul road dust, biomass burning, and two sources of mixed origin. The upgrader emissions, soil, and haul road dust sources were identified by both methodologies; each methodology also identified a mixed source, but the two mixed sources exhibited more differences than similarities. The second upgrader emissions and biomass burning sources were only resolved by the hourly and filter methodologies, respectively. The similarity of the receptor modeling results from the two methodologies provided reassurance as to the identity of the sources. Overall, much of the PM2.5-related trace element content was found to be anthropogenic, or at least to be aerosolized through anthropogenic activities. These emissions may in part explain the previously reported higher levels of trace elements in snow, water, and biota samples collected near the oil sands operations.
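
    Positive matrix factorization (PMF) weights the fit by per-sample measurement uncertainties; as a rough stand-in, the sketch below uses ordinary non-negative matrix factorization (scikit-learn's NMF) on a synthetic species-by-sample matrix to show how such data decompose into source profiles and time-varying contributions. The data, factor count, and solver settings are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic data: 200 hourly samples x 10 trace elements, mixed from 3 "sources".
true_profiles = rng.random((3, 10))                   # source chemical profiles
true_contributions = rng.gamma(2.0, 1.0, (200, 3))    # time series of source strengths
X = true_contributions @ true_profiles + rng.random((200, 10)) * 0.05

# Unweighted NMF as a stand-in for PMF: X ~ G @ F, with G, F >= 0.
model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)   # factor contributions (samples x factors)
F = model.components_        # factor profiles (factors x elements)

print("reconstruction error:", model.reconstruction_err_)
print("factor profiles (arbitrary scale):")
print(np.round(F, 2))
```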

  17. Estimation of the limit of detection using information theory measures.

    PubMed

    Fonollosa, Jordi; Vergara, Alexander; Huerta, Ramón; Marco, Santiago

    2014-01-31

    Definitions of the limit of detection (LOD) based on the probability of false positive and/or false negative errors have been proposed over the past years. Although such definitions are straightforward and valid for any kind of analytical system, proposed methodologies to estimate the LOD are usually simplified to signals with Gaussian noise. Additionally, there is a general misconception that two systems with the same LOD provide the same amount of information on the source regardless of the prior probability of presenting a blank/analyte sample. Based upon an analogy between an analytical system and a binary communication channel, in this paper we show that the amount of information that can be extracted from an analytical system depends on the probability of presenting the two different possible states. We propose a new definition of LOD utilizing information theory tools that deals with noise of any kind and allows the introduction of prior knowledge easily. Unlike most traditional LOD estimation approaches, the proposed definition is based on the amount of information that the chemical instrumentation system provides on the chemical information source. Our findings indicate that the benchmark of analytical systems based on the ability to provide information about the presence/absence of the analyte (our proposed approach) is a more general and proper framework, while converging to the usual values when dealing with Gaussian noise. Copyright © 2013 Elsevier B.V. All rights reserved.
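
    As a small worked example of the information-theoretic view described above, the sketch below treats the analytical system as a binary channel with assumed false-positive and false-negative rates and computes the mutual information between the true state (blank/analyte) and the detector decision for different prior probabilities; the rates and priors are illustrative only.

```python
import numpy as np

def mutual_information(p_analyte, false_pos, false_neg):
    """Mutual information (bits) between the true state (blank/analyte) and the
    detector decision, modeling the analytical system as a binary channel."""
    # Joint distribution over (truth, decision).
    p = np.array([
        [(1 - p_analyte) * (1 - false_pos), (1 - p_analyte) * false_pos],  # truth = blank
        [p_analyte * false_neg,             p_analyte * (1 - false_neg)],  # truth = analyte
    ])
    px = p.sum(axis=1, keepdims=True)   # marginal of truth
    py = p.sum(axis=0, keepdims=True)   # marginal of decision
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (px * py)), 0.0)
    return terms.sum()

# Two systems with the same error rates convey different amounts of information
# when the prior probability of an analyte-containing sample changes.
for prior in (0.5, 0.1, 0.01):
    print(prior, round(mutual_information(prior, false_pos=0.05, false_neg=0.05), 4))
```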

  18. Estimation of waste water treatment plant methane emissions: methodology and results from a short campaign

    NASA Astrophysics Data System (ADS)

    Yver-Kwok, C. E.; Müller, D.; Caldow, C.; Lebegue, B.; Mønster, J. G.; Rella, C. W.; Scheutz, C.; Schmidt, M.; Ramonet, M.; Warneke, T.; Broquet, G.; Ciais, P.

    2013-10-01

    This paper describes different methods to estimate methane emissions at different scales. The methods are applied to a waste water treatment plant (WWTP) located in Valence, France. We show that Fourier Transform Infrared (FTIR) measurements as well as Cavity Ring-Down Spectroscopy (CRDS) can be used to measure emissions from the process scale to the regional scale. To estimate the total emissions, we investigate a tracer release method (using C2H2) and the Radon tracer method (using 222Rn). For process-scale emissions, both tracer release and chamber techniques were used. We show that the tracer release method is suitable for quantifying facility- and some process-scale emissions, while the Radon tracer method encompasses not only the treatment station but also a large area around it; the Radon tracer method is thus more representative of the regional emissions around the city. Uncertainties for each method are described. Applying the methods to CH4 emissions, we find that the main source of emissions from the plant could not be identified with certainty during this short campaign, although it is likely to be the solid sludge. Overall, the waste water treatment plant represents a small part (3%) of the methane emissions of the city of Valence and its surroundings, which is in agreement with the national inventories.
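
    The tracer release calculation itself is a simple ratio; the sketch below shows the arithmetic with invented numbers (not the campaign's data): the CH4 emission rate follows from the known C2H2 release rate, the ratio of plume-integrated mole-fraction enhancements, and the ratio of molar masses.

```python
# Minimal sketch of the tracer release ratio calculation (values are illustrative,
# not from the campaign). Acetylene (C2H2) is released at a known rate next to the
# source; downwind, the plume-integrated excesses of CH4 and C2H2 above background
# give the CH4 emission rate.
M_CH4, M_C2H2 = 16.04, 26.04          # g/mol
q_tracer_kg_per_h = 0.50               # known C2H2 release rate
delta_ch4_ppb = 85.0                   # plume-integrated CH4 enhancement (mole fraction)
delta_c2h2_ppb = 12.0                  # plume-integrated C2H2 enhancement

q_ch4_kg_per_h = q_tracer_kg_per_h * (delta_ch4_ppb / delta_c2h2_ppb) * (M_CH4 / M_C2H2)
print(f"estimated CH4 emission: {q_ch4_kg_per_h:.2f} kg/h")
```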

  19. Extreme prices in electricity balancing markets from an approach of statistical physics

    NASA Astrophysics Data System (ADS)

    Mureddu, Mario; Meyer-Ortmanns, Hildegard

    2018-01-01

    An increase in energy production from renewable energy sources is viewed as a crucial achievement in most industrialized countries. The higher variability of power production via renewables leads to a rise in ancillary service costs over the power system, in particular costs within the electricity balancing markets, mainly due to an increased number of extreme price spikes. This study analyzes the impact of an increased share of renewable energy sources on the behavior of price and volumes of the Italian balancing market. Starting from configurations of load and power production, which guarantee a stable performance, we implement fluctuations in the load and in renewables; in particular we artificially increase the contribution of renewables as compared to conventional power sources to cover the total load. We then determine the amount of requested energy in the balancing market and its fluctuations, which are induced by production and consumption. Within an approach of agent-based modeling we estimate the resulting energy prices and costs. While their average values turn out to be only slightly affected by an increased contribution from renewables, the probability for extreme price events is shown to increase along with undesired peaks in the costs. Our methodology provides a tool for estimating outliers in prices obtained in the energy balancing market, once data of consumption, production and their typical fluctuations are provided.

  20. Pseudo-spectral methodology for a quantitative assessment of the cover of in-stream vegetation in small streams

    NASA Astrophysics Data System (ADS)

    Hershkovitz, Yaron; Anker, Yaakov; Ben-Dor, Eyal; Schwartz, Guy; Gasith, Avital

    2010-05-01

    In-stream vegetation is a key component of many fluvial ecosystems, having cascading effects on stream conditions and biotic structure. Traditionally, ground-level surveys (e.g. grid and transect analyses) are used for estimating the cover of aquatic macrophytes. Nonetheless, this approach is highly time consuming and usually yields information that is practically limited to habitat and sub-reach scales. In contrast, remote-sensing techniques (e.g. satellite imagery and airborne photography) enable collection of large datasets over section, stream and basin scales in a relatively short time and at reasonable cost. However, the commonly used spatial resolution (1 m) is often inadequate for examining aquatic vegetation at habitat or sub-reach scales. We examined the utility of a pseudo-spectral methodology using RGB digital photography for estimating the cover of in-stream vegetation in a small Mediterranean-climate stream, and compared its estimates with those obtained by a traditional ground-level grid survey and by an airborne hyper-spectral remote-sensing survey (AISA-ES). The study was conducted along a 2 km section of an intermittent stream (Taninim stream, Israel). When studied, the stream was dominated by patches of watercress (Nasturtium officinale) and mats of filamentous algae (Cladophora glomerata). The extent of vegetation cover at the habitat and section scales (10^0 and 10^4 m, respectively) was estimated by the pseudo-spectral methodology, using an airborne Roli camera with a Phase-One P 45 (39 MP) CCD image acquisition unit. The swaths were taken at an elevation of about 460 m, giving a spatial resolution of about 4 cm (nadir). For measuring vegetation cover at the section scale (10^4 m) we also used a 'push-broom' AISA-ES hyper-spectral swath with a sensor configuration of 182 bands (350-2500 nm), flown at an elevation of ca. 1,200 m (i.e. a spatial resolution of ca. 1 m). Simultaneously with every swath, we used an Analytical Spectral Device (ASD) to measure hyper-spectral signatures (2150-band configuration; 350-2500 nm) of selected ground-level targets (located by GPS): soil, water, vegetation (common reed, watercress, filamentous algae) and standard EVA foam colored sheets (red, green, blue, black and white). Processing and analysis of the data were performed on an ITT ENVI platform. The hyper-spectral image underwent radiometric calibration according to the flight and sensor calibration parameters on the CALIGEO platform, and the raw DN scale was converted to radiance. A ground-level visual survey of vegetation cover and height was applied at the habitat scale (10^0 m) by placing 1 m2 netted grids (10 x 10 cm cells) along 'bank-to-bank' transects (in triplicate). Estimates of plant cover obtained by the pseudo-spectral methodology at the habitat scale were 35-61% for the watercress, 0.4-25% for the filamentous algae and 27-51% for plant-free patches; the respective estimates by the ground-level visual survey were 26-50%, 14-43% and 36-50%. The pseudo-spectral methodology also yielded estimates at the section scale (10^4 m) of ca. 39% for the watercress, ca. 32% for the filamentous algae and 6% for plant-free patches; the respective estimates obtained by the hyper-spectral swath were 38%, 26% and 8%. Validation against ground-level measurements showed that the pseudo-spectral methodology gives reasonably good estimates of in-stream plant cover. Therefore, this methodology can serve as a substitute for ground-level estimates at small stream scales and for the lower resolution hyper-spectral methodology at larger scales.
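
    The study's processing was carried out on an ENVI platform; purely as an illustration of how a per-pixel cover fraction can be derived from an RGB swath, the sketch below classifies vegetation with an excess-green index and reports the vegetated fraction. The index, threshold, and synthetic scene are assumptions, not the paper's workflow.

```python
import numpy as np

def vegetation_cover_fraction(rgb, threshold=0.05):
    """Rough cover estimate from an RGB image (H x W x 3, values 0-1) using the
    excess-green index ExG = 2g - r - b on chromatic coordinates."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2 * g - r - b
    vegetated = exg > threshold
    return vegetated.mean(), vegetated

# Synthetic 100 x 100 scene: left half "vegetation-like", right half "soil-like".
scene = np.zeros((100, 100, 3))
scene[:, :50] = [0.20, 0.45, 0.15]
scene[:, 50:] = [0.40, 0.35, 0.30]
fraction, mask = vegetation_cover_fraction(scene)
print(f"estimated cover: {fraction:.0%}")
```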

  1. A Review of Issues Related to Data Acquisition and Analysis in EEG/MEG Studies

    PubMed Central

    Puce, Aina; Hämäläinen, Matti S.

    2017-01-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) are non-invasive electrophysiological methods, which record electric potentials and magnetic fields due to electric currents in synchronously-active neurons. With MEG being more sensitive to neural activity from tangential currents and EEG being able to detect both radial and tangential sources, the two methods are complementary. Over the years, neurophysiological studies have changed considerably: high-density recordings are becoming de rigueur; there is interest in both spontaneous and evoked activity; and sophisticated artifact detection and removal methods are available. Improved head models for source estimation have also increased the precision of the current estimates, particularly for EEG and combined EEG/MEG. Because of their complementarity, more investigators are beginning to perform simultaneous EEG/MEG studies to gain more complete information about neural activity. Given the increase in methodological complexity in EEG/MEG, it is important to gather data that are of high quality and that are as artifact free as possible. Here, we discuss some issues in data acquisition and analysis of EEG and MEG data. Practical considerations for different types of EEG and MEG studies are also discussed. PMID:28561761

  2. Basis for the power supply reliability study of the 1 MW neutron source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGhee, D.G.; Fathizadeh, M.

    1993-07-01

    The Intense Pulsed Neutron Source (IPNS) upgrade to 1 MW requires new power supply designs. This paper describes the tools and the methodology needed to assess the reliability of the power supplies. Both the design and operation of the power supplies in the synchrotron will be taken into account. To develop a reliability budget, the experiments to be conducted with this accelerator are reviewed, and data is collected on the number and duration of interruptions possible before an experiment is required to start over. Once the budget is established, several accelerators of this type will be examined. The budget is allocated to the different accelerator systems based on their operating experience. The accelerator data is usually in terms of machine availability and system down time. It takes into account mean time to failure (MTTF), time to diagnose, time to repair or replace the failed components, and time to get the machine back online. These estimated times are used as baselines for the design. Even though we are in the early stage of design, available data can be analyzed to estimate the MTTF for the power supplies.
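
    A reliability budget of this kind ultimately reduces to bookkeeping with MTTF and restore times; the sketch below shows one hypothetical way such numbers combine into subsystem and machine availability for a series configuration. The subsystem names and figures are placeholders, not IPNS data.

```python
# Minimal sketch of the kind of bookkeeping behind a reliability budget (numbers
# are illustrative, not IPNS data): per-system availability from MTTF and
# mean time to restore (diagnose + repair + recovery), combined in series.
systems = {
    # name: (MTTF in hours, mean time to restore in hours)
    "ring magnet supply": (2000.0, 4.0),
    "injection supply":   (5000.0, 2.0),
    "rf system":          (1500.0, 3.0),
}

total_availability = 1.0
for name, (mttf, mttr) in systems.items():
    availability = mttf / (mttf + mttr)
    total_availability *= availability        # series: all systems must be up
    print(f"{name:22s} A = {availability:.4f}")

print(f"machine availability  A = {total_availability:.4f}")
print(f"expected downtime per 5000 h run: {5000 * (1 - total_availability):.0f} h")
```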

  3. Emission from open burning of municipal solid waste in India.

    PubMed

    Kumari, Kanchan; Kumar, Sunil; Rajagopal, Vineel; Khare, Ankur; Kumar, Rakesh

    2017-07-27

    Open burning of Municipal Solid Waste (MSW) is a potential non-point source of emissions and is of particular concern in developing countries such as India. Lack of awareness of the environmental impacts of open burning, and of the fact that open burning emits carcinogenic substances, is a major hindrance to an appropriate municipal solid waste management system in India. The paper highlights open burning practices for MSW in India and the current and projected emissions of 10 major pollutants (dioxins, furans, particulate matter, carbon monoxide, sulphur oxides, nitrogen oxides, benzene, toluene, ethyl benzene and 1-hexene) released by the open burning of MSW. The waste-to-energy potential of MSW was also estimated, assuming effective biological and thermal conversion techniques. Data on population, MSW generation, and collection efficiency were compiled for 29 States and 7 Union Territories, and statistical techniques were applied to estimate current and projected emissions of the various pollutants. Emissions of the 10 pollutants were estimated following the methodology prescribed in the 2006 Intergovernmental Panel on Climate Change Guidelines for National Greenhouse Gas Inventories. The study revealed that people living in metropolitan cities are more affected by emissions from open burning.
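
    The IPCC-style estimate is essentially activity data multiplied by emission factors; the sketch below illustrates the arithmetic for a hypothetical city, with placeholder generation rates, burned fractions, and emission factors rather than the paper's values.

```python
# Illustrative emission estimate in the spirit of the activity-data x emission-factor
# approach (all numbers are placeholders, not the paper's values).
population = 1_000_000                 # people in a hypothetical city
msw_per_capita_kg_day = 0.45           # MSW generation rate
fraction_openly_burned = 0.20          # share of generated waste burned in the open

emission_factors_g_per_kg = {          # assumed emission factors, g pollutant / kg waste burned
    "PM":  8.0,
    "CO": 42.0,
    "SO2": 0.5,
    "NOx": 3.0,
}

waste_burned_t_per_year = population * msw_per_capita_kg_day * 365 * fraction_openly_burned / 1000.0
for pollutant, ef in emission_factors_g_per_kg.items():
    tonnes = waste_burned_t_per_year * ef / 1000.0   # g/kg equals kg/t, then kg -> t
    print(f"{pollutant:4s}: {tonnes:8.1f} t/yr")
print(f"waste burned: {waste_burned_t_per_year:,.0f} t/yr")
```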

  4. On the unified estimation of turbulence eddy dissipation rate using Doppler cloud radars and lidars: Radar and Lidar Turbulence Estimation

    DOE PAGES

    Borque, Paloma; Luke, Edward; Kollias, Pavlos

    2016-05-27

    Coincident profiling observations from Doppler lidars and radars are used to estimate the turbulence energy dissipation rate (ε) using three different data sources: (i) Doppler radar velocity (DRV), (ii) Doppler lidar velocity (DLV), and (iii) Doppler radar spectrum width (DRW) measurements. The agreement between the derived ε estimates is examined at the cloud base height of stratiform warm clouds. Collocated ε estimates based on power spectral analysis of DRV and DLV measurements show good agreement (correlation coefficients of 0.86 and 0.78 for the two cases analyzed here) during both drizzling and nondrizzling conditions. This suggests that unified (below and above cloud base) time-height estimates of ε in cloud-topped boundary layer conditions can be produced, and that the eddy dissipation rate can be estimated throughout the cloud layer without the constraint that clouds be nonprecipitating. Eddy dissipation rate estimates based on DRW measurements compare well with the estimates based on Doppler velocity, but their performance deteriorates as precipitation-size particles are introduced into the radar volume and broaden the DRW values. Based on this finding, a methodology to estimate the Doppler spectral broadening due to the spread of the drop size distribution is presented. Furthermore, the uncertainties in ε introduced by signal-to-noise conditions, the estimation of the horizontal wind, the selection of the averaging time window, and the presence of precipitation are discussed in detail.
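
    One common way to retrieve ε from Doppler velocity time series is to fit the inertial subrange of the velocity power spectrum; the sketch below synthesizes a test series with a known ε, computes a Welch spectrum, and inverts the Kolmogorov -5/3 relation under Taylor's hypothesis. The Kolmogorov constant, wind speed, and fit band are assumptions, and the paper's actual processing may differ.

```python
import numpy as np
from scipy.signal import welch

# Hedged sketch of a spectral (inertial-subrange) retrieval of the eddy dissipation
# rate from a Doppler velocity time series. All values are assumptions for illustration.
C1 = 0.55          # assumed 1-D Kolmogorov constant
U = 8.0            # advective wind speed, m/s, for Taylor's hypothesis
fs = 2.0           # sampling rate of Doppler velocities, Hz
eps_true = 1e-3    # dissipation rate used to synthesize the test series, m^2 s^-3

# Synthesize a velocity series whose one-sided PSD follows
# S(f) = C1 * eps^(2/3) * (2*pi/U)^(-2/3) * f^(-5/3) in the inertial subrange.
n = 2 ** 14
freqs = np.fft.rfftfreq(n, d=1 / fs)
spec = np.zeros_like(freqs)
spec[1:] = C1 * eps_true ** (2 / 3) * (2 * np.pi / U) ** (-2 / 3) * freqs[1:] ** (-5 / 3)
rng = np.random.default_rng(1)
phases = np.exp(2j * np.pi * rng.random(freqs.size))
amps = np.sqrt(spec * fs * n / 2)        # one-sided PSD -> Fourier amplitudes
w = np.fft.irfft(amps * phases, n=n)

# Retrieval: Welch PSD, then invert the inertial-subrange relation over a fit band.
f, Sxx = welch(w, fs=fs, nperseg=1024)
band = (f > 0.05) & (f < 0.5)
eps_est = np.mean((Sxx[band] * f[band] ** (5 / 3) * (2 * np.pi / U) ** (2 / 3) / C1) ** 1.5)
print(f"true eps = {eps_true:.2e}, retrieved eps = {eps_est:.2e} m^2 s^-3")
```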

  6. Generation and Validation of Spatial Distribution of Hourly Wind Speed Time-Series using Machine Learning

    NASA Astrophysics Data System (ADS)

    Veronesi, F.; Grassi, S.

    2016-09-01

    Wind resource assessment is a key aspect of wind farm planning since it allows the long-term electricity production to be estimated. Moreover, wind speed time-series at high resolution are helpful for estimating temporal changes in electricity generation and are indispensable for designing stand-alone systems, which are affected by the mismatch of supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time-series, using Switzerland as a case study. The research is based on a machine learning model and demonstrates that statistical wind resource assessment can successfully be used for estimating wind speed time-series. This method obtains reliable wind speed estimates and propagates all sources of uncertainty (from the measurements to the mapping process) efficiently, i.e. minimizing computational time and load. This allows not only accurate estimation but also the creation of precise confidence intervals that map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the hourly wind speed estimates. Moreover, for each mapped location this method delivers not only the mean wind speed but also its confidence interval, which are crucial data for planners.
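
    The paper's machine-learning model is not reproduced here; as an illustration of delivering both a point estimate and a confidence interval at unmeasured locations, the sketch below fits quantile gradient boosting (scikit-learn) to synthetic station features. Features, sample sizes, and quantiles are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Synthetic "stations": features could stand for elevation, roughness, exposure.
X = rng.random((500, 3))
y = 3.0 + 4.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.8, 500)  # mean wind speed, m/s

# One model per quantile: lower bound, median, upper bound.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=200).fit(X, y)
    for q in (0.05, 0.5, 0.95)
}

X_new = rng.random((3, 3))   # unmeasured locations
lo, med, hi = (models[q].predict(X_new) for q in (0.05, 0.5, 0.95))
for i in range(len(X_new)):
    print(f"site {i}: median {med[i]:.2f} m/s, 90% interval [{lo[i]:.2f}, {hi[i]:.2f}]")
```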

  7. Allometric scaling theory applied to FIA biomass estimation

    Treesearch

    David C. Chojnacky

    2002-01-01

    Tree biomass estimates in the Forest Inventory and Analysis (FIA) database are derived from numerous methodologies whose abundance and complexity raise questions about consistent results throughout the U.S. A new model based on allometric scaling theory ("WBE") offers simplified methodology and a theoretically sound basis for improving the reliability and...

  8. Attribution of Net Carbon Change by Disturbance Type across Forest Lands of the Continental United States

    NASA Astrophysics Data System (ADS)

    Hagen, S. C.; Harris, N.; Saatchi, S. S.; Domke, G. M.; Woodall, C. W.; Pearson, T.

    2016-12-01

    We generated spatially comprehensive maps of carbon stocks and net carbon changes from US forestlands between 2005 and 2010 and attributed the changes to natural and anthropogenic processes. The prototype system created to produce these maps is designed to assist with national GHG inventories and support decisions associated with land management. Here, we present the results and methodological framework of our analysis. In summary, combining estimates of net C losses and gains results in a net carbon change of 269 ± 49 Tg C yr-1 (sink) in the coterminous US forest land, with carbon loss from harvest acting as the predominant source process.

  9. Association between component costs, study methodologies, and foodborne illness-related factors with the cost of nontyphoidal Salmonella illness.

    PubMed

    McLinden, Taylor; Sargeant, Jan M; Thomas, M Kate; Papadopoulos, Andrew; Fazil, Aamir

    2014-09-01

    Nontyphoidal Salmonella spp. are one of the most common causes of bacterial foodborne illness. Variability in cost inventories and study methodologies limits the possibility of meaningfully interpreting and comparing cost-of-illness (COI) estimates, reducing their usefulness. However, little is known about the relative effect these factors have on a cost-of-illness estimate. This is important for comparing existing estimates and when designing new cost-of-illness studies. Cost-of-illness estimates, identified through a scoping review, were used to investigate the association between descriptive, component cost, methodological, and foodborne illness-related factors such as chronic sequelae and under-reporting with the cost of nontyphoidal Salmonella spp. illness. The standardized cost of nontyphoidal Salmonella spp. illness from 30 estimates reported in 29 studies ranged from $0.01568 to $41.22 United States dollars (USD)/person/year (2012). The mean cost of nontyphoidal Salmonella spp. illness was $10.37 USD/person/year (2012). The following factors were found to be significant in multiple linear regression (p≤0.05): the number of direct component cost categories included in an estimate (0-4, particularly long-term care costs) and chronic sequelae costs (inclusion/exclusion), which had positive associations with the cost of nontyphoidal Salmonella spp. illness. Factors related to study methodology were not significant. Our findings indicated that study methodology may not be as influential as other factors, such as the number of direct component cost categories included in an estimate and costs incurred due to chronic sequelae. Therefore, these may be the most important factors to consider when designing, interpreting, and comparing cost of foodborne illness studies.

  10. Health Insurance Dynamics: Methodological Considerations and a Comparison of Estimates from Two Surveys.

    PubMed

    Graves, John A; Mishra, Pranita

    2016-10-01

    The objectives were to highlight key methodological issues in studying insurance dynamics and to compare estimates across two commonly used surveys. The study population comprised nonelderly uninsured adults and children sampled between 2001 and 2011 in the Medical Expenditure Panel Survey (MEPS) and the Survey of Income and Program Participation (SIPP). We utilized nonparametric Kaplan-Meier methods to estimate quantiles (25th, 50th, and 75th percentiles) of the distribution of uninsured spells. We compared estimates obtained across surveys and across different methodological approaches to addressing issues such as attrition, seam bias, censoring and truncation, and survey weighting. All data were drawn from publicly available household surveys. Estimated uninsured spell durations in the MEPS were longer than those observed in the SIPP. There were few changes in spell durations between 2001 and 2011, with median durations of 14 months among adults and 5-7 months among children in the MEPS, and 8 months (adults) and 4 months (children) in the SIPP. The use of panel survey data to study insurance dynamics presents a unique set of methodological challenges. Researchers should consider key analytic and survey design trade-offs when choosing which survey best suits their research goals. © Health Research and Educational Trust.
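
    As a compact illustration of the survival-analysis machinery involved, the sketch below implements a Kaplan-Meier estimator by hand and reads off spell-duration quantiles from hypothetical right-censored spell data; the numbers are invented, not MEPS or SIPP estimates.

```python
import numpy as np

def km_quantile(durations, observed, q):
    """Kaplan-Meier estimate of the q-th quantile of uninsured spell length.

    durations: spell lengths in months; observed: 1 if the spell ended
    (coverage gained), 0 if right-censored (e.g., panel ended first).
    Returns the smallest time at which survival drops to <= 1 - q, or None.
    """
    durations = np.asarray(durations, float)
    observed = np.asarray(observed, int)
    surv = 1.0
    for t in np.sort(np.unique(durations[observed == 1])):
        at_risk = np.sum(durations >= t)
        events = np.sum((durations == t) & (observed == 1))
        surv *= 1.0 - events / at_risk
        if surv <= 1.0 - q:
            return t
    return None   # quantile not reached (too much censoring)

# Hypothetical spell data (months); 0 marks spells still uninsured at last interview.
spells = [2, 3, 3, 5, 7, 8, 12, 14, 14, 20, 24, 24]
ended  = [1, 1, 0, 1, 1, 1,  1,  0,  1,  1,  0,  0]
for q in (0.25, 0.50, 0.75):
    print(f"{int(q*100)}th percentile: {km_quantile(spells, ended, q)} months")
```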

  11. Uncertainty Quantification in Remaining Useful Life of Aerospace Components using State Space Models and Inverse FORM

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2013-01-01

    This paper investigates the use of the inverse first-order reliability method (inverse-FORM) to quantify the uncertainty in the remaining useful life (RUL) of aerospace components. RUL prediction is an integral part of system health prognosis and directly supports online health monitoring and decision-making. However, RUL prediction is affected by several sources of uncertainty, and it is therefore necessary to quantify the uncertainty in the prediction. While system parameter uncertainty and physical variability can easily be included in inverse-FORM, this paper extends the methodology to include (1) future loading uncertainty, (2) process noise, and (3) uncertainty in the state estimate. The inverse-FORM method is used here to (1) quickly obtain probability bounds on the RUL prediction and (2) calculate the entire probability distribution of the RUL prediction, and the results are verified against Monte Carlo sampling. The proposed methodology is illustrated using a numerical example.
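
    The inverse-FORM algorithm itself is not reproduced here; the sketch below shows the kind of Monte Carlo reference calculation such results are typically verified against, propagating an assumed degradation model with state-estimate, parameter, loading, and process-noise uncertainty to percentile bounds on RUL. The model and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy Monte Carlo sketch of an RUL distribution (not the paper's inverse-FORM
# algorithm, which targets the same bounds without full sampling). Crack-like
# damage grows each cycle until it reaches a failure threshold.
n_samples = 20_000
threshold = 1.0

x0 = rng.normal(0.30, 0.02, n_samples)                  # uncertainty in the state estimate
rate = rng.lognormal(np.log(5e-3), 0.25, n_samples)     # parameter / physical variability
load_factor = rng.normal(1.0, 0.10, n_samples)          # future loading uncertainty

rul = np.zeros(n_samples, dtype=int)
x = x0.copy()
for cycle in range(1, 2000):
    x += rate * load_factor + rng.normal(0, 1e-3, n_samples)   # process noise per cycle
    newly_failed = (x >= threshold) & (rul == 0)
    rul[newly_failed] = cycle
rul[rul == 0] = 2000   # survivors: censor at the simulation horizon

for p in (5, 50, 95):
    print(f"{p}th percentile RUL: {np.percentile(rul, p):.0f} cycles")
```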

  12. Production of Chitin from Penaeus vannamei By-Products to Pilot Plant Scale Using a Combination of Enzymatic and Chemical Processes and Subsequent Optimization of the Chemical Production of Chitosan by Response Surface Methodology.

    PubMed

    Vázquez, José A; Ramos, Patrícia; Mirón, Jesús; Valcarcel, Jesus; Sotelo, Carmen G; Pérez-Martín, Ricardo I

    2017-06-16

    The waste generated from shrimp processing contains valuable materials such as protein, carotenoids, and chitin. The present study describes a process at pilot plant scale to recover chitin from the cephalothorax of Penaeus vannamei using mild conditions. The application of a sequential enzymatic-acid-alkaline treatment yields 30% chitin of comparable purity to commercial sources. Effluents from the process are rich in protein and astaxanthin, and represent inputs for further by-product recovery. As a last step, chitin is deacetylated to produce chitosan; the optimal conditions are established by applying a response surface methodology (RSM). Under these conditions, deacetylation reaches 92% as determined by Proton Nuclear Magnetic Resonance (¹H-NMR), and the molecular weight (Mw) of chitosan is estimated at 82 kDa by gel permeation chromatography (GPC). Chitin and chitosan microstructures are characterized by Scanning Electron Microscopy (SEM).

  13. Model documentation, Coal Market Module of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This report documents the objectives and the conceptual and methodological approach used in the development of the National Energy Modeling System's (NEMS) Coal Market Module (CMM) used to develop the Annual Energy Outlook 1998 (AEO98). This report catalogues and describes the assumptions, methodology, estimation techniques, and source code of CMM's two submodules. These are the Coal Production Submodule (CPS) and the Coal Distribution Submodule (CDS). CMM provides annual forecasts of prices, production, and consumption of coal for NEMS. In general, the CDS integrates the supply inputs from the CPS to satisfy demands for coal from exogenous demand models. The international area of the CDS forecasts annual world coal trade flows from major supply to major demand regions and provides annual forecasts of US coal exports for input to NEMS. Specifically, the CDS receives minemouth prices produced by the CPS, demand and other exogenous inputs from other NEMS components, and provides delivered coal prices and quantities to the NEMS economic sectors and regions.

  14. Methodology for modeling the mechanical interaction between a reaction wheel and a flexible structure

    NASA Astrophysics Data System (ADS)

    Elias, Laila M.; Dekens, Frank G.; Basdogan, Ipek; Sievers, Lisa A.; Neville, Timothy

    2003-02-01

    This paper presents a modeling methodology used to predict the performance of a flexible structure, such as a space telescope, in the presence of an on-board vibrational disturbance source, such as a reaction wheel assembly (RWA). Both decoupled and coupled analysis methods are presented. The decoupled method relies on blocked RWA disturbances, measured with the RWA hardmounted to a rigid surface. The coupled method corrects the blocked RWA disturbance boundary conditions using 'force filters' which depend on estimates of the interface accelerances of the RWA and spacecraft. Both methods were validated on the Micro-Precision Interferometer testbed at the Jet Propulsion Laboratory. Experimental results are encouraging, indicating that both methods provide sufficient accuracy compared to measured values; however, the coupled method provides the best results when the gyroscopic nature of the spinning RWA is captured in the RWA accelerance model. Additionally, the RWA disturbance cross spectral density terms are found to be influential.

  15. Methodology for the ecotoxicological evaluation of areas polluted by phosphogypsum wastes.

    NASA Astrophysics Data System (ADS)

    Martínez-Sanchez, M. J.; Garcia-Lorenzo, M. L.; Perez-Sirvent, C.; Martinez-Lopez, S.; Hernandez-Cordoba, M.; Bech, J.

    2012-04-01

    In Spain, the production of phosphoric acid, and hence of phosphogypsum, is restricted to a fertilizer industrial site. The residues contain some radionuclides of the U-series and other contaminants. To estimate the risk posed by these materials, chemical methods need to be complemented with biological methods. The aim of this study was therefore to develop a battery of bioassays for the ecotoxicological screening of areas polluted by phosphogypsum wastes. Specifically, the toxicity of water samples, sediments and their pore-water extracts was evaluated using three types of assay: bacteria, plants and ostracods. The bioassays applied were the bioluminescence inhibition of Vibrio fischeri in surface water samples using the Microtox® bioassay; the root and shoot elongation inhibition and the mortality of Lepidium sativum, Sorghum saccharatum and Sinapis alba using the Phytotoxkit® bioassay; and the inhibition of Heterocypris incongruens using Ostracodtoxkit®. The proposed methodology allows the identification of contamination sources and of uncontaminated areas, corresponding to decreasing toxicity values.

  16. An evolutive real-time source inversion based on a linear inverse formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.

    2016-12-01

    Finite source inversion is a stepping stone to unveiling earthquake rupture. It is used in ground motion prediction, and its results shed light on the seismic cycle for better tectonic understanding. It is not yet used for quasi-real-time analysis. Significant progress has been made on earthquake imaging approaches, thanks to new data acquisition and methodological advances; however, most of these techniques are a posteriori procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source model while data are being recorded. To move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, in which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station. The unknowns are the spatio-temporal slip-rate distribution, chosen to keep the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip rate as new data are added, assuming rupture causality. The formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used to stabilize the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity and other quantities can be extracted later as attributes of the slip-rate inversion we perform. Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al., 2011). A real-case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
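
    As a toy illustration of the linear formulation (not the authors' adjoint-state implementation), the sketch below builds the convolutional forward operator for a few fault patches and then recovers non-negative slip-rate histories with damped non-negative least squares; the Green's functions, problem sizes, and regularization weight are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# The seismogram is a sum of Green's functions convolved with per-patch slip-rate
# histories, so stacking the convolutions as a matrix G gives a linear system
# d = G m, solved here for m >= 0 (slip-rate positivity).
n_t, n_patch, n_src = 200, 4, 60          # record length, fault patches, slip-rate samples
green = rng.normal(0, 1, (n_patch, 40)) * np.exp(-np.arange(40) / 8.0)   # toy Green's functions

def forward_matrix():
    G = np.zeros((n_t, n_patch * n_src))
    for p in range(n_patch):
        for k in range(n_src):
            col = np.zeros(n_t)
            seg = green[p][: min(40, n_t - k)]
            col[k:k + seg.size] = seg
            G[:, p * n_src + k] = col
    return G

G = forward_matrix()
m_true = np.zeros(n_patch * n_src)
m_true[5:15] = 1.0                       # patch 0 slips early
m_true[n_src + 20:n_src + 30] = 0.5      # patch 1 slips later
d = G @ m_true + rng.normal(0, 0.05, n_t)

# Tikhonov-damped non-negative least squares: min ||G m - d||^2 + lam ||m||^2, m >= 0.
lam = 0.1
G_aug = np.vstack([G, np.sqrt(lam) * np.eye(G.shape[1])])
d_aug = np.concatenate([d, np.zeros(G.shape[1])])
m_est, _ = nnls(G_aug, d_aug)
print("recovered total slip per patch:", np.round(m_est.reshape(n_patch, n_src).sum(axis=1), 2))
print("true total slip per patch:     ", np.round(m_true.reshape(n_patch, n_src).sum(axis=1), 2))
```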

  17. A High Resolution Technology-based Emissions Inventory for Nepal: Present and Future Scenario

    NASA Astrophysics Data System (ADS)

    Sadavarte, P.; Das, B.; Rupakheti, M.; Byanju, R.; Bhave, P.

    2016-12-01

    The lack of a comprehensive regional assessment of emission sources is a major hindrance to a complete understanding of air quality and to designing appropriate mitigation solutions in Nepal, a landlocked country in the foothills of the Himalaya. This study attempts, for the first time, to develop a fine-resolution (1 km × 1 km) present-day emission inventory for Nepal with a higher-tier approach, using our understanding of the technologies currently in use, the energy consumed in the various energy sectors, and the resultant emissions. We estimate present-day emissions of aerosols (BC, OC and PM2.5), trace gases (SO2, CO, NOx and VOC) and greenhouse gases (CO2, N2O and CH4) from non-open-burning sources (residential, industry, transport, commercial) and open-burning sources (agricultural and municipal solid waste burning) for the base year 2013. We used methodologies published in the literature, and both primary and secondary data, to estimate energy production and consumption in each sector and sub-sector and the associated emissions. Local practices and activity rates are explicitly accounted for in the energy consumption, and dispersed, often under-documented emission sources such as brick manufacturing, diesel generator sets, mining, stone crushing, solid waste burning and diesel use on farms are considered. Apart from pyrogenic sources of CH4 emissions, methanogenic and enteric-fermentation sources are also accounted for. Region-specific and newly measured country-specific emission factors are used for the emission estimates. Activity-based proxies are used for the spatial and temporal distribution of emissions. Preliminary results suggest that 80% of national energy consumption is in the residential sector, followed by industry (8%) and transport (7%). More than 90% of residential energy is supplied by biofuels, which need immediate attention to reduce emissions. The emissions will be compared with other contemporary studies and with regional and global datasets, and used in model simulations to understand the impacts of air pollution on health and climate in the Kathmandu Valley and Nepal. Future emissions are being developed based on different possible growth scenarios and policy interventions to mitigate emissions.
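
    Gridding a sectoral total with an activity proxy is straightforward; the sketch below distributes a hypothetical residential PM2.5 total over a synthetic 1 km population-density grid in proportion to the proxy. All numbers are placeholders, not the inventory's values.

```python
import numpy as np

# Proxy-based spatial allocation: a sectoral emission total is spread over a
# 1 km x 1 km grid in proportion to an activity proxy (here a synthetic
# population-density surface). Numbers are illustrative only.
rng = np.random.default_rng(0)
grid = rng.gamma(shape=0.5, scale=200.0, size=(50, 50))   # proxy: people per cell

residential_pm25_t_per_yr = 12_000.0        # hypothetical national sector total
weights = grid / grid.sum()
gridded = residential_pm25_t_per_yr * weights   # t/yr per 1 km^2 cell

assert np.isclose(gridded.sum(), residential_pm25_t_per_yr)   # mass is conserved
print(f"max cell: {gridded.max():.1f} t/yr, median cell: {np.median(gridded):.2f} t/yr")
```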

  18. Analytic estimation of recycled products added value as a means for effective environmental management

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.

    2012-12-01

    In this work, we present an analytic estimation of the added value of recycled products in order to provide a means for determining the degree of recycling that maximizes profit, also taking into account the social interest by including the subsidy of the corresponding investment. A methodology has been developed based on the Life Cycle Product (LCP), with emphasis on the added values H, R as fractions of production and recycling cost, respectively (H, R > 1, since profit is included), which decrease at the corresponding rates h, r over successive recycling cycles due to deterioration of quality. At the macro level, the claim that 'an increase in exergy price, as cheap available energy sources become more scarce, leads to a smaller recovered quantity of any recyclable material' is proved by means of the tradeoff between the partial benefits due to material saving and to resource degradation/consumption (assessed in monetary terms).

  19. BIOREL: the benchmark resource to estimate the relevance of the gene networks.

    PubMed

    Antonov, Alexey V; Mewes, Hans W

    2006-02-06

    The progress of high-throughput methodologies in functional genomics has led to the development of statistical procedures to infer gene networks from various types of high-throughput data. However, due to the lack of common standards, the biological significance of the results of the different studies is hard to compare. To overcome this problem we propose a benchmark procedure and have developed a web resource (BIOREL), which is useful for estimating the biological relevance of any genetic network by integrating different sources of biological information. The associations of each gene from the network are classified as biologically relevant or not. The proportion of genes in the network classified as "relevant" is used as the overall network relevance score. Employing synthetic data, we demonstrated that such a score ranks networks fairly with respect to their relevance. Using BIOREL as the benchmark resource, we compared the quality of experimental and theoretically predicted protein interaction data.

  20. Connecticut Highlands Technical Report - Documentation of the Regional Rainfall-Runoff Model

    USGS Publications Warehouse

    Ahearn, Elizabeth A.; Bjerklie, David M.

    2010-01-01

    This report provides the supporting data and describes the data sources, methodologies, and assumptions used in the assessment of existing and potential water resources of the Highlands of Connecticut and Pennsylvania (referred to herein as the “Highlands”). Included in this report are Highlands groundwater and surface-water use data and the methods of data compilation. Annual mean streamflow and annual mean base-flow estimates from selected U.S. Geological Survey (USGS) gaging stations were computed using data for the period of record through water year 2005. The methods of watershed modeling are discussed and regional and sub-regional water budgets are provided. Information on Highlands surface-water-quality trends is presented. USGS web sites are provided as sources for additional information on groundwater levels, streamflow records, and ground- and surface-water-quality data. Interpretation of these data and the findings are summarized in the Highlands study report.
