Sample records for improving current estimates

  1. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.

    PubMed

    Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L

    2013-08-13

    United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data.

  2. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    PubMed Central

    Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.

    2013-01-01

    Simple Summary: The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract: United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
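
    As a concrete illustration of the minimum detectable difference idea raised in the two records above, the sketch below computes the smallest true change in population size that two surveys could reliably distinguish, given the standard errors of their estimates. This is a generic power calculation with hypothetical numbers, not a procedure taken from the paper.

    ```python
    from math import sqrt
    from scipy.stats import norm

    def minimum_detectable_difference(se1, se2, alpha=0.05, power=0.80):
        """Smallest true change in population size that two independent
        surveys, with standard errors se1 and se2, can detect at the given
        one-sided significance level and statistical power."""
        z_alpha = norm.ppf(1.0 - alpha)
        z_beta = norm.ppf(power)
        return (z_alpha + z_beta) * sqrt(se1 ** 2 + se2 ** 2)

    # Hypothetical surveys: population estimates with SEs of 120 and 150.
    # A claimed change would need to exceed this to support delisting.
    print(round(minimum_detectable_difference(120.0, 150.0), 1))
    ```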

  3. Current Emergency Locator Transmitter (ELT) deficiencies and potential improvements utilizing TSO-C91a ELTs

    NASA Technical Reports Server (NTRS)

    Trudell, Bernard J.; Dreibelbis, Ryland R.

    1990-01-01

    An analysis was conducted of current ELT problems and potential improvements that could be made by employing TSO-C91a ELTs to replace the current TSO-C91 ELTs. The scope of the study included the following: (1) validate the problems; (2) determine specific failure causes; (3) determine false alarm causes; (4) estimate improvements from TSO-C91a; (5) estimate benefits from replacement of the current ELTs; and (6) determine the need and benefits for improved ELT inspection and maintenance. A detailed comparison between the two requirements documents (TSO-C91 and -91a) was made to assess improved performance of the ELT in each category of failure cause and each cause of false alarms. The comparison and analysis projected a success-of-operation rate approximately 3 times the current rate and a reduction in false alarms to one-quarter of those generated by TSO-C91 ELTs. These improvements led to a projection of approximately 25 additional lives saved each year with TSO-C91a ELTs and an improved inspection and maintenance program.

  4. Improved battery parameter estimation method considering operating scenarios for HEV/EV applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jufeng; Xia, Bing; Shang, Yunlong

    This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.

  5. Improved battery parameter estimation method considering operating scenarios for HEV/EV applications

    DOE PAGES

    Yang, Jufeng; Xia, Bing; Shang, Yunlong; ...

    2016-12-22

    This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
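
    For context on what estimating model parameters from a pulse-rest test involves, the sketch below fits a first-order Thevenin (single-RC) battery model to the voltage response of one constant-current pulse, using synthetic data. It is a minimal, generic illustration; the paper's two-scenario procedure and its improved RC initial-voltage expressions are not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    I_PULSE = 2.0  # constant discharge current during the pulse, A (hypothetical)

    def pulse_response(t, ocv, r0, rp, tau):
        """Terminal voltage under a constant-current pulse for a first-order
        Thevenin model: OCV minus the ohmic drop and the rising RC drop."""
        return ocv - I_PULSE * r0 - I_PULSE * rp * (1.0 - np.exp(-t / tau))

    # Synthetic stand-in for one pulse segment of a pulse-rest test.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 120.0, 200)
    v = pulse_response(t, 3.70, 0.050, 0.030, 30.0) + rng.normal(0.0, 1e-3, t.size)

    popt, _ = curve_fit(pulse_response, t, v, p0=(3.7, 0.01, 0.01, 10.0),
                        bounds=([3.0, 0.0, 0.0, 1.0], [4.5, 0.2, 0.2, 300.0]))
    ocv, r0, rp, tau = popt
    print(f"R0 = {r0:.3f} ohm, Rp = {rp:.3f} ohm, tau = {tau:.1f} s")
    ```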

  6. The operational processing of wind estimates from cloud motions: Past, present and future

    NASA Technical Reports Server (NTRS)

    Novak, C.; Young, M.

    1977-01-01

    Current NESS winds operations provide approximately 1800 high quality wind estimates per day to about twenty domestic and foreign users. This marked improvement in NESS winds operations was the result of computer techniques development which began in 1969 to streamline and improve operational procedures. In addition, the launch of the SMS-1 satellite in 1974, the first in the second generation of geostationary spacecraft, provided an improved source of visible and infrared scanner data for the extraction of wind estimates. Currently, operational winds processing at NESS is accomplished by the automated and manual analyses of infrared data from two geostationary spacecraft. This system uses data from SMS-2 and GOES-1 to produce wind estimates valid for 00Z, 12Z and 18Z synoptic times.

  7. CH-47F Improved Cargo Helicopter (CH-47F)

    DTIC Science & Technology

    2015-12-01

    Confidence Level of cost estimate for current APB: 50%. The Confidence Level of the CH-47F APB cost estimate, which was approved on April... [The remainder of this excerpt consists of fragments of the SAR cost-summary tables: changes from the Initial Development Estimate to the current Production Estimate for PAUC and APUC (TY $M), broken out by Economic, Quantity, Schedule, Engineering, Estimating, Other, and Support categories.]

  8. Potential Improvements to Remote Primary Productivity Estimation in the Southern California Current System

    NASA Astrophysics Data System (ADS)

    Jacox, M.; Edwards, C. A.; Kahru, M.; Rudnick, D. L.; Kudela, R. M.

    2012-12-01

    A 26-year record of depth-integrated primary productivity (PP) in the Southern California Current System (SCCS) is analyzed with the goal of improving satellite net primary productivity estimates. The ratio of integrated primary productivity to surface chlorophyll correlates strongly with surface chlorophyll concentration (chl0). However, chl0 does not correlate with chlorophyll-specific productivity, and appears to be a proxy for vertical phytoplankton distribution rather than phytoplankton physiology. Modest improvements in PP model performance are achieved by tuning existing algorithms for the SCCS, particularly by empirical parameterization of photosynthetic efficiency in the Vertically Generalized Production Model. Much larger improvements are enabled by improving the accuracy of subsurface chlorophyll and light profiles. In a simple vertically resolved production model, substitution of in situ surface data for remote sensing estimates offers only marginal improvements in model r² and total log10 root mean squared difference, while inclusion of in situ chlorophyll and light profiles improves these metrics significantly. Autonomous underwater gliders, capable of measuring subsurface fluorescence on long-term, long-range deployments, significantly improve PP model fidelity in the SCCS. We suggest their use (and that of other autonomous profilers such as Argo floats) in conjunction with satellites as a way forward for improved PP estimation in coastal upwelling systems.

  9. Impact of revised and potential future albedo estimates on CCSM3 simulations of growing-season surface temperature fields for North America

    Treesearch

    Warren E. Heilman; David Y. Hollinger; Xiuping Li; Xindi Bain; Shiyuan. Zhong

    2010-01-01

    Recently published albedo research has resulted in improved growing-season albedo estimates for forest and grassland vegetation. The impact of these improved estimates on the ability of climate models to simulate growing-season surface temperature patterns is unknown. We have developed a set of current-climate surface temperature scenarios for North America using the...

  10. Variability of the Bering Sea Circulation in the Period 1992-2010

    DTIC Science & Technology

    2012-06-09

    massive sources of data (satellite altimetry, Argo drifters) may improve the accuracy of these estimates in the near future. Large-scale... Combining these data with in situ observations of temperature, salinity and subsurface currents allowed obtaining increasingly accurate estimates ...al. (2006) estimated the Kamchatka Current transport at 24 Sv (1 Sv = 10⁶ m³/s), a value significantly higher than previous estimates of

  11. Continuous Glucose Monitoring in Subjects with Type 1 Diabetes: Improvement in Accuracy by Correcting for Background Current

    PubMed Central

    Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth

    2010-01-01

    Background: A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods: This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results: Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions: Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
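
    The correction itself amounts to inverting a linear sensor model in which measured current is background current plus a term proportional to glucose. The sketch below shows that inversion with hypothetical numbers; it is not the authors' calibration algorithm, and 4 nA is simply the correction value the abstract reports as effective.

    ```python
    def sensitivity_from_calibration(i_measured, glucose_ref, i_background=4.0):
        """Sensor sensitivity (nA per mg/dL) from one calibration point,
        assuming measured current = background + sensitivity * glucose."""
        return (i_measured - i_background) / glucose_ref

    def glucose_estimate(i_measured, sensitivity, i_background=4.0):
        """Invert the sensor model; setting i_background=0 reproduces the
        uncorrected scheme the study compares against."""
        return (i_measured - i_background) / sensitivity

    # Hypothetical calibration: 34 nA measured at a 100 mg/dL reference.
    s = sensitivity_from_calibration(34.0, 100.0)
    print(round(glucose_estimate(26.5, s), 1))  # later reading of 26.5 nA -> 75.0
    ```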

  12. CONTRIBUTIONS OF CURRENT YEAR PHOTOSYNTHATE TO FINE ROOTS ESTIMATED USING A 13C-DEPLETED CO2 SOURCE

    EPA Science Inventory

    The quantification of root turnover is necessary for a complete understanding of plant carbon (C) budgets, especially in terms of impacts of global climate change. To improve estimates of root turnover, we present a method to distinguish current- from prior-year allocation of ca...

  13. Angular velocity estimation based on star vector with improved current statistical model Kalman filter.

    PubMed

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, He

    2016-11-20

    Angular velocity information is a requisite for a spacecraft guidance, navigation, and control system. In this paper, an approach for angular velocity estimation based merely on star vector measurement with an improved current statistical model Kalman filter is proposed. High-precision angular velocity estimation can be achieved under dynamic conditions. The amount of calculation is also reduced compared to a Kalman filter. Different trajectories are simulated to test this approach, and experiments with real starry sky observation are implemented for further confirmation. The estimation accuracy is proved to be better than 10⁻⁴ rad/s under various conditions. Both the simulation and the experiment demonstrate that the described approach is effective and shows an excellent performance under both static and dynamic conditions.
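
    As a minimal sketch of the underlying measurement relation (not the paper's improved current statistical model Kalman filter): star directions fixed in inertial space satisfy dv/dt = -ω × v in the body frame, so several star vectors observed at two closely spaced times yield a linear least-squares problem for the angular velocity ω.

    ```python
    import numpy as np

    def skew(v):
        """Cross-product matrix: skew(a) @ b equals np.cross(a, b)."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    def body_rate(stars_t0, stars_t1, dt):
        """Least-squares body angular rate from star unit vectors observed
        at two closely spaced times: inertially fixed directions satisfy
        dv/dt = -omega x v = skew(v) @ omega in the body frame."""
        A = np.vstack([skew(v) for v in stars_t0])
        b = np.concatenate([(v1 - v0) / dt for v0, v1 in zip(stars_t0, stars_t1)])
        omega, *_ = np.linalg.lstsq(A, b, rcond=None)
        return omega

    # Synthetic check: three stars rotated by a small known rate.
    rate = np.array([0.0, 0.0, 1e-3])              # rad/s, about body z
    stars = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
             np.array([0.6, 0.0, 0.8])]
    dt = 0.1
    moved = [v + np.cross(-rate, v) * dt for v in stars]
    print(body_rate(stars, moved, dt))             # ~ [0, 0, 1e-3]
    ```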

  14. Recovery of aboveground biomass in Ohio, 1978

    Treesearch

    Eric H. Wharton

    1982-01-01

    Timber-use studies in Ohio show that multiproduct harvesting could be improved. The recovery rate from these operations, expressed as a ratio of the merchantable stem biomass estimate, is 103 percent. Although current methods of multiproduct harvesting have improved recovery of the merchantable stem, an estimated 1,539 thousand fresh tons of total residual biomass were...

  15. Estimation of hyper-parameters for a hierarchical model of combined cortical and extra-brain current sources in the MEG inverse problem.

    PubMed

    Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo

    2014-11-01

    One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness.

  16. Estimating Evapotranspiration with Land Data Assimilation Systems

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, C. D.; Kumar, S. V.; Mocko, D. M.; Tian, Y.

    2011-01-01

    Advancements in both land surface models (LSM) and land surface data assimilation, especially over the last decade, have substantially advanced the ability of land data assimilation systems (LDAS) to estimate evapotranspiration (ET). This article provides a historical perspective on international LSM intercomparison efforts and the development of LDAS systems, both of which have improved LSM ET skill. In addition, an assessment of ET estimates for current LDAS systems is provided along with current research that demonstrates improvement in LSM ET estimates due to assimilating satellite-based soil moisture products. Using the Ensemble Kalman Filter in the Land Information System, we assimilate both NASA and Land Parameter Retrieval Model (LPRM) soil moisture products into the Noah LSM Version 3.2 with the North American LDAS phase 2 (NLDAS-2) forcing to mimic the NLDAS-2 configuration. Through comparisons with two global reference ET products, one based on interpolated flux tower data and one from a new satellite ET algorithm, over the NLDAS-2 domain, we demonstrate improvement in ET estimates only when assimilating the LPRM soil moisture product.
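
    The assimilation step referred to above can be illustrated with a scalar, perturbed-observations Ensemble Kalman Filter update for a directly observed soil-moisture state. The sketch below is a textbook EnKF step with synthetic numbers, not the Land Information System implementation.

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, obs_var, rng):
        """One perturbed-observations EnKF analysis step for a scalar,
        directly observed state (H = 1), e.g. surface soil moisture."""
        p_f = np.var(ensemble, ddof=1)        # forecast error variance
        gain = p_f / (p_f + obs_var)          # Kalman gain
        perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), ensemble.size)
        return ensemble + gain * (perturbed - ensemble)

    rng = np.random.default_rng(0)
    forecast = rng.normal(0.25, 0.04, 32)     # modeled soil moisture (m3/m3)
    analysis = enkf_update(forecast, obs=0.31, obs_var=0.03 ** 2, rng=rng)
    print(forecast.mean(), analysis.mean())   # the mean is pulled toward the obs
    ```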

  17. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimations (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.

  18. Current and anticipated use of thermal-hydraulic codes for BWR transient and accident analyses in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arai, Kenji; Ebata, Shigeo

    1997-07-01

    This paper summarizes the current and anticipated use of the thermal-hydraulic and neutronic codes for the BWR transient and accident analyses in Japan. The codes may be categorized into the licensing codes and the best estimate codes for the BWR transient and accident analyses. Most of the licensing codes were originally developed by General Electric. Some codes have been updated based on the technical knowledge obtained in thermal-hydraulic studies in Japan and according to BWR design changes. The best estimate codes have been used to support the licensing calculations and to obtain a phenomenological understanding of the thermal-hydraulic phenomena during a BWR transient or accident. The best estimate codes can also be applied to a design study for a next-generation BWR to which the current licensing model may not be directly applied. In order to rationalize the margin included in the current BWR design and develop a next-generation reactor with an appropriate design margin, it will be required to improve the accuracy of the thermal-hydraulic and neutronic models. In addition, regarding the current best estimate codes, improvements in the user interface and the numerics will be needed.

  19. Evaluation of small area crop estimation techniques using LANDSAT- and ground-derived data. [South Dakota

    NASA Technical Reports Server (NTRS)

    Amis, M. L.; Martin, M. V.; Mcguire, W. G.; Shen, S. S. (Principal Investigator)

    1982-01-01

    This report describes studies completed in fiscal year 1981 in support of the clustering/classification and preprocessing activities of the Domestic Crops and Land Cover project. The theme throughout the study was the improvement of subanalysis district (usually county level) crop hectarage estimates, as reflected in the following three objectives: (1) to evaluate the current U.S. Department of Agriculture Statistical Reporting Service regression approach to crop area estimation as applied to the problem of obtaining subanalysis district estimates; (2) to develop and test alternative approaches to subanalysis district estimation; and (3) to develop and test preprocessing techniques for use in improving subanalysis district estimates.

  20. The Impact of AMSR-E Soil Moisture Assimilation on Evapotranspiration Estimation

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, Christa D.; Kumar, Sujay; Mocko, David; Tian, Yudong

    2012-01-01

    An assessment of ET estimates for current LDAS systems is provided along with current research that demonstrates improvement in LSM ET estimates due to assimilating satellite-based soil moisture products. Using the Ensemble Kalman Filter in the Land Information System, we assimilate both NASA and Land Parameter Retrieval Model (LPRM) soil moisture products into the Noah LSM Version 3.2 with the North American LDAS phase 2 (NLDAS-2) forcing to mimic the NLDAS-2 configuration. Through comparisons with two global reference ET products, one based on interpolated flux tower data and one from a new satellite ET algorithm, over the NLDAS-2 domain, we demonstrate improvement in ET estimates only when assimilating the LPRM soil moisture product.

  21. Recent Improvements in Retrieving Near-Surface Air Temperature and Humidity Using Microwave Remote Sensing

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent

    2010-01-01

    Detailed studies of the energy and water cycles require accurate estimation of the turbulent fluxes of moisture and heat across the atmosphere-ocean interface at regional to basin scale. Providing estimates of these latent and sensible heat fluxes over the global ocean necessitates the use of satellite or reanalysis-based estimates of near surface variables. Recent studies have shown that errors in the surface (10 m) estimates of humidity and temperature are currently the largest sources of uncertainty in the production of turbulent fluxes from satellite observations. Therefore, emphasis has been placed on reducing the systematic errors in the retrieval of these parameters from microwave radiometers. This study discusses recent improvements in the retrieval of air temperature and humidity through improvements in the choice of algorithms (linear vs. nonlinear) and the choice of microwave sensors. Particular focus is placed on improvements using a neural network approach with a single sensor (Special Sensor Microwave/Imager) and the use of combined sensors from the NASA AQUA satellite platform. The latter algorithm utilizes the unique sampling available on AQUA from the Advanced Microwave Scanning Radiometer (AMSR-E) and the Advanced Microwave Sounding Unit (AMSU-A). Current estimates of uncertainty in the near-surface humidity and temperature from single and multi-sensor approaches are discussed and used to estimate errors in the turbulent fluxes.

  22. An exploration of multilevel modeling for estimating access to drinking-water and sanitation.

    PubMed

    Wolf, Jennyfer; Bonjour, Sophie; Prüss-Ustün, Annette

    2013-03-01

    Monitoring progress towards the targets for access to safe drinking-water and sanitation under the Millennium Development Goals (MDG) requires reliable estimates and indicators. We analyzed trends and reviewed current indicators used for those targets. We developed continuous time series for 1990 to 2015 for access to improved drinking-water sources and improved sanitation facilities by country using multilevel modeling (MLM). We show that MLM is a reliable and transparent tool with many advantages over alternative approaches to estimate access to facilities. Using current indicators, the MDG target for water would be met, but the target for sanitation missed considerably. The number of people without access to such services is still increasing in certain regions. Striking differences persist between urban and rural areas. Consideration of water quality and different classification of shared sanitation facilities would, however, alter estimates considerably. To achieve improved monitoring we propose: (1) considering the use of MLM as an alternative for estimating access to safe drinking-water and sanitation; (2) completing regular assessments of water quality and supporting the development of national regulatory frameworks as part of capacity development; (3) evaluating health impacts of shared sanitation; (4) using a more equitable presentation of countries' performances in providing improved services.
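
    As a sketch of the kind of multilevel model described, the code below fits a mixed-effects regression of access on time, with a random intercept and random time slope per country, using statsmodels on synthetic data. The data layout and coefficients are hypothetical, not the authors' specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for a country-year table: 'access' is the fraction
    # of the population using an improved drinking-water source.
    rng = np.random.default_rng(0)
    rows = []
    for c in range(30):                          # 30 hypothetical countries
        level, slope = rng.normal(0.6, 0.15), rng.normal(0.008, 0.004)
        for year in range(1990, 2016, 5):
            t = year - 1990
            rows.append({"country": c, "t": t,
                         "access": float(np.clip(level + slope * t
                                                 + rng.normal(0.0, 0.02), 0, 1))})
    df = pd.DataFrame(rows)

    # Multilevel model: a shared time trend plus a random intercept and a
    # random time slope for each country.
    fit = smf.mixedlm("access ~ t", df, groups=df["country"], re_formula="~t").fit()
    print(fit.params)
    ```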

  23. Valuing Non-CO2 GHG Emission Changes in Benefit-Cost ...

    EPA Pesticide Factsheets

    The climate impacts of greenhouse gas (GHG) emissions impose social costs on society. To date, EPA has not had an approach to estimate the economic benefits of reducing emissions of non-CO2 GHGs (or the costs of increasing them) that is consistent with the methodology underlying the U.S. Government’s current estimates of the social cost of carbon (SCC). A recently published paper presents estimates of the social cost of methane that are consistent with the SCC estimates. The Agency is seeking review of the potential application of these new benefit estimates to benefit cost analysis in relation to current practice in this area. The goal of this project is to improve upon the current treatment of non-CO2 GHG emission impacts in benefit-cost analysis.

  24. ALTERNATIVE APPROACH TO ESTIMATING CANCER ...

    EPA Pesticide Factsheets

    The alternative approach for estimating cancer potency from inhalation exposure to asbestos seeks to improve the methods developed by USEPA (1986). This effort seeks to modify the current approach for estimating cancer potency for lung cancer and mesothelioma to account for the current scientific consensus that cancer risk from asbestos depends both on mineral type and on particle size distribution. In brief, epidemiological exposure-response data for lung cancer and mesothelioma in asbestos workers are combined with estimates of the mineral type(s) and particle size distribution at each exposure location in order to estimate potency factors that are specific to a selected set of mineral type and size

  25. Using dark current data to estimate AVIRIS noise covariance and improve spectral analyses

    NASA Technical Reports Server (NTRS)

    Boardman, Joseph W.

    1995-01-01

    Starting in 1994, all AVIRIS data distributions include a new product useful for quantification and modeling of the noise in the reported radiance data. The 'postcal' file contains approximately 100 lines of dark current data collected at the end of each data acquisition run. In essence this is a regular spectral-image cube, with 614 samples, 100 lines and 224 channels, collected with a closed shutter. Since there is no incident radiance signal, the recorded DN measure only the DC signal level and the noise in the system. Similar dark current measurements, made at the end of each line, are used, with a 100-line moving average, to remove the DC signal offset. Therefore, the pixel-by-pixel fluctuations about the mean of this dark current image provide an excellent model for the additive noise that is present in AVIRIS reported radiance data. The 61,400 dark current spectra can be used to calculate the noise levels in each channel and the noise covariance matrix. Both of these noise parameters should be used to improve spectral processing techniques. Some processing techniques, such as spectral curve fitting, will benefit from a robust estimate of the channel-dependent noise levels. Other techniques, such as automated unmixing and classification, will be improved by the stable and scene-independent noise covariance estimate. Future imaging spectrometry systems should have a similar ability to record dark current data, permitting this noise characterization and modeling.
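
    Computing the two noise parameters described above from a dark-current cube reduces to simple array statistics, as the sketch below shows. A synthetic cube of the stated dimensions stands in for the real data, since the binary layout of the distributed 'postcal' file is not assumed here.

    ```python
    import numpy as np

    # Synthetic stand-in for the dark-current cube described above
    # (lines x samples x channels); a real analysis would read the
    # distributed 'postcal' file instead.
    rng = np.random.default_rng(0)
    dark = 120.0 + rng.normal(0.0, 2.5, size=(100, 614, 224))

    spectra = dark.reshape(-1, 224)              # the 61,400 dark spectra
    spectra = spectra - spectra.mean(axis=0)     # remove the DC signal offset

    noise_std = spectra.std(axis=0, ddof=1)      # per-channel noise level
    noise_cov = np.cov(spectra, rowvar=False)    # 224 x 224 noise covariance
    print(noise_std.mean(), noise_cov.shape)
    ```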

  26. Update on developments at SNIF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacks, J., E-mail: jamie.zacks@ccfe.ac.uk; Turner, I.; Day, I.

    The Small Negative Ion Facility (SNIF) at CCFE has been undergoing continuous development and enhancement to both improve operational reliability and increase diagnostic capability. SNIF uses a CW 13.56 MHz, 5 kW RF driven volume source with a 30 kV triode accelerator. Improvement and characterisation work includes: installation of a new "L" type RF matching unit, used to calculate the load on the RF generator; use of the electron suppressing biased insert as a Langmuir probe under different beam extraction conditions; measurement of the hydrogen Fulcher molecular spectrum, used to calculate gas temperature in the source; beam optimisation through parameter scans, using a copper target plate and visible cameras, with results compared with AXCEL-INP to provide a beam current estimate; and modelling of the beam power density profile on the target plate using ANSYS to estimate beam power and provide another estimate of beam current. This work is described, and has allowed an estimation of the extracted beam current of approximately 6 mA (4 mA/cm²) at 3.5 kW RF power and a source pressure of 0.6 Pa.

  27. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    PubMed Central

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error, resulting in more accurate Laplacian estimates compared to the respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration a more than six-fold decrease is expected. PMID:27294933

  28. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-06-10

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error, resulting in more accurate Laplacian estimates compared to the respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration a more than six-fold decrease is expected.
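
    The accuracy gain from adding a ring can be seen in a short Taylor-series argument, sketched below for the two-ring (tripolar, constant inter-ring distance) case under the usual smoothness assumptions. This is background to the approach, not the paper's modified (4n + 1)-point derivation or its variable inter-ring-distance weights.

    ```latex
    % Average of a smooth 2-D potential v over a ring of radius r, expanded
    % about the central value v_0 (\Delta is the Laplacian):
    \bar{v}_r = v_0 + \frac{r^2}{4}\,\Delta v_0 + \frac{r^4}{64}\,\Delta^2 v_0 + O(r^6)

    % A single ring gives the basic estimate with an O(r^2) relative error:
    \Delta v_0 \approx \frac{4}{r^2}\bigl(\bar{v}_r - v_0\bigr)

    % Adding a second ring at radius 2r cancels the r^4 truncation term:
    16\bigl(\bar{v}_r - v_0\bigr) - \bigl(\bar{v}_{2r} - v_0\bigr) = 3r^2\,\Delta v_0 + O(r^6)
    \quad\Longrightarrow\quad
    \Delta v_0 \approx \frac{16\bigl(\bar{v}_r - v_0\bigr) - \bigl(\bar{v}_{2r} - v_0\bigr)}{3r^2}
    ```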

  29. Development of Neuromorphic Sift Operator with Application to High Speed Image Matching

    NASA Astrophysics Data System (ADS)

    Shankayi, M.; Saadatseresht, M.; Bitetto, M. A. V.

    2015-12-01

    There has always been a speed/accuracy challenge in the photogrammetric mapping process, including feature detection and matching. Most previous research has improved an algorithm's speed through simplifications or software modifications that affect the accuracy of the image matching process. This research instead tries to improve speed without changing the accuracy of the same algorithm, using neuromorphic techniques. In this research we have developed a general design of a neuromorphic ASIC to handle algorithms such as SIFT. We have also investigated neural assignment in each step of the SIFT algorithm. With a rough estimate based on the delays of the elements used, including the MAC and comparator, we have estimated the resulting chip's performance for 3 scenarios: Full HD movie (videogrammetry), 24 MP (UAV photogrammetry), and 88 MP image sequences. Our estimates led to approximately 3000 fps for Full HD movie, 250 fps for the 24 MP image sequence and 68 fps for the 88 MP Ultracam image sequence, which would be a huge improvement for current photogrammetric processing systems. We also estimated a power consumption of less than 10 W, far below that of current workflows.

  30. Improvement in Visual Target Tracking for a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Madison, Richard

    2006-01-01

    In an improvement of the visual-target-tracking software used aboard a mobile robot (rover) of the type used to explore the Martian surface, an affine-matching algorithm has been replaced by a combination of a normalized- cross-correlation (NCC) algorithm and a template-image-magnification algorithm. Although neither NCC nor template-image magnification is new, the use of both of them to increase the degree of reliability with which features can be matched is new. In operation, a template image of a target is obtained from a previous rover position, then the magnification of the template image is based on the estimated change in the target distance from the previous rover position to the current rover position (see figure). For this purpose, the target distance at the previous rover position is determined by stereoscopy, while the target distance at the current rover position is calculated from an estimate of the current pose of the rover. The template image is then magnified by an amount corresponding to the estimated target distance to obtain a best template image to match with the image acquired at the current rover position.

  31. California Drought Recovery Assessment Using GRACE Satellite Gravimetry Information

    NASA Astrophysics Data System (ADS)

    Love, C. A.; Aghakouchak, A.; Madadgar, S.; Tourian, M. J.

    2015-12-01

    California has been experiencing its most extreme drought in recent history due to a combination of record high temperatures and exceptionally low precipitation. An estimate for when the drought can be expected to end is needed for risk mitigation and water management. A crucial component of drought recovery assessments is the estimation of terrestrial water storage (TWS) deficit. Previous studies on drought recovery have been limited to surface water hydrology (precipitation and/or runoff) for estimating changes in TWS, neglecting the contribution of groundwater deficits to the recovery time of the system. Groundwater requires more time to recover than surface water storage; therefore, the inclusion of groundwater storage in drought recovery assessments is essential for understanding the long-term vulnerability of a region. Here we assess the probability, for varying timescales, of California's current TWS deficit returning to its long-term historical mean. Our method consists of deriving the region's fluctuations in TWS from changes in the gravity field observed by NASA's Gravity Recovery and Climate Experiment (GRACE) satellites. We estimate the probability that meteorological inputs, precipitation minus evaporation and runoff, over different timespans will balance the current GRACE-derived TWS deficit (e.g. in 3, 6, 12 months). This method improves upon previous techniques as the GRACE-derived water deficit comprises all hydrologic sources, including surface water, groundwater, and snow cover. With this empirical probability assessment we expect to improve current estimates of California's drought recovery time, thereby improving risk mitigation.
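
    One simple way to realize such a probability assessment is to resample the historical record of monthly net water input and ask how often its cumulative sum over a given horizon covers the storage deficit. The sketch below does exactly that with hypothetical numbers; it is not the authors' method.

    ```python
    import numpy as np

    def recovery_probability(deficit, monthly_flux, horizon, n_draws=10000, seed=1):
        """Probability that cumulative net input (P - E - R, same units as
        the GRACE-derived storage deficit) over `horizon` months meets the
        deficit, estimated by resampling the historical monthly record."""
        rng = np.random.default_rng(seed)
        draws = rng.choice(monthly_flux, size=(n_draws, horizon), replace=True)
        return float((draws.sum(axis=1) >= deficit).mean())

    # Hypothetical 20-year record of monthly net-flux anomalies (km^3/month)
    # and a hypothetical storage deficit of 42 km^3.
    flux = np.random.default_rng(0).normal(0.5, 4.0, 240)
    for months in (3, 6, 12):
        print(months, recovery_probability(42.0, flux, months))
    ```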

  32. Efficient Strategies for Estimating the Spatial Coherence of Backscatter

    PubMed Central

    Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.

    2017-01-01

    The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342
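
    The ensemble correlation mentioned above pools the correlation numerator and denominator across all channel pairs before normalizing, rather than averaging per-pair coefficients. A minimal sketch on synthetic zero-mean data:

    ```python
    import numpy as np

    def ensemble_correlation(x, y):
        """Pool covariance and energy over all pairs (rows) before
        normalizing, instead of averaging per-pair coefficients."""
        return np.sum(x * y) / np.sqrt(np.sum(x * x) * np.sum(y * y))

    def average_correlation(x, y):
        """Conventional estimator: normalize each pair, then average."""
        num = np.sum(x * y, axis=1)
        den = np.sqrt(np.sum(x * x, axis=1) * np.sum(y * y, axis=1))
        return float(np.mean(num / den))

    # x[i], y[i]: zero-mean RF kernels from the i-th receive-element pair
    # at one lag (64 pairs, 8-sample kernel; values are synthetic).
    rng = np.random.default_rng(0)
    x = rng.normal(size=(64, 8))
    y = 0.7 * x + 0.5 * rng.normal(size=(64, 8))
    print(ensemble_correlation(x, y), average_correlation(x, y))
    ```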

  33. Battery state-of-charge estimation using approximate least squares

    NASA Astrophysics Data System (ADS)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
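
    A minimal sketch of the re-initialization idea, assuming a hypothetical monotonic OCV-SoC table: the predicted electromotive force is inverted through the table, and Coulomb counting restarts from the resulting SoC. The approximate least squares EMF predictor itself is not reproduced here.

    ```python
    import numpy as np

    # Hypothetical monotonic OCV-SoC table for the cell chemistry.
    soc_grid = np.linspace(0.0, 1.0, 11)
    ocv_grid = np.array([3.00, 3.45, 3.55, 3.62, 3.67, 3.72,
                         3.78, 3.85, 3.93, 4.04, 4.18])

    def soc_from_emf(emf):
        """Invert the OCV curve to re-initialize the Coulomb counter."""
        return float(np.interp(emf, ocv_grid, soc_grid))

    def coulomb_count(soc0, current, dt, capacity_ah):
        """Plain Coulomb counting from the re-initialized SoC; current in A,
        positive on discharge; dt in seconds."""
        return soc0 - np.cumsum(current * dt) / (capacity_ah * 3600.0)

    # Suppose the least-squares stage predicted an EMF of 3.75 V:
    soc0 = soc_from_emf(3.75)
    soc = coulomb_count(soc0, current=np.full(600, 0.5), dt=1.0, capacity_ah=2.6)
    print(round(soc0, 3), round(float(soc[-1]), 3))
    ```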

  34. Improved blood glucose estimation through multi-sensor fusion.

    PubMed

    Xiong, Feiyu; Hipszer, Brian R; Joseph, Jeffrey; Kam, Moshe

    2011-01-01

    Continuous glucose monitoring systems are an integral component of diabetes management. Efforts to improve the accuracy and robustness of these systems are at the forefront of diabetes research. Towards this goal, a multi-sensor approach was evaluated in hospitalized patients. In this paper, we report on a multi-sensor fusion algorithm to combine glucose sensor measurements in a retrospective fashion. The results demonstrate the algorithm's ability to improve the accuracy and robustness of the blood glucose estimation with current glucose sensor technology.
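
    The abstract does not spell out the fusion algorithm; one standard choice for combining simultaneous readings is inverse-variance weighting, sketched below with hypothetical numbers.

    ```python
    import numpy as np

    def fuse(estimates, variances):
        """Inverse-variance weighted combination of simultaneous readings;
        returns the fused estimate and its (reduced) variance."""
        w = 1.0 / np.asarray(variances, dtype=float)
        fused = float(np.sum(w * np.asarray(estimates)) / np.sum(w))
        return fused, float(1.0 / np.sum(w))

    # Three hypothetical glucose readings (mg/dL) with error variances
    # reflecting each sensor's current reliability.
    print(fuse([112.0, 104.0, 120.0], [49.0, 100.0, 225.0]))
    ```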

  35. A Framework of Combining Case-Based Reasoning with a Work Breakdown Structure for Estimating the Cost of Online Course Production Projects

    ERIC Educational Resources Information Center

    He, Wu

    2014-01-01

    Currently, a work breakdown structure (WBS) approach is used as the most common cost estimation approach for online course production projects. To improve the practice of cost estimation, this paper proposes a novel framework to estimate the cost for online course production projects using a case-based reasoning (CBR) technique and a WBS. A…

  36. Cover estimation and payload location using Markov random fields

    NASA Astrophysics Data System (ADS)

    Quach, Tu-Thach

    2014-02-01

    Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random field to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.

  37. An algorithmic approach to crustal deformation analysis

    NASA Technical Reports Server (NTRS)

    Iz, Huseyin Baki

    1987-01-01

    In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.

  38. Use of Multiple Imputation Method to Improve Estimation of Missing Baseline Serum Creatinine in Acute Kidney Injury Research

    PubMed Central

    Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.

    2013-01-01

    Background and objectives: Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved accuracy of estimating missing BCr beyond current recommendations to apply assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m² (eGFR 75). Design, setting, participants, & measurements: From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict likelihood of missing BCr. Propensity scoring identified 6502 patients with highest likelihood of missing BCr among 13,003 patients with known BCr to simulate a "missing" data scenario while preserving actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI were compared with that of eGFR 75. Results: All multiple-imputation methods except the basic one more closely approximated actual BCr than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine) (9.0%) than with eGFR 75 (12.3%; P<0.001). Improvements in misclassification were greater in patients with impaired kidney function (full multiple imputation + serum creatinine) (15.3%) versus eGFR 75 (40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreasing sensitivity relative to eGFR 75. Conclusions: Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980
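
    A minimal sketch of the general technique (not the study's clinical imputation model), using scikit-learn's IterativeImputer with posterior sampling to draw several stochastic completions of a missing baseline-creatinine column; all values are hypothetical.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    # Hypothetical admission-level matrix: [age, sex, admission creatinine,
    # baseline creatinine]; the baseline value (column 3) is often missing.
    X = np.array([[62, 1, 1.4, 1.0],
                  [55, 0, 2.1, np.nan],
                  [71, 1, 0.9, 0.8],
                  [48, 0, 3.0, np.nan],
                  [66, 1, 1.1, 0.9],
                  [59, 0, 1.6, 1.1]])

    # Multiple imputation: draw several stochastic completions rather than
    # a single deterministic fill, then pool across the draws.
    draws = [IterativeImputer(sample_posterior=True,
                              random_state=k).fit_transform(X) for k in range(5)]
    baseline = np.mean([d[:, 3] for d in draws], axis=0)
    print(baseline)
    ```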

  39. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
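
    The distinction at issue is that an output-error estimator simulates the model forward from the measured inputs and fits parameters to the measured output, rather than regressing on noisy measured outputs as equation-error least squares does. A minimal sketch with a simplified first-order top-oil model and synthetic data (not MIT's model):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def simulate(params, load, dt):
        """Forward simulation of a simplified first-order top-oil model:
        tau * dtheta/dt = -theta + k * load^2 (theta = oil rise, deg C)."""
        tau, k = params
        theta = np.zeros(load.size)
        for i in range(1, load.size):
            theta[i] = theta[i - 1] + (dt / tau) * (-theta[i - 1]
                                                    + k * load[i - 1] ** 2)
        return theta

    rng = np.random.default_rng(0)
    dt, load = 60.0, 0.8 + 0.2 * rng.random(500)          # per-unit load
    measured = simulate((5400.0, 40.0), load, dt) + rng.normal(0.0, 0.3, 500)

    # Output error: residuals compare the *simulated* output against the
    # measurement, so output noise does not bias the regressors.
    fit = least_squares(lambda p: simulate(p, load, dt) - measured,
                        x0=(3000.0, 20.0),
                        bounds=([600.0, 1.0], [20000.0, 100.0]))
    print(fit.x)   # recovers roughly tau = 5400 s, k = 40
    ```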

  40. Parameter estimation for a cohesive sediment transport model by assimilating satellite observations in the Hangzhou Bay: Temporal variations and spatial distributions

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu

    2018-01-01

    Model parameters in the suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of the sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, the satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China, including settling velocity, resuspension rate, inflow open boundary conditions and initial conditions. After data assimilation, the model performance is significantly improved. Through several sensitivity experiments, the spatial and temporal variation tendencies of the estimated model parameters are verified to be robust and not affected by model settings. The pattern for the variations of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained using the combination of flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which are related to the grain size of the seabed sediments under different current velocities. Besides, the estimated inflow open boundary conditions reach the local maximum values near the low water slack conditions and the estimated initial conditions are negatively correlated with water depth, which is consistent with the general understanding. The relationships between the estimated parameters and the hydrodynamic fields can be suggestive for improving the parameterization in cohesive sediment transport models.

  41. TEMPORALLY-RESOLVED AMMONIA EMISSION INVENTORIES: CURRENT ESTIMATES, EVALUATION TOOLS, AND MEASUREMENT NEEDS

    EPA Science Inventory

    In this study, we evaluate the suitability of a three-dimensional chemical transport model (CTM) as a tool for assessing ammonia emission inventories, calculate the improvement in CTM performance owing to recent advances in temporally-varying ammonia emission estimates, and ident...

  42. Finite element method modeling to assess Laplacian estimates via novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-08-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts using finite element method modeling. The obtained results suggest that increasing inter-ring distances electrode configurations may decrease the estimation error, resulting in more accurate Laplacian estimates compared to the respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration the estimation error may be decreased more than two-fold, while for the quadripolar configuration a more than six-fold decrease is expected.

  43. Doses and risks from the ingestion of Dounreay fuel fragments.

    PubMed

    Darley, P J; Charles, M W; Fell, T P; Harrison, J D

    2003-01-01

    The radiological implications of ingestion of nuclear fuel fragments present in the marine environment around Dounreay have been reassessed by using the Monte Carlo code MCNP to obtain improved estimates of the doses to target cells in the walls of the lower large intestine resulting from the passage of a fragment. The approach takes account of the reduction in dose due to attenuation within the intestinal wall and self-absorption of radiation in the fuel fragment itself. In addition, dose is calculated on the basis of a realistic estimate of the anatomical volume of the lumen, rather than being based on the average mass of the contents, as in the current ICRP model. Our best estimates of doses from the ingestion of the largest Dounreay particles are at least a factor of 30 lower than those predicted using the current ICRP model. The new ICRP model will address the issues raised here and provide improved estimates of dose.

  44. Tree biomass in the Swiss landscape: nationwide modelling for improved accounting for forest and non-forest trees.

    PubMed

    Price, B; Gomez, A; Mathys, L; Gardi, O; Schellenberger, A; Ginzler, C; Thürig, E

    2017-03-01

    Trees outside forest (TOF) can perform a variety of social, economic and ecological functions including carbon sequestration. However, detailed quantification of tree biomass is usually limited to forest areas. Taking advantage of structural information available from stereo aerial imagery and airborne laser scanning (ALS), this research models tree biomass using national forest inventory data and linear least-square regression and applies the model both inside and outside of forest to create a nationwide model for tree biomass (above ground and below ground). Validation of the tree biomass model against TOF data within settlement areas shows relatively low model performance (R² of 0.44) but still a considerable improvement on current biomass estimates used for greenhouse gas inventory and carbon accounting. We demonstrate an efficient and easily implementable approach to modelling tree biomass across a large heterogeneous nationwide area. The model offers significant opportunity for improved estimates on land use combination categories (CC) where tree biomass has either not been included or only roughly estimated until now. The ALS biomass model also offers the advantage of providing greater spatial resolution and greater within-CC spatial variability compared to the current nationwide estimates.

  45. Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking

    NASA Astrophysics Data System (ADS)

    Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.

    2009-08-01

    The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses the standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies are performed to demonstrate the improved performance with a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique to account for model switching in smoothing is key to improving the performance.
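
    A minimal single-model sketch of the forward-filter/backward-smoother idea is a standard Kalman filter followed by a Rauch-Tung-Striebel (RTS) backward pass; the IMM scheme described above runs one such filter-smoother pair per motion model and mixes the mode-conditioned estimates. The function name and the constant-velocity example below are illustrative assumptions, not the paper's algorithm.

        import numpy as np

        def kalman_rts(zs, F, H, Q, R, x0, P0):
            # Forward Kalman pass, storing filtered and predicted moments.
            xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
            x, P = x0, P0
            for z in zs:
                x_p, P_p = F @ x, F @ P @ F.T + Q           # predict
                K = P_p @ H.T @ np.linalg.inv(H @ P_p @ H.T + R)
                x = x_p + K @ (z - H @ x_p)                 # update
                P = (np.eye(len(x)) - K @ H) @ P_p
                xs_p.append(x_p); Ps_p.append(P_p)
                xs_f.append(x); Ps_f.append(P)
            # Backward RTS pass: improved but delayed (smoothed) estimates.
            n = len(zs)
            xs_s, Ps_s = xs_f[:], Ps_f[:]
            for k in range(n - 2, -1, -1):
                G = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
                xs_s[k] = xs_f[k] + G @ (xs_s[k + 1] - xs_p[k + 1])
                Ps_s[k] = Ps_f[k] + G @ (Ps_s[k + 1] - Ps_p[k + 1]) @ G.T
            return np.array(xs_s)

        # 1-D constant-velocity target, position-only measurements
        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        Q, R = 0.01 * np.eye(2), np.array([[1.0]])
        zs = [np.array([k + np.random.randn()]) for k in range(50)]
        smoothed = kalman_rts(zs, F, H, Q, R, np.zeros(2), np.eye(2))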

  6. Carbon storage and carbon-to-organic matter relationships of three forested ecosystems of the Rocky Mountains

    Treesearch

    Theresa B. Jain

    1994-01-01

    Fluctuations in atmospheric carbon dioxide are influenced by carbon storage and cycling in terrestrial forest ecosystems. Currently, only gross estimates are available for carbon content of these ecosystems and reliable estimates are lacking for Rocky Mountain forests. To improve carbon storage estimates more information is needed on the relationship between carbon and...

  7. Online Detection of Broken Rotor Bar Fault in Induction Motors by Combining Estimation of Signal Parameters via Min-norm Algorithm and Least Square Method

    NASA Astrophysics Data System (ADS)

    Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin

    2017-11-01

    Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on a high-frequency resolution analysis of the stator current. Compared with a discrete Fourier transformation, the parametric spectrum estimation technique has a higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to the large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least square estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem solved through singular value decomposition is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate that the method retains the accuracy of the parametric spectrum estimation technique while remaining efficient enough for online detection.
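
    Once the min-norm stage has supplied the frequencies, the amplitude/phase step is an ordinary linear least-squares problem; a minimal sketch, in which the sampling rate, slip value, sideband frequencies, and synthetic signal are illustrative assumptions (np.linalg.lstsq solves the system via SVD):

        import numpy as np

        def fit_amplitudes_phases(x, fs, freqs):
            # Cosine/sine design matrix at the known frequencies; the SVD-based
            # least-squares solution gives amplitude and phase per component.
            t = np.arange(len(x)) / fs
            cols = []
            for f in freqs:
                cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
            A = np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(A, x, rcond=None)
            amps = np.hypot(coef[0::2], coef[1::2])
            phases = np.arctan2(-coef[1::2], coef[0::2])
            return amps, phases

        # Fundamental at 50 Hz plus BRB sidebands at (1 ± 2s)*50 Hz (s = slip, assumed)
        fs, s = 10_000.0, 0.02
        freqs = [50.0, (1 - 2 * s) * 50.0, (1 + 2 * s) * 50.0]
        t = np.arange(int(fs)) / fs                  # 1 s analysis window
        x = (5.0 * np.cos(2 * np.pi * 50.0 * t)
             + 0.05 * np.cos(2 * np.pi * freqs[1] * t + 0.3))
        amps, phases = fit_amplitudes_phases(x, fs, freqs)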

  8. Model-data integration for developing the Cropland Carbon Monitoring System (CCMS)

    NASA Astrophysics Data System (ADS)

    Jones, C. D.; Bandaru, V.; Pnvr, K.; Jin, H.; Reddy, A.; Sahajpal, R.; Sedano, F.; Skakun, S.; Wagle, P.; Gowda, P. H.; Hurtt, G. C.; Izaurralde, R. C.

    2017-12-01

    The Cropland Carbon Monitoring System (CCMS) has been initiated to improve regional estimates of carbon fluxes from croplands in the conterminous United States through integration of terrestrial ecosystem modeling, use of remote-sensing products and publicly available datasets, and development of improved landscape and management databases. In order to develop these improved carbon flux estimates, experimental datasets are essential for evaluating the skill of estimates, characterizing the uncertainty of these estimates, characterizing parameter sensitivities, and calibrating specific modeling components. Experiments were sought that included flux tower measurement of CO2 fluxes under production of major agronomic crops. To date, data have been collected from 17 experiments comprising 117 site-years from 12 unique locations. Calibration of terrestrial ecosystem model parameters using available crop productivity and net ecosystem exchange (NEE) measurements resulted in improvements in RMSE of NEE predictions of between 3.78% and 7.67%, while improvements in RMSE for yield ranged from -1.85% to 14.79%. Model sensitivities were dominated by parameters related to leaf area index (LAI) and spring growth, demonstrating considerable capacity for model improvement through development and integration of remote-sensing products. Subsequent analyses will assess the impact of such integrated approaches on the skill of cropland carbon flux estimates.

  9. Analytic assessment of Laplacian estimates via novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-08-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. The superiority of tripolar concentric ring electrodes over disc electrodes, in particular in the accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are analytically compared to their constant inter-ring distances counterparts using coefficients of the Taylor series truncation terms. The results suggest that increasing inter-ring distances electrode configurations may decrease the truncation error of the Laplacian estimation, resulting in more accurate Laplacian estimates than the respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration the truncation error may be decreased more than two-fold, while for the quadripolar configuration a more than seven-fold decrease is expected.

  10. Energy consumption characteristics of transports using the prop-fan concept

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The fuel saving and economic potentials of the prop-fan high-speed propeller concept were evaluated for twin-engine commercial transport airplanes designed for 3333.6 km range, 180 passengers, and Mach 0.8 cruise. A fuel saving of 9.7% at the design range was estimated for a prop-fan airplane having wing-mounted engines, while a 5.8% saving was estimated for a design having the engines mounted on the aft body. The fuel savings and cost were found to be sensitive to the propeller noise level and to aerodynamic drag effects due to wing-slipstream interaction. Uncertainties in these effects could change the fuel savings by as much as ±50%. A modest improvement in direct operating cost (DOC) was estimated for the wing-mounted prop-fan at current fuel prices. This improvement could become substantial in the event of further relative increases in the price of oil. The improvement in DOC requires the achievement of the nominal fuel saving and reductions in propeller and gearbox maintenance costs relative to current experience.

  11. Bias adjustment of infrared-based rainfall estimation using Passive Microwave satellite rainfall data

    NASA Astrophysics Data System (ADS)

    Karbalaee, Negar; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan

    2017-04-01

    This study explores using Passive Microwave (PMW) rainfall estimation for spatial and temporal adjustment of Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS). The PERSIANN-CCS algorithm collects information from infrared images to estimate rainfall. PERSIANN-CCS is one of the algorithms used in the Integrated Multisatellite Retrievals for GPM (Global Precipitation Mission) estimation for time periods when PMW rainfall estimates are limited or unavailable. Continued improvement of PERSIANN-CCS will support Integrated Multisatellite Retrievals for GPM for current as well as retrospective estimations of global precipitation. This study takes advantage of the high spatial and temporal resolution of GEO-based PERSIANN-CCS estimation and the more effective, but lower sample frequency, PMW estimation. The Probability Matching Method (PMM) was used to adjust the rainfall distribution of GEO-based PERSIANN-CCS toward that of PMW rainfall estimation. The results show that a significant improvement of global PERSIANN-CCS rainfall estimation is obtained.
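
    The PMM adjustment amounts to matching the cumulative distributions of the two estimates; a minimal quantile-mapping sketch, with the function name and the co-located sample arrays assumed for illustration:

        import numpy as np

        def probability_matching(ccs_values, ccs_sample, pmw_sample):
            # Assign each CCS rain rate the PMW rain rate that sits at the
            # same cumulative probability (CDF matching over co-located data).
            ccs_sorted = np.sort(ccs_sample)
            probs = np.searchsorted(ccs_sorted, ccs_values, side="right") / len(ccs_sorted)
            return np.quantile(np.sort(pmw_sample), np.clip(probs, 0.0, 1.0))

        rng = np.random.default_rng(0)
        ccs_sample = rng.gamma(2.0, 2.0, 5000)       # co-located CCS rain rates (mm/h)
        pmw_sample = rng.gamma(2.0, 2.5, 5000)       # co-located PMW rain rates (mm/h)
        adjusted = probability_matching(ccs_sample[:10], ccs_sample, pmw_sample)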

  12. Assuring Software Cost Estimates: Is it an Oxymoron?

    NASA Technical Reports Server (NTRS)

    Hihn, Jarius; Tregre, Grant

    2013-01-01

    The software industry repeatedly observes cost growth of well over 100% even after decades of cost estimation research and well-known best practices, so "What's the problem?" In this paper we provide an overview of the current state of software cost estimation best practice. We then explore whether applying some of the methods used in software assurance might improve the quality of software cost estimates. This paper especially focuses on issues associated with model calibration, estimate review, and the development and documentation of estimates as part of an integrated plan.

  13. Enhancing the Benefit of the Chemical Mixture Methodology: A Report on Methodology Testing and Potential Approaches for Improving Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Xiao-Ying; Yao, Juan; He, Hua

    2012-01-01

    Extensive testing shows that the current version of the Chemical Mixture Methodology (CMM) is meeting its intended mission to provide conservative estimates of the health effects from exposure to airborne chemical mixtures. However, the current version of the CMM could benefit from several enhancements that are designed to improve its application of Health Code Numbers (HCNs) and employ weighting factors to reduce over-conservatism.

  14. Improving Empirical Approaches to Estimating Local Greenhouse Gas Emissions

    NASA Astrophysics Data System (ADS)

    Blackhurst, M.; Azevedo, I. L.; Lattanzi, A.

    2016-12-01

    Evidence increasingly indicates our changing climate will have significant global impacts on public health, economies, and ecosystems. As a result, local governments have become increasingly interested in climate change mitigation. In the U.S., cities and counties representing nearly 15% of the domestic population plan to reduce 300 million metric tons of greenhouse gases over the next 40 years (or approximately 1 ton per capita). Local governments estimate greenhouse gas emissions to establish greenhouse gas mitigation goals and select supporting mitigation measures. However, current practices produce greenhouse gas estimates (also known as a "greenhouse gas inventory") of empirical quality often insufficient for robust mitigation decision making. Namely, current mitigation planning uses sporadic, annual, and deterministic estimates disaggregated by broad end use sector, obscuring sources of emissions uncertainty, variability, and exogeneity that influence mitigation opportunities. As part of AGU's Thriving Earth Exchange, Ari Lattanzi of City of Pittsburgh, PA recently partnered with Dr. Inez Lima Azevedo (Carnegie Mellon University) and Dr. Michael Blackhurst (University of Pittsburgh) to improve the empirical approach to characterizing Pittsburgh's greenhouse gas emissions. The project will produce first-order estimates of the underlying sources of uncertainty, variability, and exogeneity influencing Pittsburgh's greenhouse gases and discuss implications of mitigation decision making. The results of the project will enable local governments to collect more robust greenhouse gas inventories to better support their mitigation goals and improve measurement and verification efforts.

  15. Estimating cropland NPP using national crop inventory and MODIS derived crop specific parameters

    NASA Astrophysics Data System (ADS)

    Bandaru, V.; West, T. O.; Ricciuto, D. M.

    2011-12-01

    Estimates of cropland net primary production (NPP) are needed as input for estimates of carbon flux and carbon stock changes. Cropland NPP is currently estimated using terrestrial ecosystem models, satellite remote sensing, or inventory data. All three of these methods have benefits and problems. Terrestrial ecosystem models are often better suited for prognostic estimates rather than diagnostic estimates. Satellite-based NPP estimates often underestimate productivity on intensely managed croplands and are also limited to a few broad crop categories. Inventory-based estimates are consistent with nationally collected data on crop yields, but they lack sub-county spatial resolution. Integrating these methods will allow for spatial resolution consistent with current land cover and land use, while also maintaining total biomass quantities recorded in national inventory data. The main objective of this study was to improve cropland NPP estimates by using a modification of the CASA NPP model with individual crop biophysical parameters partly derived from inventory data and the MODIS 8-day 250 m EVI product. The study was conducted for corn and soybean crops in Iowa and Illinois for years 2006 and 2007. We represented fPAR as a linear function of EVI, and used crop land cover data (56 m spatial resolution) to extract individual-crop EVI pixels. First, we separated mixed pixels of both corn and soybean that occur when a MODIS 250 m pixel contains more than one crop. Second, we substituted mixed EVI pixels with the nearest pure pixel values of the same crop within a 1 km radius. To get more accurate photosynthetically active radiation (PAR), we applied the Mountain Climate Simulator (MTCLIM) algorithm with the use of temperature and precipitation data from the North American Land Data Assimilation System (NLDAS-2) to generate shortwave radiation data. Finally, county-specific light use efficiency (LUE) values of each crop for years 2006 to 2007 were determined by applying mean county inventory NPP and EVI-derived APAR to the Monteith equation. Results indicate spatial variability in LUE values across Iowa and Illinois. Northern regions of both Iowa and Illinois have higher LUE values than southern regions. This trend is reflected in NPP estimates. Results also show that corn has higher LUE values than soybean, resulting in higher NPP for corn than for soybean. Current NPP estimates were compared with NPP estimates from the MOD17A3 product and with county inventory-based NPP estimates. Results indicate that current NPP estimates closely agree with inventory-based estimates, and that current NPP estimates are higher than those of the MOD17A3 product. It was also found that when mixed pixels were substituted with nearest pure pixels, revised NPP estimates were improved, showing better agreement with inventory-based estimates.
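
    The final calibration step inverts the Monteith equation, NPP = LUE × fPAR × PAR; a minimal per-county sketch, with the linear fPAR-EVI coefficients, units, and array names assumed for illustration:

        import numpy as np

        def county_lue(npp_inventory, evi, par, a=1.25, b=-0.1):
            # fPAR taken as a linear function of EVI (a, b are placeholders);
            # APAR is summed over the 8-day compositing periods, then the
            # Monteith equation NPP = LUE * APAR is inverted for LUE.
            fpar = np.clip(a * evi + b, 0.0, 1.0)
            apar = np.sum(fpar * par)                # seasonal APAR (MJ/m^2)
            return npp_inventory / apar              # LUE (g C / MJ)

        # Inventory NPP (g C/m^2/yr); EVI and PAR per 8-day period (placeholders)
        lue = county_lue(npp_inventory=900.0,
                         evi=np.linspace(0.2, 0.6, 23),
                         par=np.full(23, 80.0))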

  16. Global access to safe water: accounting for water quality and the resulting impact on MDG progress.

    PubMed

    Onda, Kyle; LoBuglio, Joe; Bartram, Jamie

    2012-03-01

    Monitoring of progress towards the Millennium Development Goal (MDG) drinking water target relies on classification of water sources as "improved" or "unimproved" as an indicator for water safety. We adjust the current Joint Monitoring Programme (JMP) estimate by accounting for microbial water quality and sanitary risk using the only nationally representative water quality data currently available, that from the WHO and UNICEF "Rapid Assessment of Drinking Water Quality". A principal components analysis (PCA) of national environmental and development indicators was used to create models that predicted, for most countries, the proportions of piped and of other-improved water supplies that are faecally contaminated; and of these sources, the proportions that lack basic sanitary protection against contamination. We estimate that 1.8 billion people (28% of the global population) used unsafe water in 2010. The 2010 JMP estimate is that 783 million people (11%) use unimproved sources. Our estimates revise the 1990 baseline from 23% to 37%, and the target from 12% to 18%, resulting in a shortfall of 10% of the global population towards the MDG target in 2010. In contrast, using the indicator "use of an improved source" suggests that the MDG target for drinking-water has already been achieved. We estimate that an additional 1.2 billion (18%) use water from sources or systems with significant sanitary risks. While our estimate is imprecise, the magnitude of the estimate and the health and development implications suggest that greater attention is needed to better understand and manage drinking water safety.
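
    A minimal sketch of the PCA-plus-regression idea: reduce correlated national indicators to a few components and regress the contaminated proportion on them. The indicator matrix, component count, and placeholder data are assumptions, not the study's actual model.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Rows: countries; columns: environmental/development indicators.
        rng = np.random.default_rng(0)
        X = rng.random((40, 12))                     # placeholder indicators
        y = rng.random(40)                           # placeholder contaminated proportions

        model = make_pipeline(StandardScaler(), PCA(n_components=3),
                              LinearRegression())
        model.fit(X, y)
        # Predicted contaminated proportions for countries lacking surveys,
        # clipped to the valid [0, 1] range.
        y_hat = np.clip(model.predict(X), 0.0, 1.0)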

  17. Estimating organ doses from tube current modulated CT examinations using a generalized linear model.

    PubMed

    Bostani, Maryam; McMillan, Kyle; Lu, Peiyun; Kim, Grace Hyun J; Cody, Dianna; Arbique, Gary; Greenberg, S Bruce; DeMarco, John J; Cagnon, Chris H; McNitt-Gray, Michael F

    2017-04-01

    Currently available Computed Tomography dose metrics are mostly based on fixed tube current Monte Carlo (MC) simulations and/or physical measurements such as the size specific dose estimate (SSDE). In addition to not being able to account for Tube Current Modulation (TCM), these dose metrics do not represent actual patient dose. The purpose of this study was to generate and evaluate a dose estimation model based on the Generalized Linear Model (GLM), which extends the ability to estimate organ dose from tube current modulated examinations by incorporating regional descriptors of patient size, scanner output, and other scan-specific variables as needed. The collection of a total of 332 patient CT scans at four different institutions was approved by each institution's IRB and used to generate and test organ dose estimation models. The patient population consisted of pediatric and adult patients and included thoracic and abdomen/pelvis scans. The scans were performed on three different CT scanner systems. Manual segmentation of organs, depending on the examined anatomy, was performed on each patient's image series. In addition to the collected images, detailed TCM data were collected for all patients scanned on Siemens CT scanners, while for all GE and Toshiba patients, data representing z-axis-only TCM, extracted from the DICOM header of the images, were used for TCM simulations. A validated MC dosimetry package was used to perform detailed simulation of CT examinations on all 332 patient models to estimate dose to each segmented organ (lungs, breasts, liver, spleen, and kidneys), denoted as reference organ dose values. Approximately 60% of the data were used to train a dose estimation model, while the remaining 40% was used to evaluate performance. Two different methodologies were explored using GLM to generate a dose estimation model: (a) using the conventional exponential relationship between normalized organ dose and size with regional water equivalent diameter (WED) and regional CTDIvol as variables and (b) using the same exponential relationship with the addition of categorical variables such as scanner model and organ to provide a more complete estimate of factors that may affect organ dose. Finally, estimates from generated models were compared to those obtained from SSDE and ImPACT. The Generalized Linear Model yielded organ dose estimates that were significantly closer to the MC reference organ dose values than were organ doses estimated via SSDE or ImPACT. Moreover, the GLM estimates were better than those of SSDE or ImPACT irrespective of whether or not categorical variables were used in the model. While the improvement associated with a categorical variable was substantial in estimating breast dose, the improvement was minor for other organs. The GLM approach extends the current CT dose estimation methods by allowing the use of additional variables to more accurately estimate organ dose from TCM scans. Thus, this approach may be able to overcome the limitations of current CT dose metrics to provide more accurate estimates of patient dose, in particular, dose to organs with considerable variability across the population. © 2017 American Association of Physicists in Medicine.
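
    The conventional exponential size relationship in methodology (a) becomes linear on a log scale; the sketch below fits it with a log-transformed ordinary least-squares stand-in for the paper's GLM, with variable names (wed for regional water-equivalent diameter, ctdi for regional CTDIvol) and the synthetic data assumed for illustration.

        import numpy as np

        def fit_exponential_dose_model(organ_dose, ctdi, wed):
            # Fit organ_dose / CTDIvol = exp(a) * exp(b * WED): taking logs
            # turns the exponential size relationship into a linear fit.
            y = np.log(organ_dose / ctdi)
            b, a = np.polyfit(wed, y, 1)             # slope b, intercept a
            return a, b

        def predict_organ_dose(a, b, ctdi, wed):
            return ctdi * np.exp(a + b * wed)

        # Placeholder training data: reference doses (mGy), CTDIvol (mGy), WED (cm)
        wed = np.array([18.0, 22.0, 26.0, 30.0, 34.0])
        ctdi = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
        dose = ctdi * 1.9 * np.exp(-0.04 * wed)      # synthetic exponential data
        a, b = fit_exponential_dose_model(dose, ctdi, wed)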

  18. Improved Satellite Estimation of Near-Surface Humidity Using Vertical Water Vapor Profile Information

    NASA Astrophysics Data System (ADS)

    Tomita, H.; Hihara, T.; Kubota, M.

    2018-01-01

    Near-surface air-specific humidity is a key variable in the estimation of air-sea latent heat flux and evaporation from the ocean surface. An accurate estimation over the global ocean is required for studies on global climate, air-sea interactions, and water cycles. Current remote sensing techniques are problematic and a major source of errors for flux and evaporation. Here we propose a new method to estimate surface humidity using satellite microwave radiometer instruments, based on a new finding about the relationship between multichannel brightness temperatures measured by satellite sensors, surface humidity, and vertical moisture structure. Satellite estimations using the new method were compared with in situ observations to evaluate this method, confirming that it could significantly improve satellite estimations with high impact on satellite estimation of latent heat flux. We recommend the adoption of this method for any satellite microwave radiometer observations.

  19. Sixth Annual Flight Mechanics/Estimation Theory Symposium

    NASA Technical Reports Server (NTRS)

    Lefferts, E. (Editor)

    1981-01-01

    Methods of orbital position estimation were reviewed. The problem of accuracy in orbital mechanics is discussed and various techniques in current use are presented along with suggested improvements. Of special interest is the compensation for bias in satellite-borne instruments due to attitude instabilities. Image processing and correctional techniques are reported for geodetic measurements and mapping.

  20. Series resistance compensation for whole-cell patch-clamp studies using a membrane state estimator

    PubMed Central

    Sherman, AJ; Shrier, A; Cooper, E

    1999-01-01

    Whole-cell patch-clamp techniques are widely used to measure membrane currents from isolated cells. While suitable for a broad range of ionic currents, the series resistance (Rs) of the recording pipette limits the bandwidth of the whole-cell configuration, making it difficult to measure rapid ionic currents. To increase bandwidth, it is necessary to compensate for Rs. Most methods of Rs compensation become unstable at high bandwidth, making them hard to use. We describe a novel method of Rs compensation that overcomes the stability limitations of standard designs. This method uses a state estimator, implemented with analog computation, to compute the membrane potential, Vm, which is then used in a feedback loop to implement a voltage clamp; we refer to this as state estimator Rs compensation. To demonstrate the utility of this approach, we built an amplifier incorporating state estimator Rs compensation. In benchtop tests, our amplifier showed significantly higher bandwidths and improved stability when compared with a commercially available amplifier. We demonstrated that state estimator Rs compensation works well in practice by recording voltage-gated Na+ currents under voltage-clamp conditions from dissociated neonatal rat sympathetic neurons. We conclude that state estimator Rs compensation should make it easier to measure large rapid ionic currents with whole-cell patch-clamp techniques. PMID:10545359
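
    In discrete time, the estimator can be sketched as a model-based observer that integrates a simple Rs-Rm-Cm pipette/membrane model and corrects it with the measured pipette current; the parameter values, gain, and the zero measured current in the demo are placeholders (the paper's implementation is analog):

        import numpy as np

        def estimate_vm(v_cmd, i_meas, rs, rm, cm, dt, gain=0.5):
            # Observer for membrane potential Vm. Model:
            #   cm * dVm/dt = (Vp - Vm)/rs - Vm/rm,  pipette current i = (Vp - Vm)/rs
            # The innovation (measured minus predicted current) corrects Vm.
            vm = np.zeros(len(v_cmd))
            for k in range(1, len(v_cmd)):
                i_pred = (v_cmd[k - 1] - vm[k - 1]) / rs
                dvm = (i_pred - vm[k - 1] / rm) / cm
                vm[k] = (vm[k - 1] + dt * dvm
                         + gain * rs * (i_meas[k - 1] - i_pred))
            return vm

        t = np.arange(0.0, 0.05, 1e-5)               # 10 us steps
        v_cmd = np.where(t > 0.01, -0.02, -0.07)     # step from -70 mV to -20 mV
        vm_hat = estimate_vm(v_cmd, np.zeros_like(t),
                             rs=10e6, rm=500e6, cm=30e-12, dt=1e-5)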

  1. The potential for improving remote primary productivity estimates through subsurface chlorophyll and irradiance measurement

    NASA Astrophysics Data System (ADS)

    Jacox, Michael G.; Edwards, Christopher A.; Kahru, Mati; Rudnick, Daniel L.; Kudela, Raphael M.

    2015-02-01

    A 26-year record of depth-integrated primary productivity (PP) in the Southern California Current System (SCCS) is analyzed with the goal of improving satellite net primary productivity estimates. Modest improvements in PP model performance are achieved by tuning existing algorithms for the SCCS, particularly by parameterizing carbon fixation rate in the vertically generalized production model as a function of surface chlorophyll concentration and distance from shore. Much larger improvements are enabled by improving the accuracy of subsurface chlorophyll and light profiles. In a simple vertically resolved production model for the SCCS (VRPM-SC), substitution of in situ surface data for remote sensing estimates offers only marginal improvements in model r² (from 0.54 to 0.56) and total log10 root mean squared difference (from 0.22 to 0.21), while inclusion of in situ chlorophyll and light profiles improves these metrics to 0.77 and 0.15, respectively. Autonomous underwater gliders, capable of measuring subsurface properties on long-term, long-range deployments, significantly improve PP model fidelity in the SCCS. We suggest their use (and that of other autonomous profilers such as Argo floats) in conjunction with satellites as a way forward for large-scale improvements in PP estimation.
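
    A vertically resolved estimate of this kind integrates the product of a photosynthesis-light response and the chlorophyll profile over depth; a minimal sketch, in which the exponential light decay, the saturating P-E response, and all parameter values are assumptions rather than the VRPM-SC formulation:

        import numpy as np

        def vertically_resolved_pp(chl_z, par0, kd, z, pb_opt=4.0, e_k=5.0):
            # Depth-integrated production: PP = sum over z of Pb(E(z)) * chl(z).
            # Light decays as E(z) = par0 * exp(-kd * z); the P-E response is a
            # saturating curve with placeholder parameters pb_opt and e_k.
            e_z = par0 * np.exp(-kd * z)
            pb = pb_opt * (1.0 - np.exp(-e_z / e_k))
            dz = z[1] - z[0]
            return float(np.sum(pb * chl_z) * dz)    # mg C m^-2 d^-1 (nominal)

        z = np.linspace(0.0, 100.0, 101)             # depth grid (m)
        pp = vertically_resolved_pp(np.full_like(z, 0.3), par0=40.0, kd=0.06, z=z)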

  2. Wind Plant Preconstruction Energy Estimates. Current Practice and Opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clifton, Andrew; Smith, Aaron; Fields, Michael

    2016-04-19

    Understanding the amount of energy that will be harvested by a wind power plant each year and the variability of that energy is essential to assessing and potentially improving the financial viability of that power plant. The preconstruction energy estimate process predicts the amount of energy, with uncertainty estimates, that a wind power plant will deliver to the point of revenue. This report describes the preconstruction energy estimate process from a technical perspective and seeks to provide insight into the financial implications associated with each step.

  3. Mass-improvement of the vector current in three-flavor QCD

    NASA Astrophysics Data System (ADS)

    Fritzsch, P.

    2018-06-01

    We determine two improvement coefficients which are relevant to cancel mass-dependent cutoff effects in correlation functions with operator insertions of the non-singlet local QCD vector current. This determination is based on degenerate three-flavor QCD simulations of non-perturbatively O(a) improved Wilson fermions with tree-level improved gauge action. Employing a very robust strategy that has been pioneered in the quenched approximation leads to an accurate estimate of a counterterm cancelling dynamical quark cutoff effects linear in the trace of the quark mass matrix. To our knowledge this is the first time that such an effect has been determined systematically with large significance.

  4. High-precision radiometric tracking for planetary approach and encounter in the inner solar system

    NASA Technical Reports Server (NTRS)

    Christensen, C. S.; Thurman, S. W.; Davidson, J. M.; Finger, M. H.; Folkner, W. M.

    1989-01-01

    The benefits of improved radiometric tracking data have been studied for planetary approach within the inner Solar System using the Mars Rover Sample Return trajectory as a model. It was found that the benefit of improved data to approach and encounter navigation was highly dependent on the a priori uncertainties assumed for several non-estimated parameters, including those for frame-tie, Earth orientation, troposphere delay, and station locations. With these errors at their current levels, navigational performance was found to be insensitive to enhancements in data accuracy. However, when expected improvements in these errors are modeled, performance with current-accuracy data significantly improves, with substantial further improvements possible with enhancements in data accuracy.

  5. Constraining the range of Yukawa gravity interaction from S2 star orbits III: improvement expectations for graviton mass bounds

    NASA Astrophysics Data System (ADS)

    Zakharov, A. F.; Jovanović, P.; Borka, D.; Borka Jovanović, V.

    2018-04-01

    Recently, the LIGO-Virgo collaboration discovered gravitational waves, and in their first publication on the subject the authors also presented a graviton mass constraint, mg < 1.2 × 10^-22 eV [1] (see also more details in a complementary paper [2]). In our previous papers we considered constraints on Yukawa gravity parameters [3] and on graviton mass from analysis of the trajectory of the S2 star near the Galactic Center [4]. In this paper we analyze the potential to reduce upper bounds on graviton mass with future observational data on trajectories of bright stars near the Galactic Center. Since the gravitational potentials differ in the two cases, the expressions for the relativistic advance in general relativity and in Yukawa gravity are different functions of eccentricity and semimajor axis, which gives an opportunity to improve current estimates of graviton mass with future observational facilities. In assessing the potential for improving a graviton mass estimate we adopt a conservative strategy and assume that the trajectories of bright stars and their apocenter advance will be described by general relativity expressions, which gives opportunities to improve graviton mass constraints. In contrast with our previous studies, where we presented current constraints on parameters of Yukawa gravity [5] and graviton mass [6] from observations of the S2 star, in this paper we present expectations for improving the current constraints on graviton mass, assuming the GR predictions about apocenter shifts will be confirmed with future observations. We conclude that if future observations of bright star orbits over around fifty years confirm the GR predictions about apocenter shifts, they will provide an opportunity to constrain the graviton mass at a level around 5 × 10^-23 eV, slightly better than the current estimates obtained with LIGO observations.

  6. Estimating tag loss of the Atlantic Horseshoe crab, Limulus polyphemus, using a multi-state model

    USGS Publications Warehouse

    Butler, Catherine Alyssa; McGowan, Conor P.; Grand, James B.; Smith, David

    2012-01-01

    The Atlantic Horseshoe crab, Limulus polyphemus, is a valuable resource along the Mid-Atlantic coast which has, in recent years, experienced new management paradigms due to increased concern about this species' role in the environment. While current management actions are underway, many acknowledge the need for improved and updated parameter estimates to reduce the uncertainty within the management models. Specifically, updated and improved estimates of demographic parameters such as adult crab survival in the regional population of interest, Delaware Bay, could greatly enhance these models and improve management decisions. There is, however, some concern that difficulties in tag resighting or complete loss of tags could be occurring. As apparent from the assumptions of a Jolly-Seber model, loss of tags can bias survival estimates downward. Given that uncertainty, as a first step toward an unbiased estimate of adult survival, we first estimated the rate of tag loss. Using data from a double-tag mark-resight study conducted in Delaware Bay and Program MARK, we designed a multi-state model that allows the loss of each tag to be estimated separately and simultaneously.

  7. Impacts of phenology on estimation of actual evapotranspiration with VegET model

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Henebry, G. M.

    2009-12-01

    The VegET model provides spatially explicit estimation of actual evapotranspiration (AET). Currently, it uses a climatology based on AVHRR NDVI image time series to modulate fluxes during growing seasons (Senay 2008). This step simplifies the model formulation, but it also introduces errors by ignoring the interannual variation in phenology. We report on a study to evaluate the effects of using an NDVI climatology in VegET rather than current-season values. Using flux tower data from three sites across the US Corn Belt, we found that the model currently overestimates the duration of the growing season. With a standard deviation of more than one week, this results in an additional 50 to 70 mm of AET per season, which can account for about 10% of seasonal AET at the drier western sites. The model showed only modest sensitivity to variation in growing season weather. This lack of sensitivity greatly decreased model accuracy during drought years: Pearson correlation coefficients between model estimates and observed values dropped from about 0.7 to 0.5, depending on vegetation type. We also evaluated an alternative approach to drive the canopy component of evapotranspiration, the Event Driven Phenology Model (EDPM). Parameterizing VegET with EDPM-simulated canopy dynamics improved the correlation by 0.1 or more and reduced the RMSE of daily AET estimates by 0.3 mm. By accounting for the progress of phenology during a particular growing season, the EDPM improves AET estimation over an NDVI climatology.

  8. Remotely Piloted Vehicle (RPV): Proposed command, control, communications (C3) structure

    NASA Technical Reports Server (NTRS)

    Hughes, R. L.; Evans, W. K.; Howard, W. G.; Wallace, A. S.

    1982-01-01

    The currently proposed command, control, and communications (C3) structure associated with the RPV system and potential problem areas in the transfer of information to and from the RPV system were identified, along with options for improving information transfer and estimates of the degree of improvement to be expected.

  9. Ring Current Pressure Estimation with RAM-SCB using Data Assimilation and Van Allen Probe Flux Data

    NASA Astrophysics Data System (ADS)

    Godinez, H. C.; Yu, Y.; Henderson, M. G.; Larsen, B.; Jordanova, V.

    2015-12-01

    Capturing and subsequently modeling the influence of tail plasma injections on the inner magnetosphere is particularly important for understanding the formation and evolution of Earth's ring current. In this study, the ring current distribution is estimated with the Ring Current-Atmosphere Interactions Model with Self-Consistent Magnetic field (RAM-SCB) using, for the first time, data assimilation techniques and particle flux data from the Van Allen Probes. The state of the ring current within the RAM-SCB is corrected via an ensemble based data assimilation technique by using proton flux from one of the Van Allen Probes, to capture the enhancement of the ring current following an isolated substorm event on July 18, 2013. The results show significant improvement in the estimation of the ring current particle distributions in the RAM-SCB model, leading to better agreement with observations. This newly implemented data assimilation technique in the global modeling of the ring current thus provides a promising tool to better characterize the effect of substorm injections in the near-Earth regions. The work is part of the Space Hazards Induced near Earth by Large, Dynamic Storms (SHIELDS) project at Los Alamos National Laboratory.
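
    The analysis step of an ensemble-based assimilation scheme of this kind fits in a few lines; the perturbed-observation EnKF form and all names below are generic assumptions, not the RAM-SCB implementation.

        import numpy as np

        def enkf_update(X, y, H, R, rng=np.random.default_rng(0)):
            # Perturbed-observation EnKF analysis step.
            # X: state ensemble (n_state x n_members); y: observed fluxes;
            # H: observation operator; R: observation error covariance.
            n = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
            HX = H @ X
            HA = HX - HX.mean(axis=1, keepdims=True)
            Pyy = HA @ HA.T / (n - 1) + R
            Pxy = A @ HA.T / (n - 1)
            K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(
                np.zeros(len(y)), R, size=n).T            # perturbed observations
            return X + K @ (Y - HX)                       # analysis ensemble

        # Toy demo: 50 state variables, 20 members, 5 observed quantities
        X = np.random.default_rng(1).normal(size=(50, 20))
        H = np.zeros((5, 50)); H[np.arange(5), np.arange(5)] = 1.0
        Xa = enkf_update(X, y=np.zeros(5), H=H, R=0.1 * np.eye(5))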

  10. Improving Factor Score Estimation Through the Use of Observed Background Characteristics

    PubMed Central

    Curran, Patrick J.; Cole, Veronica; Bauer, Daniel J.; Hussong, Andrea M.; Gottfredson, Nisha

    2016-01-01

    A challenge facing nearly all studies in the psychological sciences is how to best combine multiple items into a valid and reliable score to be used in subsequent modelling. The most ubiquitous method is to compute a mean of items, but more contemporary approaches use various forms of latent score estimation. Regardless of approach, outside of large-scale testing applications, scoring models rarely include background characteristics to improve score quality. The current paper used a Monte Carlo simulation design to study score quality for different psychometric models that did and did not include covariates across levels of sample size, number of items, and degree of measurement invariance. The inclusion of covariates improved score quality for nearly all design factors, and in no case did the covariates degrade score quality relative to not considering the influences at all. Results suggest that the inclusion of observed covariates can improve factor score estimation. PMID:28757790

  11. Improving the Manager’s Ability to Identify Alternative Technologies.

    DTIC Science & Technology

    1980-03-01

    [OCR fragments from the report: a list of tables and partial sentences. Recoverable content: Table 1 depicts estimated sources of funds for R&D by broad industry classes in 1980; another table lists "Characteristics of the Prospector"; and the Exchange is described as the major national source for unclassified information on current and recently completed research in progress.]

  12. Estimating rainfall time series and model parameter distributions using model data reduction and inversion techniques

    NASA Astrophysics Data System (ADS)

    Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.

    2017-08-01

    Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated with an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
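
    A minimal sketch of the DWT-based dimensionality reduction with PyWavelets: decompose the rainfall series, retain only the coarse coefficients as the unknowns to invert for, and reconstruct a candidate series. The wavelet choice, decomposition level, and synthetic data are assumptions.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        rain = rng.gamma(0.3, 8.0, 1024)             # synthetic daily rainfall (mm)

        # Decompose: wavelet coefficients replace day-by-day rainfall unknowns.
        coeffs = pywt.wavedec(rain, "db4", level=6)

        # Reduced parameter set: approximation plus the two coarsest detail
        # bands; finer bands are zeroed, shrinking the inversion search space.
        reduced = coeffs[:3] + [np.zeros_like(c) for c in coeffs[3:]]

        # Reconstruct a candidate rainfall series (non-negativity enforced).
        rain_hat = np.maximum(pywt.waverec(reduced, "db4"), 0.0)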

  13. Innovation in health economic modelling of service improvements for longer-term depression: demonstration in a local health community.

    PubMed

    Tosh, Jonathan; Kearns, Ben; Brennan, Alan; Parry, Glenys; Ricketts, Thomas; Saxon, David; Kilgarriff-Foster, Alexis; Thake, Anna; Chambers, Eleni; Hutten, Rebecca

    2013-04-26

    The purpose of the analysis was to develop a health economic model to estimate the costs and health benefits of alternative National Health Service (NHS) service configurations for people with longer-term depression. Modelling methods were used to develop a conceptual and health economic model of the current configuration of services in Sheffield, England for people with longer-term depression. Data and assumptions were synthesised to estimate cost per Quality Adjusted Life Year (QALY). Three service changes were developed and resulted in increased QALYs at increased cost. Versus current care, the incremental cost-effectiveness ratio (ICER) for a self-referral service was £11,378 per QALY. The ICER was £2,227 per QALY for the dropout reduction service and £223 per QALY for an increase in non-therapy services. These results were robust when compared to current cost-effectiveness thresholds and when accounting for uncertainty. Cost-effective service improvements for longer-term depression have been identified. Also identified were limitations of the current evidence for the long-term impact of services.

  14. A framework for improving the cost-effectiveness of DSM program evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonnenblick, R.; Eto, J.

    The prudence of utility demand-side management (DSM) investments hinges on their performance, yet evaluating performance is complicated because the energy saved by DSM programs can never be observed directly but only inferred. This study frames and begins to answer the following questions: (1) how well do current evaluation methods perform in improving confidence in the measurement of energy savings produced by DSM programs; (2) in view of this performance, how can limited evaluation resources be best allocated to maximize the value of the information they provide? The authors review three major classes of methods for estimating annual energy savings: tracking database (sometimes called engineering estimates), end-use metering, and billing analysis, and examine them in light of the uncertainties in current estimates of DSM program measure lifetimes. The authors assess the accuracy and precision of each method and construct trade-off curves to examine the costs of increases in accuracy or precision. Several approaches for improving evaluations for the purpose of assessing program cost effectiveness are demonstrated. The methods can be easily generalized to other evaluation objectives, such as shared savings incentive payments.

  15. A real-time recursive filter for the attitude determination of the Spacelab instrument pointing subsystem

    NASA Technical Reports Server (NTRS)

    West, M. E.

    1992-01-01

    A real-time estimation filter which reduces sensitivity to system variations and reduces the amount of preflight computation is developed for the instrument pointing subsystem (IPS). The IPS is a three-axis stabilized platform developed to point various astronomical observation instruments aboard the shuttle. Currently, the IPS utilizes a linearized Kalman filter (LKF), with premission defined gains, to compensate for system drifts and accumulated attitude errors. Since the a priori gains are generated for an expected system, variations result in a suboptimal estimation process. This report compares the performance of three real-time estimation filters with the current LKF implementation. An extended Kalman filter and a second-order Kalman filter are developed to account for the system nonlinearities, while a linear Kalman filter implementation assumes that the nonlinearities are negligible. The performance of each of the four estimation filters is compared with respect to accuracy, stability, settling time, robustness, and computational requirements. It is shown that, for the current IPS pointing requirements, the linear Kalman filter provides improved robustness over the LKF with lower computational requirements than the two real-time nonlinear estimation filters.

  16. A Secure Trust Establishment Scheme for Wireless Sensor Networks

    PubMed Central

    Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob

    2014-01-01

    Trust establishment is an important tool to improve cooperation and enhance security in wireless sensor networks. The core of trust establishment is trust estimation. If a trust estimation method is not robust against attack and misbehavior, the trust values produced will be meaningless, and system performance will be degraded. We present a novel trust estimation method that is robust against on-off attacks and persistent malicious behavior. Moreover, in order to aggregate recommendations securely, we propose using a modified one-step M-estimator scheme. The novelty of the proposed scheme arises from combining past misbehavior with current status in a comprehensive way. Specifically, we introduce an aggregated misbehavior component in trust estimation, which assists in detecting an on-off attack and persistent malicious behavior. In order to determine the current status of the node, we employ previous trust values and current measured misbehavior components. These components are combined to obtain a robust trust value. Theoretical analyses and evaluation results show that our scheme performs better than other trust schemes in terms of detecting an on-off attack and persistent misbehavior. PMID:24451471

  17. Improved methods to estimate the effective impervious area in urban catchments using rainfall-runoff data

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Ali; Wilson, Bruce N.; Gulliver, John S.

    2016-05-01

    Impervious surfaces are useful indicators of the urbanization impacts on water resources. Effective impervious area (EIA), which is the portion of total impervious area (TIA) that is hydraulically connected to the drainage system, is a better catchment parameter in the determination of actual urban runoff. Development of reliable methods for quantifying EIA rather than TIA is currently one of the knowledge gaps in the rainfall-runoff modeling context. The objective of this study is to improve the rainfall-runoff data analysis method for estimating EIA fraction in urban catchments by eliminating the subjective part of the existing method and by reducing the uncertainty of EIA estimates. First, the theoretical framework is generalized using a general linear least square model and using a general criterion for categorizing runoff events. Issues with the existing method that reduce the precision of the EIA fraction estimates are then identified and discussed. Two improved methods, based on ordinary least square (OLS) and weighted least square (WLS) estimates, are proposed to address these issues. The proposed weighted least squares method is then applied to eleven urban catchments in Europe, Canada, and Australia. The results are compared to map-measured directly connected impervious area (DCIA) and are shown to be consistent with DCIA values. In addition, both of the improved methods are applied to nine urban catchments in Minnesota, USA. Both methods were successful in removing the subjective component inherent in the analysis of rainfall-runoff data of the current method. The WLS method is more robust than the OLS method and generates results that are different and more precise than the OLS method in the presence of heteroscedastic residuals in our rainfall-runoff data.
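
    The core computation is a weighted least-squares fit over the qualifying rainfall-runoff events; a simplified sketch in which event runoff depth is regressed on rainfall depth and the slope is read as the EIA fraction (the event categorization, intercept handling, and variance model behind the weights are assumptions):

        import numpy as np

        def eia_fraction_wls(rain, runoff, weights):
            # WLS fit of runoff = b0 + f_eia * rain over EIA-dominated events;
            # the slope approximates the effective impervious fraction.
            X = np.column_stack([np.ones_like(rain), rain])
            W = np.diag(weights)                     # e.g. inverse variance per event
            beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ runoff)
            return beta[1]

        # Placeholder event depths (mm); weights shrink with event size here
        rain = np.array([3.0, 5.0, 8.0, 12.0, 20.0])
        runoff = np.array([0.4, 0.9, 1.5, 2.3, 4.1])
        f_eia = eia_fraction_wls(rain, runoff, weights=1.0 / rain)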

  18. The Impact of Back-Sputtered Carbon on the Accelerator Grid Wear Rates of the NEXT and NSTAR Ion Thrusters

    NASA Technical Reports Server (NTRS)

    Soulas, George C.

    2013-01-01

    A study was conducted to quantify the impact of back-sputtered carbon on the downstream accelerator grid erosion rates of the NEXT (NASA's Evolutionary Xenon Thruster) Long Duration Test (LDT1). A similar analysis that was conducted for the NSTAR (NASA's Solar Electric Propulsion Technology Applications Readiness Program) Life Demonstration Test (LDT2) was used as a foundation for the analysis developed herein. A new carbon surface coverage model was developed that accounted for multiple carbon adlayers before complete surface coverage is achieved. The resulting model requires knowledge of more model inputs, so they were conservatively estimated using the results of past thin film sputtering studies and particle reflection predictions. In addition, accelerator current densities across the grid were rigorously determined using an ion optics code to determine accelerator current distributions and an algorithm to determine beam current densities along a grid using downstream measurements. The improved analysis was applied to the NSTAR test results for evaluation. The improved analysis demonstrated that the impact of back-sputtered carbon on the pit and groove wear rate for the NSTAR LDT2 was negligible throughout most of the eroded grid radius. The improved analysis also predicted the accelerator current density for transition from net erosion to net deposition considerably more accurately than the original analysis. The improved analysis was used to estimate the impact of back-sputtered carbon on the accelerator grid pit and groove wear rate of the NEXT Long Duration Test (LDT1). Unlike the NSTAR analysis, the NEXT analysis was more challenging because the thruster was operated for extended durations at various operating conditions and was unavailable for measurements because the test is ongoing. As a result, the NEXT LDT1 estimates presented herein are considered preliminary until the results of future posttest analyses are incorporated. The worst-case impact of carbon back-sputtering was determined to occur at the full-power operating condition, but the maximum impact of back-sputtered carbon was only a four percent reduction in wear rate. As a result, back-sputtered carbon is estimated to have an insignificant impact on the first failure mode of the NEXT LDT at all operating conditions.

  19. Estimation of electric fields and current from ground-based magnetometer data

    NASA Technical Reports Server (NTRS)

    Kamide, Y.; Richmond, A. D.

    1984-01-01

    Recent advances in numerical algorithms for estimating ionospheric electric fields and currents from ground-based magnetometer data are reviewed and evaluated. Tests of the adequacy of one such algorithm in reproducing large-scale patterns of electrodynamic parameters in the high-latitude ionosphere have yielded generally positive results, at least for some simple cases. Some encouraging advances in producing realistic conductivity models, which are a critical input, are pointed out. When the algorithms are applied to extensive data sets, such as the ones from meridian chain magnetometer networks during the IMS, together with refined conductivity models, unique information on instantaneous electric field and current patterns can be obtained. Examples of electric potentials, ionospheric currents, field-aligned currents, and Joule heating distributions derived from ground magnetic data are presented. Possible directions for future improvements are also pointed out.

  20. Predictive Model Development for Aviation Black Carbon Mass Emissions from Alternative and Conventional Fuels at Ground and Cruise.

    PubMed

    Abrahamson, Joseph P; Zelina, Joseph; Andac, M Gurhan; Vander Wal, Randy L

    2016-11-01

    The first order approximation (FOA3) currently employed to estimate BC mass emissions underpredicts BC emissions due to inaccuracies in measuring the low smoke numbers (SNs) produced by modern high bypass ratio engines. The recently developed Formation and Oxidation (FOX) method removes the need for, and hence the uncertainty associated with, SNs, relying instead upon engine conditions to predict BC mass. Using the true engine operating conditions from proprietary engine cycle data, an improved FOX (ImFOX) predictive relation is developed. Still, the current methods are neither optimized to estimate cruise emissions nor able to account for the use of alternative jet fuels with reduced aromatic content. Here, improved correlations are developed to predict engine conditions and BC mass emissions at ground level and at cruise altitude. This new ImFOX is paired with a newly developed hydrogen relation to predict emissions from alternative fuels and fuel blends. The ImFOX is designed for rich-quench-lean style combustor technologies employed predominantly in the current aviation fleet.

  1. Approaches to Refining Estimates of Global Burden and Economics of Dengue

    PubMed Central

    Shepard, Donald S.; Undurraga, Eduardo A.; Betancourt-Cravioto, Miguel; Guzmán, María G.; Halstead, Scott B.; Harris, Eva; Mudin, Rose Nani; Murray, Kristy O.; Tapia-Conyer, Roberto; Gubler, Duane J.

    2014-01-01

    Dengue presents a formidable and growing global economic and disease burden, with around half the world's population estimated to be at risk of infection. There is wide variation and substantial uncertainty in current estimates of dengue disease burden and, consequently, on economic burden estimates. Dengue disease varies across time, geography and persons affected. Variations in the transmission of four different viruses and interactions among vector density and host's immune status, age, pre-existing medical conditions, all contribute to the disease's complexity. This systematic review aims to identify and examine estimates of dengue disease burden and costs, discuss major sources of uncertainty, and suggest next steps to improve estimates. Economic analysis of dengue is mainly concerned with costs of illness, particularly in estimating total episodes of symptomatic dengue. However, national dengue disease reporting systems show a great diversity in design and implementation, hindering accurate global estimates of dengue episodes and country comparisons. A combination of immediate, short-, and long-term strategies could substantially improve estimates of disease and, consequently, of economic burden of dengue. Suggestions for immediate implementation include refining analysis of currently available data to adjust reported episodes and expanding data collection in empirical studies, such as documenting the number of ambulatory visits before and after hospitalization and including breakdowns by age. Short-term recommendations include merging multiple data sources, such as cohort and surveillance data to evaluate the accuracy of reporting rates (by health sector, treatment, severity, etc.), and using covariates to extrapolate dengue incidence to locations with no or limited reporting. Long-term efforts aim at strengthening capacity to document dengue transmission using serological methods to systematically analyze and relate to epidemiologic data. As promising tools for diagnosis, vaccination, vector control, and treatment are being developed, these recommended steps should improve objective, systematic measures of dengue burden to strengthen health policy decisions. PMID:25412506

  2. Approaches to refining estimates of global burden and economics of dengue.

    PubMed

    Shepard, Donald S; Undurraga, Eduardo A; Betancourt-Cravioto, Miguel; Guzmán, María G; Halstead, Scott B; Harris, Eva; Mudin, Rose Nani; Murray, Kristy O; Tapia-Conyer, Roberto; Gubler, Duane J

    2014-11-01

    Dengue presents a formidable and growing global economic and disease burden, with around half the world's population estimated to be at risk of infection. There is wide variation and substantial uncertainty in current estimates of dengue disease burden and, consequently, on economic burden estimates. Dengue disease varies across time, geography and persons affected. Variations in the transmission of four different viruses and interactions among vector density and host's immune status, age, pre-existing medical conditions, all contribute to the disease's complexity. This systematic review aims to identify and examine estimates of dengue disease burden and costs, discuss major sources of uncertainty, and suggest next steps to improve estimates. Economic analysis of dengue is mainly concerned with costs of illness, particularly in estimating total episodes of symptomatic dengue. However, national dengue disease reporting systems show a great diversity in design and implementation, hindering accurate global estimates of dengue episodes and country comparisons. A combination of immediate, short-, and long-term strategies could substantially improve estimates of disease and, consequently, of economic burden of dengue. Suggestions for immediate implementation include refining analysis of currently available data to adjust reported episodes and expanding data collection in empirical studies, such as documenting the number of ambulatory visits before and after hospitalization and including breakdowns by age. Short-term recommendations include merging multiple data sources, such as cohort and surveillance data to evaluate the accuracy of reporting rates (by health sector, treatment, severity, etc.), and using covariates to extrapolate dengue incidence to locations with no or limited reporting. Long-term efforts aim at strengthening capacity to document dengue transmission using serological methods to systematically analyze and relate to epidemiologic data. As promising tools for diagnosis, vaccination, vector control, and treatment are being developed, these recommended steps should improve objective, systematic measures of dengue burden to strengthen health policy decisions.

  3. Use of surface drifters to increase resolution and accuracy of oceanic geostrophic circulation mapped from satellite only (altimetry and gravimetry)

    NASA Astrophysics Data System (ADS)

    Mulet, Sandrine; Rio, Marie-Hélène; Etienne, Hélène

    2017-04-01

    Strong improvements have been made in our knowledge of the surface ocean geostrophic circulation thanks to satellite observations. For instance, the use of the latest GOCE (Gravity field and steady-state Ocean Circulation Explorer) geoid model with altimetry data gives a good estimate of the mean oceanic circulation at spatial scales down to 125 km. However, surface drifters are essential to resolve smaller scales, so it is mandatory to carefully process drifter data and then to combine these different data sources. In this framework, the global 1/4° CNES-CLS13 Mean Dynamic Topography (MDT) and the associated mean geostrophic currents have been computed (Rio et al., 2014). First, a satellite-only MDT was computed from altimetric and gravimetric data. Then, an important part of the work was to pre-process the drifter data to extract only the geostrophic component, in order to be consistent with the physical content of the satellite-only MDT; this step includes estimating and removing the Ekman current and wind slippage. Finally, the drifter data and the satellite-only MDT were combined. Similar approaches are used regionally to go further toward higher resolution, for instance in the Agulhas Current or along the Brazilian coast. Also, a case study in the Gulf of Mexico intends to use drifters in the same way to improve weekly geostrophic current estimates.

  4. PACE: Probabilistic Assessment for Contributor Estimation- A machine learning-based assessment of the number of contributors in DNA mixtures.

    PubMed

    Marciano, Michael A; Adelman, Jonathan D

    2017-03-01

    The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important, and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove to be especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare the classification performance of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed that 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1 s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state-of-the-art model versus the seconds required by a machine learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
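
    The general workflow PACE describes, training a classifier to predict the number of contributors from features of a DNA profile and reporting class probabilities, can be sketched briefly. The features and data below are synthetic placeholders, not PACE's feature set or training data, and the random forest merely stands in for whichever of the six compared algorithms performs best.

        # Hedged sketch: classify number of contributors (NOC) from
        # hypothetical per-profile summary features on synthetic data.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        def profile_features(noc, n_loci=15):
            # Hypothetical features: allele counts per locus grow with NOC
            # but saturate due to allele sharing (binomial stand-in).
            alleles = rng.binomial(2 * noc, 0.8, size=n_loci)
            return [alleles.mean(), alleles.max(), alleles.std(), (alleles >= 5).sum()]

        X = np.array([profile_features(noc) for noc in (1, 2, 3, 4) for _ in range(500)])
        y = np.repeat([1, 2, 3, 4], 500)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))
        print("P(NOC | profile):", clf.predict_proba(X_te[:1]))  # probabilistic output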

  5. Index-in-retrospect and breeding objectives characterizing genetic improvement programs for South African Nguni cattle

    USDA-ARS?s Scientific Manuscript database

    The objective of the current study was to describe the historical selection applied to Nguni cattle in South Africa. Index-in-retrospect methods were applied to data originating from the National Beef Cattle Improvement Scheme. Data used were estimated breeding values (EBV) for animals born during t...

  6. Retrospective Analog Year Analyses Using NASA Satellite Precipitation and Soil Moisture Data to Improve USDA's World Agricultural Supply and Demand Estimates

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Shannon, H.

    2010-12-01

    The USDA World Agricultural Outlook Board (WAOB) coordinates the development of the monthly World Agricultural Supply and Demand Estimates (WASDE) for the U.S. and major foreign producing countries. Given the significant effect of weather on crop progress, conditions, and production, WAOB prepares frequent agricultural weather assessments in the Global Agricultural Decision Support Environment (GLADSE). Because the timing of precipitation is often as important as the amount in its effect on crop production, WAOB frequently examines precipitation time series to estimate crop productivity. An effective method for such assessment is the use of analog year comparisons, in which precipitation time series, based on surface weather stations, from several historical years are compared with the time series from the current year. Once analog years are identified, crop yields can be estimated for the current season based on observed yields from the analog years, because of the similarities in the precipitation patterns. In this study, NASA satellite precipitation and soil moisture time series are used to identify analog years. Given that soil moisture often has a more direct effect than precipitation on crop water availability, the time series of soil moisture could be more effective than that of precipitation in identifying years with similar crop yields. Retrospective analyses of analogs will be conducted to determine any reduction in the level of uncertainty in identifying analog years, and any reduction in false negatives or false positives. The comparison of analog years could potentially be improved by quantifying the selection of analogs instead of relying on the current visual inspection method. Various quantitative approaches are currently being evaluated. This study is part of a larger effort to improve WAOB estimates by integrating NASA remote sensing soil moisture observations and research results into GLADSE, including (1) the integration of the Land Parameter Retrieval Model (LPRM) soil moisture algorithm for operational production and (2) the assimilation of LPRM soil moisture into the USDA Environmental Policy Integrated Climate (EPIC) crop model.
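
    As a hedged illustration of quantifying analog-year selection (the abstract does not specify a metric), one could rank historical years by the RMS distance between their cumulative precipitation curves and the current year's curve. The data and the distance choice below are assumptions for demonstration only.

        # Rank candidate analog years by RMS distance between cumulative
        # precipitation curves; synthetic gamma-distributed weekly totals.
        import numpy as np

        rng = np.random.default_rng(1)
        weeks = 26  # illustrative growing-season length
        history = {yr: np.cumsum(rng.gamma(2.0, 10.0, weeks)) for yr in range(1990, 2010)}
        current = np.cumsum(rng.gamma(2.0, 10.0, weeks))

        def rms_distance(a, b):
            return np.sqrt(np.mean((a - b) ** 2))

        ranked = sorted(history, key=lambda yr: rms_distance(history[yr], current))
        print("closest analog years:", ranked[:3])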

  7. Price estimates for the production of wafers from silicon ingots

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1982-01-01

    The status of the inside-diameter (ID) sawing, multiblade sawing (MBS), and fixed-abrasive slicing technique (FAST) processes is discussed with respect to the estimated price each process adds to the price of the final photovoltaic module. The expected improvements in each process, based on knowledge of the current level of technology, are projected over the next two to five years, and the expected add-on prices in 1983 and 1986 are estimated.

  8. On the Variability of Wilson Currents by Storm Type and Phase

    NASA Technical Reports Server (NTRS)

    Deierling, Wiebke; Kalb, Christina; Mach, Douglas; Liu, Chuntao; Peterson, Michael; Blakeslee, Richard

    2014-01-01

    Storm total conduction currents from electrified clouds are thought to play a major role in maintaining the potential difference between the earth's surface and the upper atmosphere within the Global Electric Circuit (GEC). However, it is not entirely known how the contributions of these currents vary by cloud type and phase of the cloud's life cycle. Estimates of storm total conduction currents were obtained from data collected over two decades during multiple field campaigns involving the NASA ER-2 aircraft. In this study, the variability of these currents by cloud type and life cycle is investigated. We also compared radar-derived microphysical storm properties with total storm currents to investigate whether these storm properties can be used to describe the current variability of different electrified clouds. The ultimate goal is to help improve modeling of the GEC via quantification and improved parameterization of the conduction current contribution of different cloud types.

  9. [Progress on Individual Stature Estimation in Forensic Medicine].

    PubMed

    Wu, Rong-qi; Huang, Li-na; Chen, Xin

    2015-12-01

    Individual stature estimation is one of the most important topics in forensic anthropology. Current practice relies on regression equations established from data collected, by direct measurement or radiological techniques, on the limbs, irregular bones, and anatomic landmarks of a given population. Because of population mobility, secular improvement in human physique, and racial and geographic differences, stature estimation equations need to be revisited regularly. This paper reviews the different methods of stature estimation, briefly describes the advantages and disadvantages of each method, and suggests a new research direction.

  10. Socioeconomic indicators of heat-related health risk supplemented with remotely sensed data

    PubMed Central

    Johnson, Daniel P; Wilson, Jeffrey S; Luber, George C

    2009-01-01

    Background Extreme heat events are the number one cause of weather-related fatalities in the United States. The current system of alert for extreme heat events does not take into account intra-urban spatial variation in risk. The purpose of this study is to evaluate a potential method to improve spatial delineation of risk from extreme heat events in urban environments by integrating sociodemographic risk factors with estimates of land surface temperature derived from thermal remote sensing data. Results Comparison of logistic regression models indicates that supplementing known sociodemographic risk factors with remote sensing estimates of land surface temperature improves the delineation of intra-urban variations in risk from extreme heat events. Conclusion Thermal remote sensing data can be utilized to improve understanding of intra-urban variations in risk from extreme heat. The refinement of current risk assessment systems could increase the likelihood of survival during extreme heat events and assist emergency personnel in the delivery of vital resources during such disasters. PMID:19835578

  11. The international food unit: a new measurement aid that can improve portion size estimation.

    PubMed

    Bucher, T; Weltert, M; Rollo, M E; Smith, S P; Jia, W; Collins, C E; Sun, M

    2017-09-12

    Portion size education tools, aids and interventions can be effective in helping prevent weight gain. However, consumers have difficulties in estimating food portion sizes and are confused by inconsistencies in the measurement units and terminologies currently used. Visual cues are an important mediator of portion size estimation, but standardized measurement units are required. In the current study, we present a new food volume estimation tool and test the ability of young adults to accurately quantify food volumes. The International Food Unit™ (IFU™) is a 4x4x4 cm cube (64 cm³), subdivided into eight 2 cm sub-cubes for estimating smaller food volumes. Compared with currently used measures such as cups and spoons, the IFU™ standardizes estimation of food volumes with metric measures. The IFU™ design is based on binary dimensional increments, and the cubic shape facilitates portion size education and training, memory and recall, and computer processing, which is binary in nature. The performance of the IFU™ was tested in a randomized between-subject experiment (n = 128 adults, 66 men) that estimated volumes of 17 foods using four methods: the IFU™ cube, a deformable modelling clay cube, a household measuring cup, or no aid (weight estimation). Estimation errors were compared between groups using Kruskal-Wallis tests and post-hoc comparisons. Estimation errors differed significantly between groups (H(3) = 28.48, p < .001). The volume estimations were most accurate in the group using the IFU™ cube (Mdn = 18.9%, IQR = 50.2) and least accurate using the measuring cup (Mdn = 87.7%, IQR = 56.1). The modelling clay cube led to a median error of 44.8% (IQR = 41.9). Compared with the measuring cup, the estimation errors using the IFU™ were significantly smaller for 12 food portions and similar for 5 food portions. Weight estimation was associated with a median error of 23.5% (IQR = 79.8). The IFU™ improves volume estimation accuracy compared to the other methods. The cubic shape was perceived as favourable, with subdivision and multiplication facilitating volume estimation. Further studies should investigate whether the IFU™ can facilitate portion size training and whether portion size education using the IFU™ is effective and sustainable without the aid. A 3-dimensional IFU™ could serve as a reference object for estimating food volume.

  12. A Novel Residual Frequency Estimation Method for GNSS Receivers.

    PubMed

    Nguyen, Tu Thi-Thanh; La, Vinh The; Ta, Tung Hai

    2018-01-04

    In Global Navigation Satellite System (GNSS) receivers, residual frequency estimation methods are traditionally applied in the synchronization block to reduce the transient time from acquisition to tracking, or they are used within the frequency estimator to improve its accuracy in open-loop architectures. Current estimation methods have several disadvantages, including sensitivity to noise and a wide search space. This paper proposes a new residual frequency estimation method based on differential processing. Although the complexity of the proposed method is higher than that of traditional methods, it can produce more accurate estimates without increasing the size of the search space.

  13. Postmortem time estimation using body temperature and a finite-element computer model.

    PubMed

    den Hartog, Emiel A; Lotens, Wouter A

    2004-09-01

    In the Netherlands most murder victims are found 2-24 h after the crime. During this period, body temperature decrease is the most reliable method to estimate the postmortem time (PMT). Recently, two murder cases were analysed in which currently available methods did not provide a sufficiently reliable estimate of the PMT. In both cases a study was performed to verify the statements of suspects. For this purpose a finite-element computer model was developed that simulates a human torso and its clothing. With this model, changes to the body and the environment can also be modelled; this was very relevant in one of the cases, as the body had been in the presence of a small fire. In both cases it was possible to falsify the statements of the suspects by improving the accuracy of the PMT estimate. The estimated PMT in both cases was within the range of Henssge's model. The standard deviation of the PMT estimate was 35 min in the first case and 45 min in the second case, compared to 168 min (2.8 h) in Henssge's model. In conclusion, the model as presented here can have additional value for improving the accuracy of the PMT estimate. In contrast to the simple model of Henssge, the current model allows for increased accuracy when more detailed information is available. Moreover, the sensitivity of the predicted PMT for uncertainty in the circumstances can be studied, which is crucial to the confidence of the judge in the results.

  14. Global deformation of the Earth, surface mass anomalies, and the geodetic infrastructure required to study these processes

    NASA Astrophysics Data System (ADS)

    Kusche, J.; Rietbroek, R.; Gunter, B.; Mark-Willem, J.

    2008-12-01

    Global deformation of the Earth can be linked to loading caused by mass changes in the atmosphere, the ocean and the terrestrial hydrosphere. World-wide geodetic observation systems like GPS, e.g., the global IGS network, can be used to study the global deformation of the Earth directly and, when other effects are properly modeled, provide information regarding the surface loading mass (e.g., to derive geo-center motion estimates). Vice versa, other observing systems that monitor mass change, either through gravitational changes (GRACE) or through a combination of in-situ and modeled quantities (e.g., the atmosphere, ocean or hydrosphere), can provide indirect information on global deformation. In the framework of the German 'Mass transport and mass distribution' program, we estimate surface mass anomalies at spherical harmonic resolution up to degree and order 30 by linking three complementary data sets in a least squares approach. Our estimates include geo-center motion and the thickness of a spatially uniform layer on top of the ocean surface (which is otherwise estimated from surface fluxes, evaporation and precipitation, and river run-off) as a time series. As with all current Earth observing systems, each dataset has its own limitations and does not provide homogeneous coverage of the globe. To assess the impact that these limitations might have on current and future deformation and loading mass solutions, a sensitivity study was conducted. Simulated real-case and idealized solutions were explored in which the spatial distribution and quality of the GPS, GRACE and OBP data sets were varied. The results show that significant improvements, e.g., over the current GRACE monthly gravity fields, in particular at the low degrees, can be achieved when these solutions are combined with present-day GPS and OBP products. Our idealized scenarios also provide quantitative indications of how much surface mass change estimates may improve in the future when improved observing systems become available.

  15. Respondent-Driven Sampling: An Assessment of Current Methodology.

    PubMed

    Gile, Krista J; Handcock, Mark S

    2010-08-01

    Respondent-Driven Sampling (RDS) employs a variant of a link-tracing network sampling strategy to collect data from hard-to-reach populations. By tracing the links in the underlying social network, the process exploits the social structure to expand the sample and reduce its dependence on the initial (convenience) sample. The current estimators of population averages make strong assumptions in order to treat the data as a probability sample. We evaluate three critical sensitivities of the estimators: to bias induced by the initial sample, to uncontrollable features of respondent behavior, and to the without-replacement structure of sampling. Our analysis indicates: (1) that the convenience sample of seeds can induce bias, and the number of sample waves typically used in RDS is likely insufficient for the type of nodal mixing required to obtain the reputed asymptotic unbiasedness; (2) that preferential referral behavior by respondents leads to bias; (3) that when a substantial fraction of the target population is sampled the current estimators can have substantial bias. This paper sounds a cautionary note for the users of RDS. While current RDS methodology is powerful and clever, the favorable statistical properties claimed for the current estimates are shown to be heavily dependent on often unrealistic assumptions. We recommend ways to improve the methodology.

  16. Depth-averaged instantaneous currents in a tidally dominated shelf sea from glider observations

    NASA Astrophysics Data System (ADS)

    Merckelbach, Lucas

    2016-12-01

    Ocean gliders have become ubiquitous observation platforms in the ocean in recent years. They are also increasingly used in coastal environments. The coastal observatory system COSYNA has pioneered the use of gliders in the North Sea, a shallow, tidally energetic shelf sea. For operational reasons, the gliders operated in the North Sea are programmed to resurface every 3-5 h. The glider's dead-reckoning algorithm yields depth-averaged currents, averaged in time over each subsurface interval. Under operational conditions these averaged currents are a poor approximation of the instantaneous tidal current. In this work an algorithm is developed that estimates the instantaneous current (tidal and residual) from glider observations only. The algorithm uses a first-order Butterworth low-pass filter to estimate the residual current component, and a Kalman filter based on the linear shallow water equations for the tidal component. A comparison of data from a glider experiment with current data from acoustic Doppler current profilers deployed nearby shows that the standard deviations for the east and north current components are better than 7 cm s-1 in near-real-time mode and improve to better than 6 cm s-1 in delayed mode, where the filters can be run forward and backward. In the near-real-time mode the algorithm provides estimates of the currents that the glider is expected to encounter during its next few dives. Combined with a behavioural and dynamic model of the glider, this yields predicted trajectories, the information of which is incorporated in warning messages issued to ships by the (German) authorities. In delayed mode the algorithm produces useful estimates of the depth-averaged currents, which can be used in (process-based) analyses when no other source of measured current information is available.
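
    The first stage of the algorithm, the low-pass split between residual and tidal components, can be sketched as follows. The cutoff period, surfacing interval, and synthetic currents are assumptions; the paper's Kalman filter for the tidal component, based on the linear shallow water equations, is omitted here.

        # Separate a slowly varying residual current from a tidal signal with
        # a first-order Butterworth low-pass (forward-backward = delayed mode).
        import numpy as np
        from scipy.signal import butter, filtfilt

        dt = 4.0                                  # hours between surfacings (3-5 h)
        t = np.arange(0, 240, dt)                 # ten days of segment-mean currents
        tide = 0.4 * np.sin(2 * np.pi * t / 12.42)        # M2-like component, m/s
        residual = 0.05 + 0.02 * np.sin(2 * np.pi * t / 240.0)
        u = tide + residual + 0.02 * np.random.default_rng(2).standard_normal(t.size)

        fs = 1.0 / dt                             # samples per hour
        cutoff = 1.0 / 48.0                       # keep periods longer than ~2 days
        b, a = butter(1, cutoff / (fs / 2.0))     # first-order low-pass
        u_residual = filtfilt(b, a, u)            # zero-phase, forward and backward
        u_tidal = u - u_residual                  # remainder goes to the tidal filter
        print("residual RMS error [m/s]:", np.sqrt(np.mean((u_residual - residual) ** 2)))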

  17. Stable Spheromaks with Profile Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowler, T K; Jayakumar, R

    A spheromak equilibrium with zero edge current is shown to be stable to both ideal MHD and tearing modes that normally produce Taylor relaxation in gun-injected spheromaks. This stable equilibrium differs from the stable Taylor state in that the current density j falls to zero at the wall. Estimates indicate that this current profile could be sustained by non-inductive current drive at acceptable power levels. Stability is determined using the NIMROD code for linear stability analysis. Non-linear NIMROD calculations with non-inductive current drive could point the way to improved fusion reactors.

  18. Iterative Bayesian Estimation of Travel Times on Urban Arterials: Fusing Loop Detector and Probe Vehicle Data.

    PubMed

    Liu, Kai; Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo

    2016-01-01

    On urban arterials, travel time estimation is challenging, especially when multiple data sources must be combined. Typically, fusing loop detector data and probe vehicle data to estimate travel time is a troublesome issue because the data can be uncertain, imprecise and even conflicting. In this paper, we propose an improved data fusion methodology for link travel time estimation. Link travel times are simultaneously pre-estimated using loop detector data and probe vehicle data, based on which Bayesian fusion is then applied to fuse the estimated travel times. Next, iterative Bayesian estimation is proposed to improve Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimate from one sensor with the current fused travel time; and 2) specially designed conditions for convergence, which restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe vehicle data based method, the loop detector based method and single Bayesian fusion, and the mean absolute percentage error is reduced to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when the variability of travel time is higher than in other periods.

  19. Iterative Bayesian Estimation of Travel Times on Urban Arterials: Fusing Loop Detector and Probe Vehicle Data

    PubMed Central

    Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo

    2016-01-01

    On urban arterials, travel time estimation is challenging, especially when multiple data sources must be combined. Typically, fusing loop detector data and probe vehicle data to estimate travel time is a troublesome issue because the data can be uncertain, imprecise and even conflicting. In this paper, we propose an improved data fusion methodology for link travel time estimation. Link travel times are simultaneously pre-estimated using loop detector data and probe vehicle data, based on which Bayesian fusion is then applied to fuse the estimated travel times. Next, iterative Bayesian estimation is proposed to improve Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimate from one sensor with the current fused travel time; and 2) specially designed conditions for convergence, which restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe vehicle data based method, the loop detector based method and single Bayesian fusion, and the mean absolute percentage error is reduced to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when the variability of travel time is higher than in other periods. PMID:27362654
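
    Under a Gaussian assumption, the Bayesian fusion step reduces to inverse-variance weighting of the two pre-estimates, and the substitution strategy can be mimicked by feeding the fused value back in place of the weaker source. The numbers and the admissible range below are illustrative assumptions, not values from the paper.

        # Precision-weighted fusion of loop-detector and probe-vehicle
        # travel-time estimates, with a toy iterative substitution loop.
        def fuse(mu1, var1, mu2, var2):
            # Bayesian fusion of two Gaussian estimates = inverse-variance weighting.
            w1, w2 = 1.0 / var1, 1.0 / var2
            return (w1 * mu1 + w2 * mu2) / (w1 + w2), 1.0 / (w1 + w2)

        mu_loop, var_loop = 95.0, 20.0 ** 2     # loop detector estimate [s]
        mu_probe, var_probe = 110.0, 10.0 ** 2  # probe vehicle estimate [s]

        mu, var = fuse(mu_loop, var_loop, mu_probe, var_probe)
        for _ in range(5):
            mu_loop, var_loop = mu, var         # substitute the weaker source
            mu, var = fuse(mu_loop, var_loop, mu_probe, var_probe)
            mu = min(max(mu, 60.0), 180.0)      # keep estimate in a plausible range
        print(f"fused travel time: {mu:.1f} s (sd {var ** 0.5:.1f} s)")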

  20. Trends in Timber Use and Product Recovery in New York

    Treesearch

    Eric H. Wharton; Thomas W. Birch; Thomas W. Birch

    1999-01-01

    High demand for a variety of timber products from New York's forests has stimulated increased timber utilization and product recovery. Utilization studies in New York suggest that the recovery of timber has improved over the years. Although current methods of multiproduct harvesting have improved recovery of residual material, an estimated 38.6 million cubic feet...

  1. An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng

    2017-04-01

    This paper presents an effective approach for state estimation in power systems that include multi-terminal voltage source converter based high voltage direct current (VSC-MTDC) links, called the improved adaptive weighting function method. The proposed approach is simplified: the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimension unchanged. Accurate and fast convergence of the AC/DC system can be achieved with the adaptive weighting function method. This method also provides technical support for the simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity and convergence of the new method.
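
    The abstract gives no equations, so the sketch below shows only the textbook core such a method builds on: weighted least-squares state estimation with weights adapted from measurement residuals between iterations, the matrix dimensions staying fixed throughout. The linear model, noise levels and reweighting rule are generic assumptions, not the paper's formulation.

        # Adaptive-weight WLS: solve min (z - Hx)' W (z - Hx), then
        # downweight measurements with large residuals and repeat.
        import numpy as np

        rng = np.random.default_rng(3)
        H = rng.standard_normal((12, 4))      # linear(ized) measurement model
        x_true = rng.standard_normal(4)
        z = H @ x_true + 0.05 * rng.standard_normal(12)
        z[3] += 2.0                           # one corrupted measurement

        w = np.ones(12)                       # start with equal weights
        for _ in range(6):
            W = np.diag(w)
            x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)  # normal equations
            r = z - H @ x_hat                                  # residuals
            sigma = 1.4826 * np.median(np.abs(r))              # robust scale
            w = 1.0 / (1.0 + (r / (3.0 * sigma + 1e-12)) ** 2) # adapt weights
        print("state estimation error:", np.linalg.norm(x_hat - x_true))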

  2. Spring cleaning: rural water impacts, valuation, and property rights institutions.

    PubMed

    Kremer, Michael; Leino, Jessica; Miguel, Edward; Zwane, Alix Peterson

    2011-01-01

    Using a randomized evaluation in Kenya, we measure health impacts of spring protection, an investment that improves source water quality. We also estimate households' valuation of spring protection and simulate the welfare impacts of alternatives to the current system of common property rights in water, which limits incentives for private investment. Spring infrastructure investments reduce fecal contamination by 66%, but household water quality improves less, due to recontamination. Child diarrhea falls by one quarter. Travel-cost based revealed preference estimates of households' valuations are much smaller than both stated preference valuations and health planners' valuations, and are consistent with models in which the demand for health is highly income elastic. We estimate that private property norms would generate little additional investment while imposing large static costs due to above-marginal-cost pricing; that private property would function better at higher income levels or under water scarcity; and that alternative institutions could yield Pareto improvements.

  3. Accurate Realization of GPS Vertical Global Reference Frame

    NASA Technical Reports Server (NTRS)

    Elosegui, Pedro

    2005-01-01

    The goal of this project is to improve our current understanding of GPS error sources associated with estimates of radial velocities at global scales. An improvement in the accuracy of radial global velocities would have a very positive impact on a large number of geophysical studies of current general interest, such as global sea-level and climate change, coastal hazards, glacial isostatic adjustment, atmospheric and oceanic loading, glaciology and ice mass variability, tectonic deformation and volcanic inflation, and geoid variability. A set of GPS error sources relevant to this project are those related to the combination of the positions and velocities of a set of globally distributed stations as determined from the analysis of GPS data, including possible methods of combining and defining terrestrial reference frames. This is where our research activities during this reporting period have been concentrated. During this reporting period, we have researched two topics: (1) the effect of errors in the GPS satellite antenna models (or the lack thereof) on global GPS vertical position and velocity estimates; (2) the effect of reference frame definition and practice on estimates of geocenter variations.

  4. Fast and accurate spectral estimation for online detection of partial broken bar in induction motors

    NASA Astrophysics Data System (ADS)

    Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti

    2018-01-01

    In this paper, an online, real-time system is presented for detecting a partially broken rotor bar (BRB) in inverter-fed squirrel cage induction motors under light load conditions. This system, with minor modifications, can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of a BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended-Kalman based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and sensor cost are minimal, as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7 based embedded target ported through Simulink Real-Time. Evaluation of the detection threshold and of fault detectability under different conditions of load and fault severity is carried out with an empirical cumulative distribution function.

  5. Further development of the attitude difference method for estimating deflections of the vertical in real time

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Zhou, Zebo; Li, Yong; Rizos, Chris; Wang, Xingshu

    2016-07-01

    An improvement of the attitude difference method (ADM) to estimate deflections of the vertical (DOV) in real time is described in this paper. Without offline processing, the ADM estimates the DOV with limited accuracy due to the response delay. The proposed model selection-based self-adaptive delay feedback (SDF) method takes the results of the ADM as a priori information, then uses fitting and extrapolation to estimate the DOV at the current epoch. The active region selection factor F_th is used to take full advantage of the Earth model EGM2008 and the SDF under different DOV conditions. The factors which affect the DOV estimation accuracy are analyzed and modeled. An external observation, specified by the velocity difference between the global navigation satellite system (GNSS) and the inertial navigation system (INS) with the DOV compensated, is used to select the optimal model. The response delay induced by the weak observability of an integrated INS/GNSS to violent DOV disturbances in the ADM is thereby compensated. The DOV estimation accuracy of the SDF method is improved by approximately 40% and 50% compared to that of the EGM2008 and the ADM, respectively. With an increase in GNSS accuracy, the DOV estimation accuracy could improve further.

  6. Estimating Photosynthetically Available Radiation (PAR) at the Earth's surface from satellite observations

    NASA Technical Reports Server (NTRS)

    Frouin, Robert

    1993-01-01

    Current satellite algorithms to estimate photosynthetically available radiation (PAR) at the earth's surface are reviewed. PAR is deduced either from an insolation estimate or obtained directly from top-of-atmosphere solar radiances. The characteristics of both approaches are contrasted and typical results are presented. The reported inaccuracies, about 10 percent on daily and 6 percent on monthly time scales, are small enough for the estimates to be useful in modeling oceanic and terrestrial primary productivity. At those time scales, variability due to clouds in the ratio of PAR to insolation is reduced, making it possible to deduce PAR directly from insolation climatologies (satellite or other) that are currently available or being produced. Improvements, however, are needed in conditions of broken cloudiness and over ice/snow. If not addressed properly, calibration/validation issues may prevent quantitative use of the PAR estimates in studies of climatic change. The prospects are good for an accurate, long-term climatology of PAR over the globe.

  7. Updating estimates of low streamflow statistics to account for possible trends

    NASA Astrophysics Data System (ADS)

    Blum, A. G.; Archfield, S. A.; Hirsch, R. M.; Vogel, R. M.; Kiang, J. E.; Dudley, R. W.

    2017-12-01

    Given evidence of both increasing and decreasing trends in low flows in many streams, methods are needed to update estimators of the low flow statistics used in water resources management. One such metric is the 10-year annual low-flow statistic (7Q10), calculated as the annual minimum seven-day streamflow that is exceeded in nine out of ten years on average. Historical streamflow records may not be representative of current conditions at a site if environmental conditions are changing. We present a new approach to frequency estimation under nonstationary conditions that applies a stationary nonparametric quantile estimator to a subset of the annual minimum flow record. Monte Carlo simulation experiments were used to evaluate this approach across a range of trend and no-trend scenarios. Relative to the standard practice of using the entire available streamflow record, use of a nonparametric quantile estimator combined with selection of the most recent 30 or 50 years for 7Q10 estimation was found to improve accuracy and reduce bias. Benefits of the data subset selection approaches were greater for higher magnitude trends and for annual minimum flow records with lower coefficients of variation. A nonparametric trend test approach for subset selection did not significantly improve upon always selecting the last 30 years of record. At 174 stream gages in the Chesapeake Bay region, 7Q10 estimators based on the most recent 30 years of flow record were compared to estimators based on the entire period of record. Given the availability of long records of low streamflow, a subset of the flow record (about 30 years) can be used to update 7Q10 estimators to better reflect current streamflow conditions.
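
    A minimal sketch of the subset-selection idea, under the assumption that the empirical 0.1 quantile of annual minimum seven-day flows is an acceptable nonparametric 7Q10 estimator (the paper does not specify its exact quantile estimator): compare the full record with the most recent 30 years.

        # 7Q10 as the empirical 0.1 quantile of annual minimum 7-day flows,
        # from the full record vs. only the most recent 30 years.
        import numpy as np

        rng = np.random.default_rng(4)
        years = 80
        trend = np.linspace(0.0, -0.3, years)          # declining low flows
        amin7 = np.exp(rng.normal(1.0 + trend, 0.3))   # annual min 7-day flow

        def q7q10(series):
            return np.quantile(series, 0.10)  # flow exceeded 9 years in 10

        print("7Q10, full record:  ", round(q7q10(amin7), 3))
        print("7Q10, last 30 years:", round(q7q10(amin7[-30:]), 3))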

  8. The benefit of using additional hydrological information from earth observations and reanalysis data on water allocation decisions in irrigation districts

    NASA Astrophysics Data System (ADS)

    Kaune, Alexander; López, Patricia; Werner, Micha; de Fraiture, Charlotte

    2017-04-01

    Hydrological information on water availability and demand is vital for sound water allocation decisions in irrigation districts, particularly in times of water scarcity. However, sub-optimal water allocation decisions are often taken with incomplete hydrological information, which may lead to agricultural production loss. In this study we evaluate the benefit of additional hydrological information from earth observations and reanalysis data in supporting decisions in irrigation districts. Current water allocation decisions were emulated through heuristic operational rules for water scarce and water abundant conditions in the selected irrigation districts. The Dynamic Water Balance Model based on the Budyko framework was forced with precipitation datasets from interpolated ground measurements, remote sensing and reanalysis data, to determine the water availability for irrigation. Irrigation demands were estimated based on estimates of potential evapotranspiration and coefficient for crops grown, adjusted with the interpolated precipitation data. Decisions made using both current and additional hydrological information were evaluated through the rate at which sub-optimal decisions were made. The decisions made using an amended set of decision rules that benefit from additional information on demand in the districts were also evaluated. Results show that sub-optimal decisions can be reduced in the planning phase through improved estimates of water availability. Where there are reliable observations of water availability through gauging stations, the benefit of the improved precipitation data is found in the improved estimates of demand, equally leading to a reduction of sub-optimal decisions.

  9. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    NASA Technical Reports Server (NTRS)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.

  10. Investigation of optical current transformer signal processing method based on an improved Kalman algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan

    2018-01-01

    This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signals, noise frequency bands, low signal-to-noise ratios, and difficulties in acquiring the statistical features of the noise power, an improved standard Kalman filtering algorithm is proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established, and mixed noise is then handled by incorporating it into the measurement and state parameters. According to the minimum mean squared error criterion, the state prediction and update equations of the improved Kalman algorithm are deduced from the established model. An improved central difference Kalman filter is proposed for alternating current (AC) signal processing, which improves the sampling strategy and the treatment of colored noise. Real-time estimation and correction of noise are achieved by designing AC and DC noise recursive filters. Experimental results show that the improved signal processing algorithms had a good filtering effect on AC and DC signals with the mixed noise of an OCT. Furthermore, the proposed algorithm was able to achieve real-time correction of noise during the OCT filtering process.
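
    As a generic illustration of the DC case (not the paper's exact algorithm, whose mixed- and colored-noise handling is more involved), a scalar Kalman filter tracking a nearly constant DC level looks as follows; the process and measurement noise variances are assumed values.

        # Scalar Kalman filter for a noisy DC level, random-walk state model.
        import numpy as np

        rng = np.random.default_rng(5)
        z = 1.0 + 0.1 * rng.standard_normal(500)  # noisy DC measurements

        x, p = 0.0, 1.0         # state estimate and its variance
        q, r = 1e-6, 0.1 ** 2   # process / measurement noise variances (assumed)
        for zk in z:
            p += q              # predict
            k = p / (p + r)     # Kalman gain
            x += k * (zk - x)   # update with the innovation
            p *= (1.0 - k)
        print(f"filtered DC estimate: {x:.4f} (true value 1.0)")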

  11. Estimating Snow Water Storage in North America Using CLM4, DART, and Snow Radiance Data Assimilation

    NASA Technical Reports Server (NTRS)

    Kwon, Yonghwan; Yang, Zong-Liang; Zhao, Long; Hoar, Timothy J.; Toure, Ally M.; Rodell, Matthew

    2016-01-01

    This paper addresses continental-scale snow estimates in North America using a recently developed snow radiance assimilation (RA) system. A series of RA experiments with the ensemble adjustment Kalman filter are conducted by assimilating the Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E) brightness temperature T(sub B) at the 18.7- and 36.5-GHz vertical polarization channels. The overall RA performance in estimating snow depth for North America is improved by simultaneously updating the Community Land Model, version 4 (CLM4), snow/soil states and the radiative transfer model (RTM) parameters involved in predicting T(sub B), based on their correlations with the prior T(sub B) (i.e., rule-based RA), although degradations are also observed. The RA system exhibits a more mixed performance for snow cover fraction estimates. Compared to the open-loop run (0.171 m RMSE), the overall snow depth estimates are improved by 1.6% (0.168 m RMSE) in the rule-based RA, whereas the default RA (without a rule) results in a degradation of 3.6% (0.177 m RMSE). Significant improvement of the snow depth estimates in the rule-based RA is observed for the tundra snow class (11.5%, p < 0.05) and the bare soil land-cover type (13.5%, p < 0.05). However, the overall improvement is not significant (p = 0.135) because snow estimates are degraded or only marginally improved for other snow classes and land covers, especially the taiga snow class and forest land cover (7.1% and 7.3% degradations, respectively). The current RA system needs to be further refined to enhance snow estimates for various snow types and forested regions.

  12. Global Access to Safe Water: Accounting for Water Quality and the Resulting Impact on MDG Progress

    PubMed Central

    Onda, Kyle; LoBuglio, Joe; Bartram, Jamie

    2012-01-01

    Monitoring of progress towards the Millennium Development Goal (MDG) drinking water target relies on classification of water sources as “improved” or “unimproved” as an indicator for water safety. We adjust the current Joint Monitoring Programme (JMP) estimate by accounting for microbial water quality and sanitary risk using the only nationally representative water quality data currently available, that from the WHO and UNICEF “Rapid Assessment of Drinking Water Quality”. A principal components analysis (PCA) of national environmental and development indicators was used to create models that predicted, for most countries, the proportions of piped and of other-improved water supplies that are faecally contaminated, and, of these sources, the proportions that lack basic sanitary protection against contamination. We estimate that 1.8 billion people (28% of the global population) used unsafe water in 2010. The 2010 JMP estimate is that 783 million people (11%) use unimproved sources. Our estimates revise the 1990 baseline from 23% to 37%, and the target from 12% to 18%, resulting in a shortfall of 10% of the global population towards the MDG target in 2010. In contrast, using the indicator “use of an improved source” suggests that the MDG target for drinking water has already been achieved. We estimate that an additional 1.2 billion (18%) use water from sources or systems with significant sanitary risks. While our estimate is imprecise, the magnitude of the estimate and the health and development implications suggest that greater attention is needed to better understand and manage drinking water safety. PMID:22690170
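
    The modelling strategy described, PCA of national indicators followed by a regression predicting contamination proportions, can be sketched briefly. The synthetic data, the number of components, and the linear regression are placeholder assumptions; the paper's exact model specification is not reproduced here.

        # PCA of country-level indicators, then regress the proportion of
        # contaminated improved sources on the leading components.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(6)
        X = rng.standard_normal((60, 10))          # 60 countries, 10 indicators
        latent = X[:, 0] - 0.5 * X[:, 1]           # hidden development axis
        y = np.clip(0.4 - 0.1 * latent + 0.05 * rng.standard_normal(60), 0.0, 1.0)

        pcs = PCA(n_components=3).fit_transform(X) # leading components
        model = LinearRegression().fit(pcs, y)
        print("R^2 on principal components:", round(model.score(pcs, y), 2))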

  13. Doctoral Supervision in Virtual Spaces: A Review of Research of Web-Based Tools to Develop Collaborative Supervision

    ERIC Educational Resources Information Center

    Maor, Dorit; Ensor, Jason D.; Fraser, Barry J.

    2016-01-01

    Supervision of doctoral students needs to be improved to increase completion rates, reduce attrition rates (estimated to be at 25% or more) and improve quality of research. The current literature review aimed to explore the contribution that technology can make to higher degree research supervision. The articles selected included empirical studies…

  14. FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.

    PubMed

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on an FPGA, to improve the online quantitative estimation of the flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy compared with the accuracy obtained from the current and vibration signals used individually.

  15. Trajectory-Based Takeoff Time Predictions Applied to Tactical Departure Scheduling: Concept Description, System Design, and Initial Observations

    NASA Technical Reports Server (NTRS)

    Engelland, Shawn A.; Capps, Alan

    2011-01-01

    Current aircraft departure release times are based on manual estimates of aircraft takeoff times. Uncertainty in takeoff time estimates may result in missed opportunities to merge into constrained en route streams and lead to lost throughput. However, technology exists to improve takeoff time estimates by using the aircraft surface trajectory predictions that enable air traffic control tower (ATCT) decision support tools. NASA's Precision Departure Release Capability (PDRC) is designed to use automated surface trajectory-based takeoff time estimates to improve en route tactical departure scheduling. This is accomplished by integrating an ATCT decision support tool with an en route tactical departure scheduling decision support tool. The PDRC concept and prototype software have been developed, and an initial test was completed at air traffic control facilities in Dallas/Fort Worth. This paper describes the PDRC operational concept, system design, and initial observations.

  16. Enabling Software Acquisition Improvement: Government and Industry Software Development Team Acquisition Model

    DTIC Science & Technology

    2010-04-30

    ...previous and current complex SW development efforts, the program offices will have a source of objective lessons learned and metrics that can be applied...

  17. Estimation of aboveground forest carbon flux in Oregon: adding components of change to stock-difference assessments

    Treesearch

    Andrew N. Gray; Thomas R. Whittier; David L. Azuma

    2014-01-01

    A substantial portion of the carbon (C) emitted by human activity is apparently being stored in forest ecosystems in the Northern Hemisphere, but the magnitude and cause are not precisely understood. Current official estimates of forest C flux are based on a combination of field measurements and other methods. The goal of this study was to improve on existing methods...

  18. Prediction of embankment settlement over soft soils.

    DOT National Transportation Integrated Search

    2009-06-01

    The objective of this project was to review and verify the current design procedures used by TxDOT to estimate the total amount and the rate of consolidation settlement in embankments constructed on soft soils. Methods to improve the settlement predictions ...

  19. The Impact of Back-Sputtered Carbon on the Accelerator Grid Wear Rates of the NEXT and NSTAR Ion Thrusters

    NASA Technical Reports Server (NTRS)

    Soulas, George C.

    2013-01-01

    A study was conducted to quantify the impact of back-sputtered carbon on the downstream accelerator grid erosion rates of NASA's Evolutionary Xenon Thruster (NEXT) Long Duration Test (LDT1). A similar analysis that was conducted for the NASA Solar Electric Propulsion Technology Applications Readiness Program (NSTAR) Life Demonstration Test (LDT2) was used as a foundation for the analysis developed herein. A new carbon surface coverage model was developed that accounted for multiple carbon adlayers before complete surface coverage is achieved. The resulting model requires knowledge of more model inputs, so they were conservatively estimated using the results of past thin film sputtering studies and particle reflection predictions. In addition, accelerator current densities across the grid were rigorously determined using an ion optics code to determine accelerator current distributions and an algorithm to determine beam current densities along a grid using downstream measurements. The improved analysis was applied to the NSTAR test results for evaluation. The improved analysis demonstrated that the impact of back-sputtered carbon on the pit and groove wear rate for the NSTAR LDT2 was negligible throughout most of the eroded grid radius. The improved analysis also predicted the accelerator current density for the transition from net erosion to net deposition considerably more accurately than the original analysis. The improved analysis was used to estimate the impact of back-sputtered carbon on the accelerator grid pit and groove wear rate of the NEXT Long Duration Test (LDT1). Unlike the NSTAR analysis, the NEXT analysis was more challenging because the thruster was operated for extended durations at various operating conditions and was unavailable for measurements because the test is ongoing. As a result, the NEXT LDT1 estimates presented herein are considered preliminary until the results of future post-test analyses are incorporated. The worst-case impact of carbon back-sputtering was determined to be at the full power operating condition, but the maximum impact of back-sputtered carbon was only a 4 percent reduction in wear rate. As a result, back-sputtered carbon is estimated to have an insignificant impact on the first failure mode of the NEXT LDT1 at all operating conditions.

  20. Developing estimates of potential demand for renewable wood energy products in Alaska

    Treesearch

    Allen M. Brackley; Valerie A. Barber; Cassie Pinkel

    2010-01-01

    Goal three of the current U.S. Department of Agriculture, Forest Service strategy for improving the use of woody biomass is to help develop and expand markets for woody biomass products. This report is concerned with the existing volumes of renewable wood energy products (RWEP) that are currently used in Alaska and the potential demand for RWEP for residential and...

  1. A 16 MJ compact pulsed power system for electromagnetic launch

    NASA Astrophysics Data System (ADS)

    Dai, Ling; Zhang, Qin; Zhong, Heqing; Lin, Fuchang; Li, Hua; Wang, Yan; Su, Cheng; Huang, Qinghua; Chen, Xu

    2015-07-01

    This paper describes a compact pulsed power system (PPS) of 16 MJ for an electromagnetic rail gun. The PPS consists of a pulse forming network (PFN), chargers, a monitoring system, and a current junction. The PFN is composed of 156 pulse forming units (PFUs). Every PFU can be triggered simultaneously or sequentially in order to obtain different total current waveforms. The whole device, except the general control table, is divided into two frameworks with a size of 7.5 m × 2.2 m × 2.3 m. It is important to estimate the discharge current of a PFU accurately for the design of the whole electromagnetic launch system. In this paper, the on-state characteristics of the pulse thyristor have been studied to improve the estimation accuracy. The on-state characteristics of the pulse thyristor are expressed as a logarithmic function based on experimental data. The measured circuit current waveform of a single PFU agrees with the simulated one. On the other hand, the coaxial discharge cable is a quick-wear part of the PFU because the discharge current can reach dozens or even hundreds of kA. In this article, the electromagnetic field in the coaxial cable is calculated by the finite element method. On the basis of the calculation results, the structure of the cable is optimized in order to increase the current limit of the cable. At the end of the paper, the experimental current waveform of the PPS with a rail gun load is provided.

  2. A 16 MJ compact pulsed power system for electromagnetic launch.

    PubMed

    Dai, Ling; Zhang, Qin; Zhong, Heqing; Lin, Fuchang; Li, Hua; Wang, Yan; Su, Cheng; Huang, Qinghua; Chen, Xu

    2015-07-01

    This paper describes a compact pulsed power system (PPS) of 16 MJ for an electromagnetic rail gun. The PPS consists of a pulse forming network (PFN), chargers, a monitoring system, and a current junction. The PFN is composed of 156 pulse forming units (PFUs). Every PFU can be triggered simultaneously or sequentially in order to obtain different total current waveforms. The whole device, except the general control table, is divided into two frameworks with a size of 7.5 m × 2.2 m × 2.3 m. It is important to estimate the discharge current of a PFU accurately for the design of the whole electromagnetic launch system. In this paper, the on-state characteristics of the pulse thyristor have been studied to improve the estimation accuracy. The on-state characteristics of the pulse thyristor are expressed as a logarithmic function based on experimental data. The measured circuit current waveform of a single PFU agrees with the simulated one. On the other hand, the coaxial discharge cable is a quick-wear part of the PFU because the discharge current can reach dozens or even hundreds of kA. In this article, the electromagnetic field in the coaxial cable is calculated by the finite element method. On the basis of the calculation results, the structure of the cable is optimized in order to increase the current limit of the cable. At the end of the paper, the experimental current waveform of the PPS with a rail gun load is provided.
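
    The on-state model named in the abstract, a logarithmic function of current fitted to experimental data, can be sketched with a standard curve fit. The functional form V_on = a + b·ln(I) and the data points below are assumptions for illustration, not the paper's measurements.

        # Fit a logarithmic on-state characteristic to (current, voltage) data.
        import numpy as np
        from scipy.optimize import curve_fit

        def v_on(i, a, b):
            return a + b * np.log(i)  # assumed form: V_on = a + b*ln(I)

        i_data = np.array([1e3, 5e3, 1e4, 5e4, 1e5])  # current [A]
        v_data = np.array([1.1, 1.6, 1.8, 2.3, 2.5])  # on-state voltage [V]
        (a, b), _ = curve_fit(v_on, i_data, v_data)
        print(f"V_on ~ {a:.3f} + {b:.3f} * ln(I)")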

  3. Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.

    PubMed

    Smith, J E

    2012-01-01

    Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. The results also show that local reward schemes outperform global reward schemes in combinatorial spaces, unlike in continuous spaces. An analysis of evolving meme behaviour is used to explain these findings.
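
    The credit-assignment question the paper studies, rewarding a meme by the mean or by the extreme (best) improvement observed over several applications, can be made concrete with a toy stand-in for the COMA framework; the meme qualities and the proportional selection rule below are invented for illustration.

        # Compare selection probabilities from mean vs. extreme meme rewards.
        import numpy as np

        rng = np.random.default_rng(7)

        def improvement(bias):
            # Hypothetical improvement from one application of a meme.
            return max(0.0, rng.normal(bias, 1.0))

        memes = {"steep": 0.5, "broad": 0.3}  # latent quality of two memes
        samples = {m: [improvement(b) for _ in range(20)] for m, b in memes.items()}

        def selection_probs(reward):
            total = sum(reward.values())
            return {m: round(r / total, 2) for m, r in reward.items()}

        mean_r = {m: float(np.mean(s)) for m, s in samples.items()}
        best_r = {m: float(np.max(s)) for m, s in samples.items()}
        print("P(select) from mean reward:   ", selection_probs(mean_r))
        print("P(select) from extreme reward:", selection_probs(best_r))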

  4. Retrospective Analog Year Analyses Using NASA Satellite Precipitation and Soil Moisture Data to Improve USDA's World Agricultural Supply and Demand Estimates

    NASA Technical Reports Server (NTRS)

    Teng, William; Shannon, Harlan; Mladenova, Iliana; Fang, Fan

    2010-01-01

    A primary goal of the U.S. Department of Agriculture (USDA) is to expand markets for U.S. agricultural products and support global economic development. The USDA World Agricultural Outlook Board (WAOB) supports this goal by coordinating monthly World Agricultural Supply and Demand Estimates (WASDE) for the U.S. and major foreign producing countries. Because weather has a significant impact on crop progress, conditions, and production, WAOB prepares frequent agricultural weather assessments in a GIS-based Global Agricultural Decision Support Environment (GLADSE). The main goal of this project, thus, is to improve WAOB's estimates by integrating NASA remote sensing soil moisture observations and research results into GLADSE. Soil moisture is currently a primary data gap at WAOB.

  5. Congestion and recreation site demand: a model of demand-induced quality effects

    USGS Publications Warehouse

    Douglas, Aaron J.; Johnson, Richard L.

    1993-01-01

    This analysis focuses on problems of estimating the site-specific dollar benefits conferred by outdoor recreation sites in the face of congestion costs. Encounters, crowding effects and congestion costs have often been treated by natural resource economists in a piecemeal fashion; in the current paper, encounters and crowding effects are treated systematically. We emphasize the quantitative impact of congestion costs on site-specific estimates of the benefits conferred by improvements to outdoor recreation sites. The principal analytic conclusion is that techniques that economize on data requirements produce biased estimates of the benefits conferred by site improvements at facilities with significant crowding effects. The principal policy recommendation is that federal and state agencies should collect and store information on visitation rates, encounter levels and congestion costs at outdoor recreation sites.

  6. Shrinkage estimation of effect sizes as an alternative to hypothesis testing followed by estimation in high-dimensional biology: applications to differential gene expression.

    PubMed

    Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R

    2010-01-01

    Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of gene prioritization at very little risk of doing so less reliably. Since the proposed replacement of post-selection estimates with shrunken estimates applies as well to other types of high-dimensional data, it could also improve the analysis of SNP data from genome-wide association studies.
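
    The contrast the abstract draws can be reproduced in a toy simulation: a hard-threshold estimator that zeroes "non-significant" fold changes versus an estimator that shrinks all genes toward zero. The sketch below uses a generic positive-part James-Stein shrinkage, not the local-FDR estimator the paper recommends, and all simulation settings are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes, n_reps, sigma = 5000, 4, 1.0
    # ~5% of genes truly differentially expressed
    true_lfc = np.where(rng.random(n_genes) < 0.05, rng.normal(0, 2, n_genes), 0.0)
    se = sigma / np.sqrt(n_reps)
    obs = true_lfc + rng.normal(0, se, n_genes)  # per-gene mean log fold change (MLE)

    z = obs / se
    hard = np.where(np.abs(z) > 1.96, obs, 0.0)  # keep only "significant" genes
    # positive-part James-Stein shrinkage toward zero
    shrink_factor = max(0.0, 1 - (n_genes - 2) * se ** 2 / np.sum(obs ** 2))
    shrunk = shrink_factor * obs

    for name, est in [("MLE", obs), ("hard threshold", hard), ("shrinkage", shrunk)]:
        print(f"{name:15s} MSE = {np.mean((est - true_lfc) ** 2):.4f}")
    ```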

  7. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    The precise and efficient noise variance estimation is very important for the processing of all kinds of signals while using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using the two-state Gaussian mixture model to classify the high-frequency wavelet coefficients in the minimum scale, which takes both the efficiency and accuracy into account. According to the noise variance estimation, a novel improved wavelet threshold function is proposed by combining the advantages of hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation shows preferable performance in processing the testing signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals including voltage, current, and oil pressure and maintain the dynamic characteristics of the signals favorably.
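
    A minimal sketch of the overall recipe, assuming the PyWavelets package: the noise level is estimated from the finest-scale detail coefficients (here with a simple median absolute deviation rule rather than the paper's Gaussian mixture classification), and a hard/soft compromise threshold function with a shape parameter stands in for the paper's improved threshold function.

    ```python
    import numpy as np
    import pywt

    def improved_threshold(w, t, alpha=0.5):
        # between pure soft (alpha=1) and pure hard (alpha=0) thresholding;
        # an illustrative compromise, not the paper's exact function
        out = np.zeros_like(w)
        mask = np.abs(w) > t
        out[mask] = np.sign(w[mask]) * (np.abs(w[mask]) - alpha * t)
        return out

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 2048)
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 2 * t))
    noisy = signal + 0.3 * rng.standard_normal(t.size)

    coeffs = pywt.wavedec(noisy, "db4", level=5)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate (MAD)
    thresh = sigma * np.sqrt(2 * np.log(noisy.size))    # universal threshold
    denoised = pywt.waverec(
        [coeffs[0]] + [improved_threshold(c, thresh) for c in coeffs[1:]], "db4")
    print(f"RMSE noisy {np.sqrt(np.mean((noisy - signal) ** 2)):.3f} -> "
          f"denoised {np.sqrt(np.mean((denoised[:signal.size] - signal) ** 2)):.3f}")
    ```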

  8. Analysis of variance to assess statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G

    2017-07-01

    Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement demonstrating their superiority to conventional disc electrodes, in particular, in accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested they may decrease the truncation error resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. Full factorial design of analysis of variance was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation computed using a finite element method model for each of the combinations of levels of three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed and the obtained results suggest that all three factors have statistically significant effects in the model confirming the potential of using inter-ring distances as a means of improving accuracy of Laplacian estimation.
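
    A minimal sketch of such a full factorial ANOVA, assuming the statsmodels package: one categorical factor (inter-ring distances) and two numerical factors (diameter, number of rings), with Relative Error as the response. The data frame holds random placeholder values, not the finite element modeling results.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(2)
    # full factorial grid of the three factors, replicated 5 times
    grid = [(d, s, r) for d in ["constant", "linear"]
            for s in [1, 2, 3] for r in [2, 3, 4]]
    df = pd.DataFrame(grid * 5, columns=["distances", "diameter", "rings"])
    # placeholder response: linearly increasing distances lower the error
    df["relative_error"] = rng.normal(10, 1, len(df)) - 2 * (df["distances"] == "linear")

    model = smf.ols("relative_error ~ C(distances) * diameter * rings", data=df).fit()
    print(anova_lm(model, typ=2))  # F-tests for main effects and interactions
    ```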

  9. Rapid flood loss estimation for large scale floods in Germany

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Merz, Bruno

    2013-04-01

    Rapid evaluations of flood events are needed for efficient responses both in emergency management and financial appraisal. Beyond that, closely monitoring and documenting the formation and development of flood events and their impacts allows for an improved understanding and in depth analyses of the interplay between meteorological, hydrological, hydraulic and societal causes leading to flood damage. This contribution focuses on the development of a methodology for the rapid assessment of flood events. In the first place, the focus is on the prediction of damage to residential buildings caused by large scale floods in Germany. For this purpose an operational flood event analysis system is developed. This system has basic spatial thematic data available and supports data capturing about the current flood situation. This includes the retrieval of online gauge data and the integration of remote sensing data. Further, it provides functionalities to evaluate the current flood situation, to assess the hazard extent and intensity and to estimate the current flood impact using the flood loss estimation model FLEMOps+r. The operation of the flood event analysis system will be demonstrated for the past flood event from January 2011 with a focus on the Elbe/Saale region. On this grounds, further requirements and potential for improving the information basis as for instance by including hydrological and /or hydraulic model results as well as information from social sensors will be discussed.

  10. The development of an hourly gridded rainfall product for hydrological applications in England and Wales

    NASA Astrophysics Data System (ADS)

    Liguori, Sara; O'Loughlin, Fiachra; Souvignet, Maxime; Coxon, Gemma; Freer, Jim; Woods, Ross

    2014-05-01

    This research presents a newly developed observed sub-daily gridded precipitation product for England and Wales. Importantly our analysis specifically allows a quantification of rainfall errors from grid to the catchment scale, useful for hydrological model simulation and the evaluation of prediction uncertainties. Our methodology involves the disaggregation of the current one kilometre daily gridded precipitation records available for the United Kingdom[1]. The hourly product is created using information from: 1) 2000 tipping-bucket rain gauges; and 2) the United Kingdom Met-Office weather radar network. These two independent datasets provide rainfall estimates at temporal resolutions much smaller than the current daily gridded rainfall product; thus allowing the disaggregation of the daily rainfall records to an hourly timestep. Our analysis is conducted for the period 2004 to 2008, limited by the current availability of the datasets. We analyse the uncertainty components affecting the accuracy of this product. Specifically we explore how these uncertainties vary spatially, temporally and with climatic regimes. Preliminary results indicate scope for improvement of hydrological model performance by the utilisation of this new hourly gridded rainfall product. Such product will improve our ability to diagnose and identify structural errors in hydrological modelling by including the quantification of input errors. References [1] Keller V, Young AR, Morris D, Davies H (2006) Continuous Estimation of River Flows. Technical Report: Estimation of Precipitation Inputs. in Agency E (ed.). Environmental Agency.

  11. An Innovative Method for Estimating Soil Retention at a Continental Scale

    EPA Science Inventory

    Planning for a sustainable future should include an accounting of services currently provided by ecosystems such as erosion control. Retention of soil improves fertility, increases water retention, and decreases sedimentation in streams and rivers. Landscapes patterns that fac...

  12. Spatial-altitudinal and temporal variation of Degree Day Factors (DDFs) in the Upper Indus Basin

    NASA Astrophysics Data System (ADS)

    Khan, Asif; Attaullah, Haleema; Masud, Tabinda; Khan, Mujahid

    2017-04-01

    Melt contribution from snow and ice in the Hindukush-Karakoram-Himalayan (HKH) region can account for more than 80% of annual river flows in the Upper Indus Basin (UIB). Increases or decreases in precipitation, energy input and glacier reserves can significantly affect the water resources of this region. Improved hydrological modelling and accurate prediction of future water resources are therefore urgently needed, being vital for food production and hydro-power generation for millions of people living downstream. In mountain regions, Degree Day Factors (DDFs) vary significantly with location and altitude, and are primary inputs of temperature-based hydrological modelling. However, previous studies have used different DDFs as calibration parameters without due attention to the physical meaning of the values employed, and these estimates carry significant variability and uncertainty. This study provides estimates of DDFs for various altitudinal zones in the UIB at the sub-basin level. Snow, clean ice and debris-covered ice melt at different rates (i.e., have different DDFs), so areally averaged DDFs based on these three surface classes in various altitudinal zones have been estimated for all sub-basins of the UIB. Zonal estimates of DDFs in the current study differ significantly from previously adopted DDFs, suggesting that earlier hydrological modelling studies should be revisited. The DDFs presented here have been validated using the Snowmelt Runoff Model (SRM) in various sub-basins, with good Nash-Sutcliffe coefficients (R2 > 0.85) and low volumetric errors (Dv < 10%). The DDFs and methods provided in this study can be used for improved hydrological modelling and accurate prediction of future river flow changes. The methodology used to estimate DDFs is robust and can be adopted to produce such estimates in other regions, particularly in other nearby HKH basins.
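
    For reference, the sketch below shows the basic degree-day calculation with class-specific DDFs areally averaged within an altitudinal zone, as described above. The DDF values, area fractions, and base temperature are illustrative placeholders, not the study's UIB estimates.

    ```python
    # Minimal sketch of areally averaged degree-day melt for one altitudinal zone.
    DDF = {"snow": 5.0, "clean_ice": 7.0, "debris_ice": 3.5}   # mm / (deg C * day), assumed
    area_frac = {"snow": 0.5, "clean_ice": 0.3, "debris_ice": 0.2}  # assumed class fractions
    T_BASE = 0.0  # melt threshold temperature (deg C)

    def zonal_melt(mean_temp_c):
        """Daily melt depth (mm w.e.) for one altitudinal zone."""
        positive_degree_days = max(mean_temp_c - T_BASE, 0.0)
        ddf_avg = sum(DDF[c] * area_frac[c] for c in DDF)  # areally averaged DDF
        return ddf_avg * positive_degree_days

    print(zonal_melt(4.0))  # e.g. a 4 deg C day -> ~21.2 mm w.e. of melt
    ```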

  13. Evaluating Childhood Vaccination Coverage of NIP Vaccines: Coverage Survey versus Zhejiang Provincial Immunization Information System.

    PubMed

    Hu, Yu; Chen, Yaping

    2017-07-11

    Vaccination coverage in Zhejiang province, east China, is evaluated through repeated coverage surveys. The Zhejiang provincial immunization information system (ZJIIS) was established in 2004 with links to all immunization clinics and has become an alternative way to quickly assess vaccination coverage. To assess the current completeness and accuracy of the vaccination coverage estimates derived from ZJIIS, we compared them with estimates from the most recent provincial coverage survey in 2014, which combined interview data with verified data from ZJIIS. Of the 2772 children enrolled in the 2014 provincial survey, the proportions with vaccination cards and registered in ZJIIS were 94.0% and 87.4%, respectively. Coverage estimates from ZJIIS were systematically higher than the corresponding survey estimates, with a mean difference of 4.5%. Of the vaccination doses registered in ZJIIS, 16.7% differed from the dates recorded on the corresponding vaccination cards. Under-registration in ZJIIS significantly influenced the coverage estimates derived from it. Periodic coverage surveys therefore currently provide more complete and reliable results than estimates based on ZJIIS alone. However, further improvement of the completeness and accuracy of ZJIIS will likely allow more reliable and timely estimates in the future.

  14. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the water cycle, a growing number of studies focus on the quantification of modelling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining the predictions of each plausible model, and each model carries a weight determined by its prior weight and marginal likelihood. The estimation of a model's marginal likelihood is thus crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE works by searching the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution proceeds iteratively via a local sampling procedure, so the efficiency of NSE is dominated by the strength of that procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling, but M-H is not an efficient sampler for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, it is therefore attractive to incorporate the robust and efficient DREAMzs sampler into the local sampling step. Comparison results demonstrate that the improved NSE raises the efficiency of marginal likelihood estimation significantly, although both the improved and original NSEs suffer from considerable instability. In addition, the heavy computational cost of the huge number of model executions is mitigated by using adaptive sparse grid surrogates.
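
    A minimal sketch of the nested sampling estimator on a toy one-dimensional problem (uniform prior, Gaussian likelihood), where the marginal likelihood is known analytically. New live points are drawn by simple rejection from the constrained prior; a DREAMzs-style local sampler would replace that step in the improved NSE.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def log_like(theta):
        # Gaussian likelihood, sd = 0.5; prior is uniform on [-5, 5]
        return -0.5 * (theta / 0.5) ** 2 - 0.5 * np.log(2 * np.pi * 0.5 ** 2)

    N, iters = 200, 1000
    live = rng.uniform(-5, 5, N)
    live_ll = log_like(live)
    logZ, log_X_prev = -np.inf, 0.0
    for i in range(1, iters + 1):
        worst = int(np.argmin(live_ll))
        log_X = -i / N                                  # prior mass shrinks geometrically
        log_w = np.log(np.exp(log_X_prev) - np.exp(log_X))
        logZ = np.logaddexp(logZ, live_ll[worst] + log_w)
        while True:  # rejection sampling from the prior subject to L > L_min
            cand = rng.uniform(-5, 5)
            if log_like(cand) > live_ll[worst]:
                break
        live[worst], live_ll[worst] = cand, log_like(cand)
        log_X_prev = log_X
    # remaining live points cover the leftover prior mass
    logZ = np.logaddexp(logZ, np.log(np.mean(np.exp(live_ll))) + log_X_prev)
    print(f"nested sampling logZ = {logZ:.3f}  (analytic ~ {np.log(0.1):.3f})")
    ```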

  15. Estimating parameters from rotating ring disc electrode measurements

    DOE PAGES

    Santhanagopalan, Shriram; White, Ralph E.

    2017-10-21

    Rotating ring disc electrode (RRDE) experiments are a classic tool for investigating the kinetics of electrochemical reactions. Several standardized methods exist for extracting transport parameters and reaction rate constants from RRDE measurements. In this work, we compare some approximate solutions to the convective diffusion equation popularly used in the literature against a rigorous numerical solution of the Nernst-Planck equations coupled to the three-dimensional flow problem. In light of these computational advancements, we explore design aspects of the RRDE that help improve the sensitivity of our parameter estimation procedure to experimental data. We use oxygen reduction in acidic media, involving three charge transfer reactions and a chemical reaction, as an example, and identify ways to isolate reaction currents for the individual processes in order to accurately estimate the exchange current densities.

  16. Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline

    NASA Technical Reports Server (NTRS)

    Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor

    2010-01-01

    A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG uses a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, and similar effects. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. A stepwise regression method is applied to estimate the relaxed weight of each observation.

  17. Investigation of Unsteady Pressure-Sensitive Paint (uPSP) and a Dynamic Loads Balance to Predict Launch Vehicle Buffet Environments

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Panda, Jayanta; Ross, James C.; Roozeboom, Nettie H.; Burnside, Nathan J.; Ngo, Christina L.; Kumagai, Hiro; Sellers, Marvin; Powell, Jessica M.; Sekula, Martin K.; hide

    2016-01-01

    This NESC assessment examined the accuracy of estimating buffet loads on in-line launch vehicles without booster attachments using sparse unsteady pressure measurements. The buffet loads computed using sparse sensor data were compared with estimates derived from measurements with much higher spatial resolution. The current method for estimating launch vehicle buffet loads is wind tunnel testing of models with approximately 400 unsteady pressure transducers. Even with this relatively large number of sensors, the coverage can be insufficient to provide reliable integrated unsteady loads on vehicles. In general, sparse sensor spacing requires the use of coherence-length-based corrections in the azimuthal and axial directions to integrate the unsteady pressures and obtain reasonable estimates of the buffet loads. Coherence corrections have been used to estimate buffet loads for a variety of launch vehicles on the assumption that the methodology yields reasonably conservative loads. For the Space Launch System (SLS), the first estimates of buffet loads exceeded the limits of the vehicle structure, so additional tests with higher sensor density were conducted to better define the buffet loads and possibly avoid expensive modifications to the vehicle design. Without the additional tests and improvements to the coherence-length analysis methods, there would have been significant impacts on vehicle weight, cost, and schedule; and if load estimates turn out to be too low, there is significant risk of structural failure of the vehicle. This assessment used a combination of unsteady pressure-sensitive paint (uPSP), unsteady pressure transducers, and a dynamic force and moment balance to investigate the integration schemes used with limited unsteady pressure data, comparing them with direct integration of extremely dense fluctuating pressure measurements. A further outcome of the assessment was an evaluation of the potential of the emerging uPSP technique in a production test environment for future launch vehicles. The results show that modifications to the current technique can improve the accuracy of buffet estimates. More importantly, the uPSP worked remarkably well and, with improvements to its frequency response, sensitivity, and productivity, will provide an enhanced method for measuring wind tunnel buffet forcing functions (BFFs).

  18. IMPROVED EQUIPMENT CLEANING IN COATED AND LAMINATED SUBSTRATE MANUFACTURING FACILITIES (PHASE I)

    EPA Science Inventory

    The report gives results of a Phase I study to characterize current equipment cleaning practices in the coated and laminated substrate manufacturing industry, to identify alternative cleaning technologies, and to identify demonstrable technologies and estimate their emissions imp...

  19. Improving rotorcraft survivability to RPG attack using inverse methods

    NASA Astrophysics Data System (ADS)

    Anderson, D.; Thomson, D. G.

    2009-09-01

    This paper presents the results of a preliminary investigation of optimal threat evasion strategies for improving the survivability of rotorcraft under attack by rocket-propelled grenades (RPGs). The approach applies inverse simulation techniques, pioneered for the simulation of aggressive helicopter manoeuvres, to the RPG engagement problem. Improvements in survivability are achieved by computing effective evasive manoeuvres. The first step uses the missile approach warning system (MAWS) camera on the aircraft to provide angular information on the threat; the RPG trajectory and impact point are then estimated. For the current flight state, an appropriate evasion response is selected and then realised via inverse simulation of the platform dynamics. Results are presented for several representative engagements, showing the efficacy of the approach.

  20. Spatial-temporal variability in groundwater abstraction across Uganda: Implications to sustainable water resources management

    NASA Astrophysics Data System (ADS)

    Nanteza, J.; Thomas, B. F.; Mukwaya, P. I.

    2017-12-01

    The general lack of knowledge about current rates of water abstraction and use is a challenge to sustainable water resources management in many countries, including Uganda. The estimates of water abstraction/use rates over Uganda currently available from the FAO are not disaggregated by source, making it difficult to understand how much is taken out of individual water stores and limiting effective management. Modelling efforts have disaggregated water use rates by source (i.e., groundwater and surface water), but over Sub-Saharan African countries these modelled estimates are highly uncertain, given the scale limitations in applying water use data (i.e., point versus regional) for model calibration and validation. In this study, we use data from the water supply atlas project over Uganda to estimate current rates of groundwater abstraction across the country based on location, well type and other relevant information. GIS techniques are employed to demarcate the area served by each water source. These areas are combined with past population distributions and the average daily water need per person to estimate water abstraction/use through time. The results indicate an increase in groundwater use and isolate regions prone to groundwater depletion, where improved management is required to sustainably manage groundwater use.
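
    The per-source calculation the abstract outlines reduces to population served times an assumed daily water need, as in the sketch below. All figures are illustrative placeholders, not values from the Uganda water supply atlas.

    ```python
    # Minimal sketch of per-source groundwater abstraction estimation.
    sources = [
        {"type": "borehole", "pop_served": 1200},
        {"type": "shallow_well", "pop_served": 450},
        {"type": "protected_spring", "pop_served": 300},
    ]
    LITRES_PER_PERSON_DAY = 20.0  # assumed basic daily water need

    groundwater_m3_per_year = sum(
        s["pop_served"] * LITRES_PER_PERSON_DAY * 365 / 1000.0 for s in sources)
    print(f"estimated annual groundwater abstraction: {groundwater_m3_per_year:,.0f} m3")
    ```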

  1. Terminal Area Productivity Airport Wind Analysis and Chicago O'Hare Model Description

    NASA Technical Reports Server (NTRS)

    Hemm, Robert; Shapiro, Gerald

    1998-01-01

    This paper describes two results from a continuing effort to provide accurate cost-benefit analyses of the NASA Terminal Area Productivity (TAP) program technologies. Previous tasks have developed airport capacity and delay models and completed preliminary cost benefit estimates for TAP technologies at 10 U.S. airports. This task covers two improvements to the capacity and delay models. The first improvement is the completion of a detailed model set for the Chicago O'Hare (ORD) airport. Previous analyses used a more general model to estimate the benefits for ORD. This paper contains a description of the model details with results corresponding to current conditions. The second improvement is the development of specific wind speed and direction criteria for use in the delay models to predict when the Aircraft Vortex Spacing System (AVOSS) will allow use of reduced landing separations. This paper includes a description of the criteria and an estimate of AVOSS utility for 10 airports based on analysis of 35 years of weather data.

  2. A New Multi-Sensor Fusion Scheme to Improve the Accuracy of Knee Flexion Kinematics for Functional Rehabilitation Movements.

    PubMed

    Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan

    2016-11-15

    Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject's movements. However, the use of these motion capture tools suffers from the lack of accuracy in estimating joint angles, which could lead to wrong data interpretation. In this study, we proposed a real time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fusion output shows a better estimation, when compared to inertial measurement units and Kinect outputs. We noted a smaller error (3.96°) compared to the one obtained using inertial sensors (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future works, to our serious game for musculoskeletal rehabilitation.
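
    A minimal sketch of the fusion idea, reduced to a scalar Kalman filter that sequentially assimilates an IMU reading and a Kinect reading of the same joint angle. The paper's method is a quaternion-based extended Kalman filter; the noise variances below are assumed values chosen for illustration.

    ```python
    import numpy as np

    R_IMU, R_KINECT, Q = 25.0, 64.0, 0.5  # assumed measurement/process variances (deg^2)
    x, P = 0.0, 100.0                     # initial angle estimate and its variance

    def kalman_update(x, P, z, R):
        K = P / (P + R)                   # Kalman gain
        return x + K * (z - x), (1 - K) * P

    rng = np.random.default_rng(4)
    true_angle = 45.0
    for _ in range(50):
        P += Q                                                    # predict (static joint assumed)
        x, P = kalman_update(x, P, true_angle + rng.normal(0, np.sqrt(R_IMU)), R_IMU)
        x, P = kalman_update(x, P, true_angle + rng.normal(0, np.sqrt(R_KINECT)), R_KINECT)
    print(f"fused estimate {x:.2f} deg, std {np.sqrt(P):.2f} deg")
    ```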

  3. An integrated uncertainty analysis and data assimilation approach for improved streamflow predictions

    NASA Astrophysics Data System (ADS)

    Hogue, T. S.; He, M.; Franz, K. J.; Margulis, S. A.; Vrugt, J. A.

    2010-12-01

    The current study presents an integrated uncertainty analysis and data assimilation approach to improve streamflow predictions while simultaneously providing meaningful estimates of the associated uncertainty. Study models include the National Weather Service (NWS) operational snow model (SNOW17) and rainfall-runoff model (SAC-SMA). The proposed approach uses the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to simultaneously estimate uncertainties in model parameters, forcing, and observations. An ensemble Kalman filter (EnKF) is configured with the DREAM-identified uncertainty structure and applied to assimilating snow water equivalent data into the SNOW17 model for improved snowmelt simulations. Snowmelt estimates then serve as an input to the SAC-SMA model to provide streamflow predictions at the basin outlet. The robustness and usefulness of the approach are evaluated for a snow-dominated watershed in the northern Sierra Mountains. This presentation describes the implementation of DREAM and EnKF in the coupled SNOW17 and SAC-SMA models and summarizes study results and findings.
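
    A minimal sketch of the EnKF analysis step for a scalar snow water equivalent (SWE) state, using the perturbed-observation form. The ensemble statistics and observation error are illustrative assumptions, not the DREAM-identified uncertainty structure of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    ens = rng.normal(120.0, 15.0, size=50)   # forecast SWE ensemble (mm), assumed
    obs, obs_var = 100.0, 25.0               # observed SWE and its error variance, assumed

    P = np.var(ens, ddof=1)                  # ensemble forecast error variance
    K = P / (P + obs_var)                    # Kalman gain (scalar state, H = 1)
    perturbed_obs = obs + rng.normal(0, np.sqrt(obs_var), ens.size)
    analysis = ens + K * (perturbed_obs - ens)
    print(f"prior mean {ens.mean():.1f} -> analysis mean {analysis.mean():.1f}, "
          f"spread {ens.std(ddof=1):.1f} -> {analysis.std(ddof=1):.1f}")
    ```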

  4. FPGA-Based Fused Smart-Sensor for Tool-Wear Area Quantitative Estimation in CNC Machine Inserts

    PubMed Central

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Experimental results show that fusing both parameters yields three times better accuracy than that obtained from the current and vibration signals used individually. PMID:22319304

  5. Improved estimates of partial volume coefficients from noisy brain MRI using spatial context.

    PubMed

    Manjón, José V; Tohka, Jussi; Robles, Montserrat

    2010-11-01

    This paper addresses the problem of accurate voxel-level estimation of tissue proportions in the human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of the brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation. Copyright 2010 Elsevier Inc. All rights reserved.

  6. Improved estimates of Belgian private health expenditure can give important lessons to other OECD countries.

    PubMed

    Calcoen, Piet; Moens, Dirk; Verlinden, Pieter; van de Ven, Wynand P M M; Pacolet, Jozef

    2015-03-01

    OECD Health Data are a well-known source for detailed information about health expenditure. These data enable us to analyze health policy issues over time and in comparison with other countries. However, current official Belgian estimates of private expenditure (as published in the OECD Health Data) have proven not to be reliable. We distinguish four potential major sources of problems with estimating private health spending: interpretation of definitions, formulation of assumptions, missing or incomplete data and incorrect data. Using alternative sources of billing information, we have reached more accurate estimates of private and out-of-pocket expenditure. For Belgium we found differences of more than 100% between our estimates and the official Belgian estimates of private health expenditure (as published in the OECD Health Data). For instance, according to OECD Health Data private expenditure on hospitals in Belgium amounts to €3.1 billion, while according to our alternative calculations these expenses represent only €1.1 billion. Total private expenditure differs only 1%, but this is a mere coincidence. This exercise may be of interest to other OECD countries looking to improve their estimates of private expenditure on health. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.

    NASA Astrophysics Data System (ADS)

    Knudsen, Thomas; Aasbjerg Nielsen, Allan

    2013-04-01

    The Danish national elevation model, DK-DEM, was introduced in 2009 and is based on LiDAR data collected in the time frame 2005-2007. Hence, DK-DEM is aging, and it is time to consider how to integrate new data with the current model in a way that improves the representation of new landscape features, while still preserving the overall (very high) quality of the model. In LiDAR terms, 2005 is equivalent to some time between the palaeolithic and the neolithic. So evidently, when (and if) an update project is launched, we may expect some notable improvements due to the technical and scientific developments from the last half decade. To estimate the magnitude of these potential improvements, and to devise efficient and effective ways of integrating the new and old data, we currently carry out a number of case studies based on comparisons between the current terrain model (with a ground sample distance, GSD, of 1.6 m), and a number of new high resolution point clouds (10-70 points/m2). Not knowing anything about the terms of a potential update project, we consider multiple scenarios ranging from business as usual: A new model with the same GSD, but improved precision, to aggressive upscaling: A new model with 4 times better GSD, i.e. a 16-fold increase in the amount of data. Especially in the latter case speeding up the gridding process is important. Luckily recent results from one of our case studies reveal that for very high resolution data in smooth terrain (which is the common case in Denmark), using local mean (LM) as grid value estimator is only negligibly worse than using the theoretically "best" estimator, i.e. ordinary kriging (OK) with rigorous modelling of the semivariogram. The bias in a leave one out cross validation differs on the micrometer level, while the RMSE differs on the 0.1 mm level. This is fortunate, since a LM estimator can be implemented in plain stream mode, letting the points from the unstructured point cloud (i.e. no TIN generation) stream through the processor, individually contributing to the nearest grid posts in a memory mapped grid file. Algorithmically this is very efficient, but it would be even more efficient if we did not have to handle so much data. Another of our recent case studies focuses on this. The basic idea is to ignore data that does not tell us anything new. We do this by looking at anomalies between the current height model and the new point cloud, then computing a correction grid for the current model. Points with insignificant anomalies are simply removed from the point cloud, and the correction grid is computed using the remaining point anomalies only. Hence, we only compute updates in areas of significant change, speeding up the process, and giving us new insight of the precision of the current model which in turn results in improved metadata for both the current and the new model. Currently we focus on simple approaches for creating a smooth update process for integration of heterogeneous data sets. On the other hand, as years go by and multiple generations of data become available, more advanced approaches will probably become necessary (e.g. a multi campaign bundle adjustment, improving the oldest data using cross-over adjustment with newer campaigns). But to prepare for such approaches, it is important already now to organize and evaluate the ancillary (GPS, INS) and engineering level data for the current data sets. 
This is essential if future generations of DEM users should be able to benefit from future conceptions of "some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models".
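
    A minimal sketch of the stream-mode local mean (LM) gridding described above: each point contributes to its nearest grid post via running sum and count arrays, with no TIN construction. The grid extent, GSD and toy point generator are assumptions for illustration.

    ```python
    import numpy as np

    GSD, NX, NY = 0.4, 1000, 1000            # 0.4 m grid over a 400 m x 400 m tile (assumed)
    grid_sum = np.zeros((NY, NX))
    grid_cnt = np.zeros((NY, NX), dtype=np.int64)

    def stream_points(n, rng):
        for _ in range(n):                   # stand-in for reading a LAS/LAZ point stream
            x, y = rng.uniform(0, NX * GSD), rng.uniform(0, NY * GSD)
            yield x, y, 10.0 + 0.01 * x + rng.normal(0, 0.05)

    rng = np.random.default_rng(6)
    for x, y, z in stream_points(500_000, rng):
        col, row = int(x / GSD), int(y / GSD)
        if 0 <= col < NX and 0 <= row < NY:  # each point touches exactly one grid post
            grid_sum[row, col] += z
            grid_cnt[row, col] += 1

    dem = np.where(grid_cnt > 0, grid_sum / np.maximum(grid_cnt, 1), np.nan)
    print(f"filled cells: {np.count_nonzero(grid_cnt)} of {NX * NY}")
    ```

    In a production setting the accumulator grids would be memory-mapped (e.g. via numpy.memmap) so that tiles larger than RAM can be gridded in a single streaming pass, which is the property the abstract highlights.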

  8. Estimating Surgical Procedure Times Using Anesthesia Billing Data and Operating Room Records.

    PubMed

    Burgette, Lane F; Mulcahy, Andrew W; Mehrotra, Ateev; Ruder, Teague; Wynn, Barbara O

    2017-02-01

    The median time required to perform a surgical procedure is important in determining payment under Medicare's physician fee schedule. Prior studies have demonstrated that the current methodology of using physician surveys to determine surgical times results in overstated times. To measure surgical times more accurately, we developed and validated a methodology using available data from anesthesia billing data and operating room (OR) records. We estimated surgical times using Medicare 2011 anesthesia claims and New York Statewide Planning and Research Cooperative System 2011 OR times. Estimated times were validated using data from the National Surgical Quality Improvement Program. We compared our time estimates to those used by Medicare in the fee schedule. We estimate surgical times via piecewise linear median regression models. Using 3.0 million observations of anesthesia and OR times, we estimated surgical time for 921 procedures. Correlation between these time estimates and directly measured surgical time from the validation database was 0.98. Our estimates of surgical time were shorter than the Medicare fee schedule estimates for 78 percent of procedures. Anesthesia and OR times can be used to measure surgical time and thereby improve the payment for surgical procedures in the Medicare fee schedule. © Health Research and Educational Trust.
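
    A minimal sketch of the modeling step, assuming the statsmodels package: median (quantile) regression of surgical time on anesthesia time, with a single hinge term making the fit piecewise linear. The synthetic data and knot location are illustrative, not the study's specification.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    anes_time = rng.uniform(30, 300, 2000)                  # anesthesia minutes (synthetic)
    surg_time = 0.8 * anes_time - 15 + rng.gumbel(0, 10, anes_time.size)

    knot = 120.0                                            # assumed breakpoint
    X = sm.add_constant(np.column_stack(
        [anes_time, np.maximum(anes_time - knot, 0)]))      # hinge term at the knot
    res = sm.QuantReg(surg_time, X).fit(q=0.5)              # median regression
    print(res.params)                                        # intercept, slope, slope change
    ```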

  9. Estimation of geopotential differences over intercontinental locations using satellite and terrestrial measurements. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Pavlis, Nikolaos K.

    1991-01-01

    An error analysis study was conducted to assess the current accuracies, and the anticipated future improvements, in the estimation of geopotential differences over intercontinental locations. An observation/estimation scheme was proposed and studied whereby gravity disturbance measurements on the Earth's surface, in caps surrounding the estimation points, are combined with corresponding data in caps directly over these points at the altitude of a low-orbiting satellite, for the estimation of the geopotential difference between the terrestrial stations. The mathematical modeling required to relate the primary observables to the parameters to be estimated was studied for both the terrestrial data and the data at altitude. Emphasis was placed on the examination of systematic effects and on the corresponding reductions that need to be applied to the measurements to avoid systematic errors. The error estimation for the geopotential differences was performed using both truncation theory and least squares collocation with ring averages for the case in which only observations on the Earth's surface are used. The error analysis indicated that with the currently available global geopotential model OSU89B and with gravity disturbance data in 2 deg caps surrounding the estimation points, the error of the geopotential difference arising from errors in the reference model and the cap data is about 23 kgal cm, for a 30 deg station separation.

  10. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance.

    PubMed

    Zheng, Binqi; Fu, Pengcheng; Li, Baoqing; Yuan, Xiaobing

    2018-03-07

    The Unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when the noise distributions assumed a priori by users mismatch those actually present in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep a standard UKF is implemented first to obtain the state estimates using the newly acquired measurement data. An online fault-detection mechanism then judges whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to estimate the current noise covariance of the process and measurement, respectively. Using a weighting factor, the filter combines the last noise covariance matrices with these estimates to form the new noise covariance matrices. Finally, the state estimates are corrected according to the new noise covariance matrices and previous state estimates. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves better robustness, accuracy, and computational performance for nonlinear estimation with uncertain noise covariance, as demonstrated by simulation results.
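
    A minimal sketch of the innovation-based covariance update with a weighting factor, reduced to the scalar measurement-noise case. The window length, weighting factor, and toy innovation stream are assumptions; the full RAUKF also includes the fault-detection test and a residual-based update for the process noise.

    ```python
    import numpy as np

    alpha = 0.3          # weighting factor blending old and estimated covariance (assumed)
    R = 1.0              # current measurement noise covariance estimate
    window = []          # recent innovations d_k = z_k - H x_k^-

    def update_R(innovation, predicted_var, R):
        window.append(innovation)
        recent = window[-30:]
        # innovation statistics satisfy E[d^2] = H P^- H' + R, so R ~ mean(d^2) - H P^- H'
        R_est = max(np.mean(np.square(recent)) - predicted_var, 1e-6)
        return (1 - alpha) * R + alpha * R_est   # weighted combination

    rng = np.random.default_rng(8)
    for k in range(200):
        true_R = 1.0 if k < 100 else 4.0         # noise level jumps mid-run
        d = rng.normal(0, np.sqrt(true_R + 0.2)) # innovation with H P^- H' = 0.2
        R = update_R(d, 0.2, R)
    print(f"adapted R ~ {R:.2f} (true 4.0)")
    ```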

  11. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance

    PubMed Central

    Zheng, Binqi; Yuan, Xiaobing

    2018-01-01

    The Unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when the noise distributions assumed a priori by users mismatch those actually present in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep a standard UKF is implemented first to obtain the state estimates using the newly acquired measurement data. An online fault-detection mechanism then judges whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to estimate the current noise covariance of the process and measurement, respectively. Using a weighting factor, the filter combines the last noise covariance matrices with these estimates to form the new noise covariance matrices. Finally, the state estimates are corrected according to the new noise covariance matrices and previous state estimates. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves better robustness, accuracy, and computational performance for nonlinear estimation with uncertain noise covariance, as demonstrated by simulation results. PMID:29518960

  12. Learning/cost-improvement curves

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1976-01-01

    This review guide is an aid to the manager or engineer who must determine production costs for components, systems, or services. It describes methods by which manufacturers may use historical data, task characteristics, and current cost data to estimate unit prices as a function of the number of units to be produced.
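
    A minimal sketch of the classic unit learning curve (Crawford model), in which each doubling of cumulative output reduces unit cost to a fixed percentage of its prior value. The 85% learning rate and first-unit cost are illustrative assumptions.

    ```python
    import math

    # Unit learning curve: cost of unit n = cost of unit 1 * n**b, b = log2(learning rate).
    def unit_cost(n, first_unit_cost=100_000.0, learning_rate=0.85):
        b = math.log2(learning_rate)          # e.g. an 85% curve gives b ~ -0.234
        return first_unit_cost * n ** b

    for n in (1, 2, 4, 8, 100):
        print(f"unit {n:3d}: ${unit_cost(n):,.0f}")
    # each doubling of cumulative units cuts unit cost to 85% of its prior value
    ```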

  13. Improving estimations of greenhouse gas transfer velocities by atmosphere-ocean couplers in Earth-System and regional models

    NASA Astrophysics Data System (ADS)

    Vieira, V. M. N. C. S.; Sahlée, E.; Jurus, P.; Clementi, E.; Pettersson, H.; Mateus, M.

    2015-09-01

    Earth-System and regional models that forecast climate change and its impacts simulate atmosphere-ocean gas exchanges using classical yet overly simple generalizations that rely on wind speed as the sole mediator, neglecting factors such as sea-surface agitation, atmospheric stability, current drag with the bottom, rain and surfactants. These factors have been shown to be fundamental for accurate estimates, particularly in the coastal ocean, where a significant part of the atmosphere-ocean greenhouse gas exchange occurs. We include several of these factors in a customizable algorithm proposed as the basis for novel couplers of the atmospheric and oceanographic model components. We tested its performance with measured and simulated data from the European coastal ocean, finding that our algorithm forecasts greenhouse gas exchanges largely different from those forecast by the generalization currently in use. Our algorithm allows vectorized calculation and parallel processing, improving computational speed roughly 12× on a single CPU core, an essential feature for Earth-System model applications.
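
    For context, the sketch below implements a wind-speed-only transfer velocity generalization of the kind the abstract argues is too simple: the widely used quadratic form k660 = a·U10²·(Sc/660)^(-1/2) with a ≈ 0.251 cm/h per (m/s)², following Wanninkhof (2014). The extra "enhancement" multiplier standing in for surfactant or stability effects is a purely hypothetical placeholder, not the paper's algorithm.

    ```python
    import numpy as np

    A_QUAD = 0.251  # quadratic coefficient, cm/h per (m/s)^2 (Wanninkhof 2014)

    def k_gas(u10, schmidt, enhancement=1.0):
        """Gas transfer velocity (cm/h) for 10-m wind speed u10 (m/s)."""
        return enhancement * A_QUAD * u10 ** 2 * (schmidt / 660.0) ** -0.5

    u10 = np.array([3.0, 7.0, 12.0])
    print(k_gas(u10, schmidt=660.0))                    # CO2 in seawater at ~20 C (Sc ~ 660)
    print(k_gas(u10, schmidt=660.0, enhancement=0.7))   # hypothetical surfactant suppression
    ```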

  14. APPLICATION OF THE 3D MODEL OF RAILWAY VIADUCTS TO COST ESTIMATION AND CONSTRUCTION

    NASA Astrophysics Data System (ADS)

    Fujisawa, Yasuo; Yabuki, Nobuyoshi; Igarashi, Zenichi; Yoshino, Hiroyuki

    Three-dimensional models of civil engineering structures are currently used only in parts of either design or construction, but not both. Research on integrating design, cost estimation and construction through 3D models has not yet been reported in the civil engineering domain. Using a 3D product model of a structure continuously from design through estimation to construction should improve efficiency and reduce the occurrence of mistakes, hence enhancing quality. In this research, we investigated current practice in the flow from design to construction, focusing particularly on cost estimation. We then identified the advantages of, and the issues in, applying 3D design models to estimation and construction in an actual railway construction project.

  15. Impact of TRMM and SSM/I-derived Precipitation and Moisture Data on the GEOS Global Analysis

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.; Olson, William S.

    1999-01-01

    Current global analyses contain significant errors in primary hydrological fields such as precipitation, evaporation, and related cloud and moisture fields in the tropics. The Data Assimilation Office at NASA's Goddard Space Flight Center has been exploring the use of space-based rainfall and total precipitable water (TPW) estimates to constrain these hydrological parameters in the Goddard Earth Observing System (GEOS) global data assimilation system. We present results showing that assimilating the 6-hour averaged rain rates and TPW estimates from the Tropical Rainfall Measuring Mission (TRMM) and Special Sensor Microwave/Imager (SSM/I) instruments not only improves the precipitation and moisture estimates but also reduces state-dependent systematic errors in key climate parameters directly linked to convection, such as outgoing longwave radiation, clouds, and the large-scale circulation. The improved analysis also improves short-range forecasts beyond 1 day, but the impact is relatively modest compared with the improvements in the time-averaged analysis. The study shows that, in the presence of biases and other errors in the forecast model, improving the short-range forecast is not necessarily a prerequisite for improving the assimilation as a climate data set. The full impact of a given type of observation on the assimilated data set should not be measured solely in terms of forecast skill.

  16. Does competition improve health care quality?

    PubMed

    Scanlon, Dennis P; Swaminathan, Shailender; Lee, Woolton; Chernew, Michael

    2008-12-01

    To identify the effect of competition on health maintenance organizations' (HMOs) quality measures, we conducted a longitudinal analysis of a 5-year panel of Healthcare Effectiveness Data and Information Set (HEDIS) and Consumer Assessment of Health Plans Survey (CAHPS) data (calendar years 1998-2002). All plans submitting data to the National Committee for Quality Assurance (NCQA) were included, regardless of their decision to allow NCQA to disclose their results publicly. Data sources were NCQA, Interstudy, the Area Resource File, and the Bureau of Labor Statistics. Fixed-effects models were estimated that relate HMO competition to HMO quality, controlling for unmeasured, time-invariant plan and market traits; the results are compared with estimates from models reliant on cross-sectional variation. Estimates suggest that plan quality does not improve with increased levels of HMO competition (as measured by either the Herfindahl index or the number of HMOs). Similarly, increased HMO penetration is generally not associated with improved quality, while cross-sectional models tend to suggest an inverse relationship between competition and quality. The strategies that promote competition among HMOs in the current market setting may therefore not lead to improved HMO quality. It is possible that price competition dominates, with purchasers and consumers preferring lower premiums at the expense of improved quality as measured by HEDIS and CAHPS. It is also possible that the fragmentation associated with competition hinders quality improvement.

  17. A digital combining-weight estimation algorithm for broadband sources with the array feed compensation system

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1994-01-01

    An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.

  18. ASTEROSEISMIC-BASED ESTIMATION OF THE SURFACE GRAVITY FOR THE LAMOST GIANT STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chao; Wu, Yue; Deng, Li-Cai

    2015-07-01

    Asteroseismology is one of the most accurate approaches for estimating the surface gravity of a star. However, most stars in current spectroscopic surveys lack asteroseismic measurements, which are expensive and time consuming to obtain. To improve the spectroscopic surface gravity estimates for a large survey data set with the help of the small subset of stars with seismic measurements, we set up a support vector regression (SVR) model for estimating surface gravity, supervised by 1374 Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) giant stars with Kepler seismic surface gravity. The new approach reduces the uncertainty of the estimates to about 0.1 dex, better than the LAMOST pipeline by at least a factor of 2, for spectra with signal-to-noise ratio higher than 20. Compared with the log g estimated by the LAMOST pipeline, the revised log g values provide a significantly improved match to the expected distribution of red clump and red giant branch stars from stellar isochrones. Moreover, even the red bump stars, which span only about 0.1 dex in log g, can be discriminated in the new surface gravity estimates. The method is then applied to about 350,000 LAMOST metal-rich giant stars to provide improved surface gravity estimates. In general, the uncertainty of distance estimates based on the SVR surface gravity can be reduced to about 12% for the LAMOST data.
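
    A minimal sketch of the supervised setup, assuming the scikit-learn package: a support vector regression model is trained on spectra labeled with seismic log g and then applied to new spectra. Random vectors stand in for LAMOST spectra, and the kernel and hyperparameters are illustrative, not the paper's tuned values.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(9)
    n_train, n_pix = 1374, 300
    spectra = rng.normal(size=(n_train, n_pix))             # placeholder flux vectors
    # placeholder labels with a simple dependence on the first "pixel"
    logg_seismic = 2.5 + 0.5 * spectra[:, 0] + rng.normal(0, 0.1, n_train)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
    model.fit(spectra, logg_seismic)

    new_spectra = rng.normal(size=(5, n_pix))
    print(model.predict(new_spectra))                        # revised log g estimates
    ```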

  19. The Role of Satellite Imagery to Improve Pastureland Estimates in South America

    NASA Astrophysics Data System (ADS)

    Graesser, J.

    2015-12-01

    Agriculture has changed substantially across the globe over the past half century. While much work has been done to improve spatial-temporal estimates of agricultural changes, we still know more about the extent of row-crop agriculture than livestock-grazed land. The gap between cropland and pastureland estimates exists largely because it is challenging to characterize natural versus grazed grasslands from a remote sensing perspective. However, the impasse of pastureland estimates is set to break, with an increasing number of spaceborne sensors and freely available satellite data. The Landsat satellite archive in particular provides researchers with immense amounts of data to improve pastureland information. Here we focus on South America, where pastureland expansion has been scrutinized for the past few decades. We explore the challenges of estimating pastureland using temporal Landsat imagery and focus on key agricultural countries, regions, and ecosystems. We focus on the suggested shift of pastureland from the Argentine Pampas to northern Argentina, and the mixing of small-scale and large-scale ranching in eastern Paraguay and how it could impact the Chaco forest to the west. Further, the Beni Savannahs of northern Bolivia and the Colombian Llanos—both grassland and savannah regions historically used for livestock grazing—have been hinted at as future areas for cropland expansion. There are certainly environmental concerns with pastureland expansion into forests; but what are the environmental implications when well-managed pasture systems are converted to intensive soybean or palm oil plantation? Tropical, grazed grasslands are important habitats for biodiversity, and pasturelands can mitigate soil erosion when well managed. Thus, we must improve estimates of grazed land before we can make informed policy and conservation decisions. This talk presents insights into pastureland estimates in South America and discusses the feasibility to improve current areal, land use, and scale estimates of livestock grazing using satellite imagery.

  20. The current economic burden of illness of osteoporosis in Canada

    PubMed Central

    Burke, N.; Von Keyserlingk, C.; Leslie, W. D.; Morin, S. N.; Adachi, J. D.; Papaioannou, A.; Bessette, L.; Brown, J. P.; Pericleous, L.; Tarride, J.

    2016-01-01

    Summary We estimate the current burden of illness of osteoporosis in Canada is double ($4.6 billion) our previous estimates ($2.3 billion) due to improved data capture of the multiple encounters and services that accompany a fracture: emergency room, admissions to acute and step-down non-acute institutions, rehabilitation, home-assisted or long-term residency support. Introduction We previously estimated the economic burden of illness of osteoporosis-attributable fractures in Canada for the year 2008 to be $2.3 billion in the base case and as much as $3.9 billion. The aim of this study is to update the estimate of the economic burden of illness for osteoporosis-attributable fractures for Canada based on newly available home care and long-term care (LTC) data. Methods Multiple national databases were used for the fiscal-year ending March 31, 2011 (FY 2010/2011) for acute institutional care, emergency visits, day surgery, secondary admissions for rehabilitation, and complex continuing care, as well as national dispensing data for osteoporosis medications. Gaps in national data were supplemented by provincial and community survey data. Osteoporosis-attributable fractures for Canadians age 50+ were identified by ICD-10-CA codes. Costs were expressed in 2014 dollars. Results In FY 2010/2011, the number of osteoporosis-attributable fractures was 131,443 resulting in 64,884 acute care admissions and 983,074 acute hospital days. Acute care costs were $1.5 billion, an 18 % increase since 2008. The cost of LTC was 33.4 times the previous estimate ($31 million versus $1.03 billion) because of improved data capture. The cost for rehabilitation and secondary admissions increased 3.4 fold, while drug costs decreased 19 %. The overall cost of osteoporosis was over $4.6 billion, an increase of 83 % from the 2008 estimate. Conclusion Since the 2008 estimate, new Canadian data on home care and LTC are available which provided a better estimate of the burden of osteoporosis in Canada. This suggests that our previous estimates were seriously underestimated. PMID:27166680

  1. The current economic burden of illness of osteoporosis in Canada.

    PubMed

    Hopkins, R B; Burke, N; Von Keyserlingk, C; Leslie, W D; Morin, S N; Adachi, J D; Papaioannou, A; Bessette, L; Brown, J P; Pericleous, L; Tarride, J

    2016-10-01

    We estimate that the current burden of illness of osteoporosis in Canada is double our previous estimate ($4.6 billion versus $2.3 billion) due to improved data capture of the multiple encounters and services that accompany a fracture: emergency room, admissions to acute and step-down non-acute institutions, rehabilitation, and home-assisted or long-term residency support. We previously estimated the economic burden of illness of osteoporosis-attributable fractures in Canada for the year 2008 to be $2.3 billion in the base case and as much as $3.9 billion. The aim of this study is to update the estimate of the economic burden of illness for osteoporosis-attributable fractures for Canada based on newly available home care and long-term care (LTC) data. Multiple national databases were used for the fiscal year ending March 31, 2011 (FY 2010/2011) for acute institutional care, emergency visits, day surgery, secondary admissions for rehabilitation, and complex continuing care, as well as national dispensing data for osteoporosis medications. Gaps in national data were supplemented by provincial and community survey data. Osteoporosis-attributable fractures for Canadians age 50+ were identified by ICD-10-CA codes. Costs were expressed in 2014 dollars. In FY 2010/2011, the number of osteoporosis-attributable fractures was 131,443, resulting in 64,884 acute care admissions and 983,074 acute hospital days. Acute care costs were $1.5 billion, an 18% increase since 2008. The cost of LTC was 33.4 times the previous estimate ($1.03 billion now versus $31 million previously) because of improved data capture. The cost of rehabilitation and secondary admissions increased 3.4-fold, while drug costs decreased 19%. The overall cost of osteoporosis was over $4.6 billion, an increase of 83% from the 2008 estimate. Since the 2008 estimate, new Canadian data on home care and LTC have become available, which provide a better estimate of the burden of osteoporosis in Canada. This suggests that our previous estimates seriously underestimated the true burden.

  2. U.S. Virgin Islands Transportation Petroleum Reduction Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, C.

    2011-09-01

    This NREL technical report determines a way for USVI to meet its petroleum reduction goal in the transportation sector. It does so first by estimating current petroleum use and key statistics and characteristics of USVI transportation. It then breaks the goal down into subordinate goals and estimates the petroleum impacts of these goals with a wedge analysis. These goals focus on reducing vehicle miles, improving fuel economy, improving traffic flow, using electric vehicles, using biodiesel and renewable diesel, and using 10% ethanol in gasoline. The final section of the report suggests specific projects to achieve the goals, and ranks the projects according to cost, petroleum reduction, time frame, and popularity.

  3. Demographic and traditional knowledge perspectives on the current status of Canadian polar bear subpopulations.

    PubMed

    York, Jordan; Dowsley, Martha; Cornwell, Adam; Kuc, Miroslaw; Taylor, Mitchell

    2016-05-01

    Subpopulation growth rates and the probability of decline at current harvest levels were determined for 13 subpopulations of polar bears (Ursus maritimus) that are within or shared with Canada, using population viability analyses (PVA) based on mark-recapture estimates of population numbers and vital rates together with harvest statistics. Aboriginal traditional ecological knowledge (TEK) on subpopulation trend agreed with the seven stable/increasing results and one of the declining results, but disagreed with the PVA status of five other declining subpopulations. The decline in the Baffin Bay subpopulation appeared to be due to over-reporting of harvested numbers from outside Canada. The remaining four disputed subpopulations (Southern Beaufort Sea, Northern Beaufort Sea, Southern Hudson Bay, and Western Hudson Bay) were all incompletely mark-recapture (M-R) sampled, which may have biased their survival and subpopulation estimates. Three of the four incompletely sampled subpopulations were identified by PVA as nonviable (i.e., declining even with zero harvest mortality). TEK disagreement was nonrandom with respect to M-R sampling protocols. Cluster analysis also grouped subpopulations with ambiguous demographic and harvest rate estimates separately from those with apparently reliable demographic estimates, based on PVA probability of decline and unharvested subpopulation growth rate criteria. We suggest that the correspondence between TEK and scientific results can be used to improve the reliability of information on natural systems and thus improve resource management. Considering both TEK and scientific information, we suggest that the current status of Canadian polar bear subpopulations in 2013 was 12 stable/increasing and one declining (Kane Basin). We do not find support for the perspective that polar bears within or shared with Canada are currently in any sort of climate crisis. We suggest that monitoring the impacts of climate change (including sea ice decline) on polar bear subpopulations should be continued and enhanced and that adaptive management practices are warranted.
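
    The probability-of-decline calculation at the heart of a PVA can be illustrated with a short Monte Carlo sketch; this is a minimal example in Python, with all demographic numbers as invented placeholders rather than values from the study:

        # Minimal PVA sketch: project a harvested population forward many times
        # and count how often it ends below its starting size. All parameters
        # are illustrative placeholders, not values from the study.
        import numpy as np

        rng = np.random.default_rng(0)

        def prob_of_decline(n0=1500, growth_mean=1.02, growth_sd=0.05,
                            harvest=60, years=15, trials=10_000):
            """Fraction of simulated trajectories ending below the starting size."""
            declines = 0
            for _ in range(trials):
                n = n0
                for _ in range(years):
                    n = max(n * rng.normal(growth_mean, growth_sd) - harvest, 0.0)
                if n < n0:
                    declines += 1
            return declines / trials

        print(f"P(decline) at current harvest: {prob_of_decline():.2f}")
        print(f"P(decline) with zero harvest:  {prob_of_decline(harvest=0):.2f}")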

  4. Exemplar-based human action pose correction.

    PubMed

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of the Xbox Kinect built a very successful computer vision product and made a major impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is a critical step in all of them. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
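
    A heavily simplified sketch of the exemplar idea: store (estimated, true) pose pairs, then correct a new estimate with the offset of its nearest exemplar. The learned inhomogeneous bias in the paper is far richer; this only conveys the mechanism:

        # Nearest-exemplar bias correction (simplified stand-in for the
        # paper's learned correction model). Poses are flat feature vectors.
        import numpy as np

        def correct_pose(estimated, exemplar_estimates, exemplar_truths):
            # Find the training exemplar whose estimated pose is closest (L2).
            d = np.linalg.norm(exemplar_estimates - estimated, axis=1)
            i = np.argmin(d)
            # Apply that exemplar's systematic offset to the new estimate.
            return estimated + (exemplar_truths[i] - exemplar_estimates[i])

        ex_est = np.array([[0.0, 0.0], [1.0, 1.0]])
        ex_true = np.array([[0.1, 0.0], [1.0, 1.2]])
        print(correct_pose(np.array([0.9, 1.1]), ex_est, ex_true))  # -> [0.9, 1.3]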

  5. Spectromicroscopy and coherent diffraction imaging: focus on energy materials applications.

    PubMed

    Hitchcock, Adam P; Toney, Michael F

    2014-09-01

    Current and future capabilities of X-ray spectromicroscopy are discussed based on coherence-limited imaging methods, which will benefit from the dramatic increase in brightness expected from a diffraction-limited storage ring (DLSR). The methods discussed include advanced coherent diffraction techniques and nanoprobe-based real-space imaging using Fresnel zone plates or other diffractive optics whose performance is affected by the degree of coherence. The capabilities of current systems, the improvements which can be expected, and some of the important scientific themes which will be impacted are described, with a focus on energy materials applications. Potential performance improvements of these techniques based on anticipated DLSR performance are estimated. Several examples of energy sciences research problems which are out of reach of current instrumentation, but which might be solved with the enhanced DLSR performance, are discussed.

  6. Simulated wind-generated inertial oscillations compared to current measurements in the northern North Sea

    NASA Astrophysics Data System (ADS)

    Bruserud, Kjersti; Haver, Sverre; Myrhaug, Dag

    2018-06-01

    Measured current speed data show that episodes of wind-generated inertial oscillations dominate the current conditions in parts of the northern North Sea. In order to acquire current data of sufficient duration for robust estimation of joint metocean design conditions, such as wind, waves, and currents, a simple model for episodes of wind-generated inertial oscillations is adapted for the northern North Sea. The model is validated with and compared against measured current data at one location in the northern North Sea and found to reproduce the measured maximum current speed in each episode with considerable accuracy. The comparison is further improved when a small general background current is added to the simulated maximum current speeds. Extreme values of measured and simulated current speed are estimated and found to compare well. To assess the robustness of the model and the sensitivity of current conditions from location to location, the validated model is applied at three other locations in the northern North Sea. In general, the simulated maximum current speeds are smaller than the measured, suggesting that wind-generated inertial oscillations are not as prominent at these locations and that other current conditions may be governing. Further analysis of the simulated current speed and joint distribution of wind, waves, and currents for design of offshore structures will be presented in a separate paper.

  7. Simulated wind-generated inertial oscillations compared to current measurements in the northern North Sea

    NASA Astrophysics Data System (ADS)

    Bruserud, Kjersti; Haver, Sverre; Myrhaug, Dag

    2018-04-01

    Measured current speed data show that episodes of wind-generated inertial oscillations dominate the current conditions in parts of the northern North Sea. In order to acquire current data of sufficient duration for robust estimation of joint metocean design conditions, such as wind, waves, and currents, a simple model for episodes of wind-generated inertial oscillations is adapted for the northern North Sea. The model is validated with and compared against measured current data at one location in the northern North Sea and found to reproduce the measured maximum current speed in each episode with considerable accuracy. The comparison is further improved when a small general background current is added to the simulated maximum current speeds. Extreme values of measured and simulated current speed are estimated and found to compare well. To assess the robustness of the model and the sensitivity of current conditions from location to location, the validated model is applied at three other locations in the northern North Sea. In general, the simulated maximum current speeds are smaller than the measured, suggesting that wind-generated inertial oscillations are not as prominent at these locations and that other current conditions may be governing. Further analysis of the simulated current speed and joint distribution of wind, waves, and currents for design of offshore structures will be presented in a separate paper.
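
    The episodic behaviour described in the preceding records can be illustrated with a slab-ocean toy model, in which a wind event impulsively spins up a current that then rotates at the local inertial frequency f = 2Ω sin(latitude) while decaying. This is a generic sketch with invented parameters, not the paper's calibrated model:

        # Toy slab-ocean inertial oscillation: a wind impulse starts a current
        # of amplitude u0 that rotates at the inertial frequency and decays.
        import numpy as np

        OMEGA = 7.2921e-5            # Earth's rotation rate [rad/s]

        def inertial_current(u0, lat_deg, t_hours, e_fold_days=3.0):
            """Speed of a decaying inertial oscillation of initial amplitude u0 [m/s]."""
            f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))
            t = t_hours * 3600.0
            decay = np.exp(-t / (e_fold_days * 86400.0))
            u = u0 * decay * np.cos(f * t)   # rotating velocity components
            v = -u0 * decay * np.sin(f * t)
            return np.hypot(u, v)

        # Inertial period at 61 N (northern North Sea): 2*pi/f in hours
        f = 2 * OMEGA * np.sin(np.radians(61.0))
        print(f"Inertial period: {2*np.pi/f/3600:.1f} h")
        print(f"Speed 6 h after the event: {inertial_current(0.5, 61.0, 6.0):.2f} m/s")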

  8. On improving the speed and reliability of T2-Relaxation-Under-Spin-Tagging (TRUST) MRI

    PubMed Central

    Xu, Feng; Uh, Jinsoo; Liu, Peiying; Lu, Hanzhang

    2011-01-01

    A T2-Relaxation-Under-Spin-Tagging (TRUST) technique was recently developed to estimate cerebral blood oxygenation, providing the potential for non-invasive assessment of the brain's oxygen consumption. A limitation of the current sequence is the need for a long TR, as a shorter TR causes an over-estimation in blood R2. The present study proposes a post-saturation TRUST by placing a non-selective 90° pulse after the signal acquisition to reset magnetization in the whole brain. This scheme was found to eliminate estimation bias at a slight cost of precision. To improve the precision, the TE of the sequence was optimized, and it was found that a modest TE shortening of 3.4 ms can reduce the estimation error by 49%. We recommend the use of the post-saturation TRUST sequence with a TR of 3000 ms and a TE of 3.6 ms, which allows the determination of global venous oxygenation with a scan duration of 1 minute 12 seconds and an estimation precision of ±1% (in units of oxygen saturation percentage). PMID:22127845
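
    The blood R2 estimate underlying TRUST comes from a monoexponential fit of the difference signal across effective TEs, S(TE) = S0·exp(−R2·TE); a minimal sketch with synthetic numbers (the TE values and signals are placeholders, not protocol values):

        # Log-linear monoexponential fit for blood R2 from TRUST-style data.
        import numpy as np

        eTE = np.array([0.0, 40e-3, 80e-3, 160e-3])      # effective TEs [s]
        S = 1000.0 * np.exp(-65.0 * eTE)                 # synthetic decay, R2 = 65 1/s
        S_noisy = S * (1 + 0.01 * np.random.default_rng(1).standard_normal(S.size))

        slope, intercept = np.polyfit(eTE, np.log(S_noisy), 1)
        print(f"Estimated blood R2: {-slope:.1f} 1/s")   # ~65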

  9. Estimated SLR station position and network frame sensitivity to time-varying gravity

    NASA Astrophysics Data System (ADS)

    Zelensky, Nikita P.; Lemoine, Frank G.; Chinn, Douglas S.; Melachroinos, Stavros; Beckley, Brian D.; Beall, Jennifer Wiser; Bordyugov, Oleg

    2014-06-01

    This paper evaluates the sensitivity of ITRF2008-based satellite laser ranging (SLR) station positions estimated weekly using LAGEOS-1/2 data from 1993 to 2012 to non-tidal time-varying gravity (TVG). Two primary methods for modeling TVG from degree 2 are employed. The operational approach applies an annual GRACE-derived field and IERS-recommended linear rates for five coefficients. The experimental approach uses low-order/degree coefficients estimated weekly from SLR and DORIS processing of up to 11 satellites (tvg4x4). This study shows that the LAGEOS-1/2 orbits and the weekly station solutions are sensitive to more detailed modeling of TVG than prescribed in the current IERS standards. Over 1993-2012, tvg4x4 improves SLR residuals by 18% and shows a 10% RMS improvement in station stability. Tests suggest that the improved stability of the tvg4x4 POD solution frame may help clarify geophysical signals present in the estimated station position time series. The signals include linear and seasonal station motion, and motion of the TRF origin, particularly in Z. The effect on both POD and the station solutions becomes increasingly evident starting in 2006. Over 2008-2012, the tvg4x4 series improves SLR residuals by 29%. Use of the GRGS RL02 series shows similar improvement in POD. Using tvg4x4, the secular change in the TRF origin Z component doubles over the last decade; although not conclusive, this is consistent with the increased geocenter rate expected from continental ice melt. The test results indicate that accurate modeling of TVG is necessary for improving station position estimation using SLR data.

  10. Modeling of dust deposition in central Asia

    USDA-ARS?s Scientific Manuscript database

    The deposition of dust particles has a significant influence on the global bio-geochemical cycle. Currently, the lack of spatiotemporal data creates great uncertainty in estimating the global dust budget. To improve our understanding of the fate, transport and cycling of airborne dust, there is a ne...

  11. Novel Filtration Markers for GFR Estimation

    PubMed Central

    Inker, Lesley A.; Coresh, Josef; Levey, Andrew S.; Eckfeldt, John H.

    2017-01-01

    Creatinine-based glomerular filtration rate estimation (eGFRcr) has been improved and refined since the 1970s, through both the Modification of Diet in Renal Disease (MDRD) Study equation in 1999 and the CKD Epidemiology Collaboration (CKD-EPI) equation in 2009, with current clinical practice dependent primarily on eGFR for accurate assessment of GFR. However, researchers and clinicians have recognized limitations of relying on creatinine as the only filtration marker, which can lead to inaccurate GFR estimates in certain populations due to the influence of non-GFR determinants of serum or plasma creatinine. Therefore, recent literature has proposed incorporation of multiple serum or plasma filtration markers into GFR estimation to improve precision and accuracy and decrease the impact of non-GFR determinants for any individual biomarker. To this end, the CKD-EPI combined creatinine-cystatin C equation (eGFRcr-cys) was developed in 2012 and demonstrated superior accuracy to equations relying on creatinine or cystatin C alone (eGFRcr or eGFRcys). Now, the focus has broadened to include additional novel filtration markers to further refine and improve GFR estimation. Beta-2-microglobulin (B2M) and beta-trace protein (BTP) are two filtration markers with established assays that have been proposed as candidates for improving both GFR estimation and risk prediction. GFR estimating equations based on B2M and BTP have been developed and validated, with the CKD-EPI combined BTP-B2M equation (eGFRBTP-B2M) demonstrating performance similar to the creatinine- and cystatin C-based equations. Additionally, several studies have demonstrated that both B2M and BTP are associated with outcomes in CKD patients, including cardiovascular events, ESRD, and mortality. This review primarily focuses on these two biomarkers and highlights efforts to identify additional candidate biomarkers through metabolomics-based approaches. PMID:29333147
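
    For concreteness, a sketch of the 2009 CKD-EPI creatinine equation (eGFRcr) shows how a single filtration marker enters GFR estimation; the coefficients below are the published ones for serum creatinine in mg/dL, with eGFR in mL/min/1.73 m²:

        # 2009 CKD-EPI creatinine equation (eGFRcr), as published.
        def egfr_ckd_epi_cr(scr, age, female, black=False):
            kappa = 0.7 if female else 0.9
            alpha = -0.329 if female else -0.411
            egfr = (141.0
                    * min(scr / kappa, 1.0) ** alpha
                    * max(scr / kappa, 1.0) ** -1.209
                    * 0.993 ** age)
            if female:
                egfr *= 1.018
            if black:
                egfr *= 1.159
            return egfr

        # Example: 55-year-old woman with serum creatinine 1.0 mg/dL
        print(f"{egfr_ckd_epi_cr(1.0, 55, female=True):.0f} mL/min/1.73m^2")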

  12. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
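
    The core of the full-covariance idea is generalized least squares: replacing the white-noise normal equations AᵀA with AᵀC⁻¹A, which also yields more realistic formal errors. A minimal sketch with a synthetic colored-noise covariance (not GRACE data):

        # OLS (white-noise assumption) vs GLS with a full observation
        # covariance C, the essence of modeling colored noise.
        import numpy as np

        rng = np.random.default_rng(2)
        A = rng.standard_normal((200, 5))            # placeholder design matrix
        x_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])

        # Build a correlated (colored) noise covariance and one sample of it.
        t = np.arange(200)
        C = 0.1 * np.exp(-np.abs(t[:, None] - t[None, :]) / 20.0)
        noise = np.linalg.cholesky(C) @ rng.standard_normal(200)
        y = A @ x_true + noise

        Ci = np.linalg.inv(C)
        N = A.T @ Ci @ A
        x_gls = np.linalg.solve(N, A.T @ Ci @ y)
        formal_sd = np.sqrt(np.diag(np.linalg.inv(N)))   # realistic formal errors
        x_ols = np.linalg.lstsq(A, y, rcond=None)[0]
        print("GLS:", np.round(x_gls, 2), "OLS:", np.round(x_ols, 2))
        print("GLS formal sd:", np.round(formal_sd, 3))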

  13. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    PubMed

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1, and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the RBE is unbiased and its variance is usually smaller. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
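
    The Rao-Blackwell idea can be shown on a toy discrete-state Gibbs sampler: instead of counting visits to each λ state, average the full conditional p(λ | x) at every sweep. A self-contained sketch, unrelated to any specific force field:

        # Toy Gibbs sampler with a discrete state lambda and auxiliary x.
        # Empirical estimator: indicator counts. RB estimator: average of the
        # full conditional p(lambda | x) at each sweep (unbiased, lower variance).
        import numpy as np

        rng = np.random.default_rng(3)
        mu = np.array([-1.0, 0.0, 2.0])        # one "ligand state" per component
        K = len(mu)

        lam, sweeps = 0, 20_000
        counts = np.zeros(K)
        rb = np.zeros(K)
        for _ in range(sweeps):
            x = rng.normal(mu[lam], 1.0)                     # sample x | lambda
            logp = -0.5 * (x - mu) ** 2                      # conditional of lambda
            p = np.exp(logp - logp.max()); p /= p.sum()
            lam = rng.choice(K, p=p)                         # sample lambda | x
            counts[lam] += 1
            rb += p                                          # Rao-Blackwell term

        print("empirical:", counts / sweeps)                 # both -> 1/3 each
        print("RB       :", rb / sweeps)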

  14. Evaluating Childhood Vaccination Coverage of NIP Vaccines: Coverage Survey versus Zhejiang Provincial Immunization Information System

    PubMed Central

    Hu, Yu; Chen, Yaping

    2017-01-01

    Vaccination coverage in Zhejiang province, east China, is evaluated through repeated coverage surveys. The Zhejiang provincial immunization information system (ZJIIS) was established in 2004 with links to all immunization clinics. ZJIIS has become an alternative means of quickly assessing vaccination coverage. To assess the current completeness and accuracy of the vaccination coverage derived from ZJIIS, we compared the estimates from ZJIIS with the estimates from the most recent provincial coverage survey in 2014, which combined interview data with verified data from ZJIIS. Of the 2772 children enrolled in the 2014 provincial survey, the proportions of children with vaccination cards and registered in ZJIIS were 94.0% and 87.4%, respectively. Coverage estimates from ZJIIS were systematically higher than the corresponding estimates obtained through the survey, with a mean difference of 4.5%. Of the vaccination doses registered in ZJIIS, 16.7% differed from the date recorded in the corresponding vaccination cards. Under-registration in ZJIIS significantly influenced the coverage estimates derived from ZJIIS. Therefore, periodic coverage surveys currently provide more complete and reliable results than estimates based on ZJIIS alone. However, further improvement of the completeness and accuracy of ZJIIS will likely allow more reliable and timely estimates in the future. PMID:28696387

  15. Assessing vaccination coverage in infants, survey studies versus the Flemish immunisation register: achieving the best of both worlds.

    PubMed

    Braeckman, Tessa; Lernout, Tinne; Top, Geert; Paeps, Annick; Roelants, Mathieu; Hoppenbrouwers, Karel; Van Damme, Pierre; Theeten, Heidi

    2014-01-09

    Infant immunisation coverage in Flanders, Belgium, is monitored through repeated coverage surveys. With the increased use of Vaccinnet, the web-based ordering system for vaccines in Flanders set up in 2004 and linked to an immunisation register, this database could become an alternative to quickly estimate vaccination coverage. To evaluate its current accuracy, coverage estimates generated from Vaccinnet alone were compared with estimates from the most recent survey (2012), which combined interview data with data from Vaccinnet and medical files. Coverage rates from registrations in Vaccinnet were systematically lower than the corresponding estimates obtained through the survey (mean difference 7.7%). This difference increased by dose number for vaccines that require multiple doses. Differences in administration date between the two sources were observed for 3.8-8.2% of registered doses. Underparticipation in Vaccinnet thus significantly impacts the register-based immunisation coverage estimates, amplified by underregistration of administered doses among vaccinators using Vaccinnet. Therefore, survey studies, despite being labour-intensive and expensive, currently provide more complete and reliable results than register-based estimates alone in Flanders. However, further improvement of Vaccinnet's completeness will likely allow more accurate estimates in the near future. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Higher heritabilities for gait components than for overall gait scores may improve mobility in ducks.

    PubMed

    Duggan, Brendan M; Rae, Anne M; Clements, Dylan N; Hocking, Paul M

    2017-05-02

    Genetic progress in selection for greater body mass and meat yield in poultry has been associated with an increase in gait problems which are detrimental to productivity and welfare. The incidence of suboptimal gait in breeding flocks is controlled through the use of a visual gait score, which is a subjective assessment of walking ability of each bird. The subjective nature of the visual gait score has led to concerns over its effectiveness in reducing the incidence of suboptimal gait in poultry through breeding. The aims of this study were to assess the reliability of the current visual gait scoring system in ducks and to develop a more objective method to select for better gait. Experienced gait scorers assessed short video clips of walking ducks to estimate the reliability of the current visual gait scoring system. Kendall's coefficients of concordance between and within observers were estimated at 0.49 and 0.75, respectively. In order to develop a more objective scoring system, gait components were visually scored on more than 4000 pedigreed Pekin ducks and genetic parameters were estimated for these components. Gait components, which are a more objective measure, had heritabilities that were as good as, or better than, those of the overall visual gait score. Measurement of gait components is simpler and therefore more objective than the standard visual gait score. The recording of gait components can potentially be automated, which may increase accuracy further and may improve heritability estimates. Genetic correlations were generally low, which suggests that it is possible to use gait components to select for an overall improvement in both economic traits and gait as part of a balanced breeding programme.
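
    Kendall's coefficient of concordance, the agreement statistic quoted above, is straightforward to compute; a minimal sketch follows (ties correction omitted for brevity; real gait scores would typically contain ties):

        # Kendall's W for m observers ranking n subjects (here, walking ducks).
        import numpy as np
        from scipy.stats import rankdata

        def kendalls_w(scores):
            """scores: (m observers, n subjects) array of gait scores."""
            ranks = np.apply_along_axis(rankdata, 1, scores)   # rank within observer
            m, n = ranks.shape
            rank_sums = ranks.sum(axis=0)
            s = ((rank_sums - rank_sums.mean()) ** 2).sum()
            return 12.0 * s / (m ** 2 * (n ** 3 - n))

        scores = np.array([[1, 2, 3, 4, 5],
                           [2, 1, 3, 5, 4],
                           [1, 3, 2, 4, 5]])
        print(f"W = {kendalls_w(scores):.2f}")   # 1 = perfect agreement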

  17. Extracting Prior Distributions from a Large Dataset of In-Situ Measurements to Support SWOT-based Estimation of River Discharge

    NASA Astrophysics Data System (ADS)

    Hagemann, M.; Gleason, C. J.

    2017-12-01

    The upcoming (2021) Surface Water and Ocean Topography (SWOT) NASA satellite mission aims, in part, to estimate discharge on major rivers worldwide using reach-scale measurements of stream width, slope, and height. Current formalizations of channel and floodplain hydraulics are insufficient to fully constrain this problem mathematically, resulting in an infinitely large solution set for any set of satellite observations. Recent work has reformulated this problem in a Bayesian statistical setting, in which the likelihood distributions derive directly from hydraulic flow-law equations. When coupled with prior distributions on unknown flow-law parameters, this formulation probabilistically constrains the parameter space, and results in a computationally tractable description of discharge. Using a curated dataset of over 200,000 in-situ acoustic Doppler current profiler (ADCP) discharge measurements from over 10,000 USGS gaging stations throughout the United States, we developed empirical prior distributions for flow-law parameters that are not observable by SWOT, but that are required in order to estimate discharge. This analysis quantified prior uncertainties on quantities including cross-sectional area, at-a-station hydraulic geometry width exponent, and discharge variability, that are dependent on SWOT-observable variables including reach-scale statistics of width and height. When compared against discharge estimation approaches that do not use this prior information, the Bayesian approach using ADCP-derived priors demonstrated consistently improved performance across a range of performance metrics. This Bayesian approach formally transfers information from in-situ gaging stations to remote-sensed estimation of discharge, in which the desired quantities are not directly observable. Further investigation using large in-situ datasets is therefore a promising way forward in improving satellite-based estimates of river discharge.
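
    A toy version of the Bayesian machinery: a Manning-type flow law links SWOT-observable width and slope to discharge, while priors on the unobservable parameters (base cross-sectional area and roughness) stand in for the in-situ-derived priors. Every number below is invented for illustration; the lognormal prior shapes and the Gaussian likelihood are assumptions, not the authors' exact formulation:

        # Importance-sampling sketch of prior-constrained discharge estimation.
        import numpy as np

        rng = np.random.default_rng(4)

        def manning_q(A, W, S, n):
            # Wide-channel Manning: Q = (1/n) * A^(5/3) * W^(-2/3) * sqrt(S)
            return A ** (5.0 / 3.0) * W ** (-2.0 / 3.0) * np.sqrt(S) / n

        # "Observed" reach: width [m], slope, noisy discharge proxy [m^3/s]
        W_obs, S_obs, Q_proxy, sigma = 150.0, 1e-4, 420.0, 60.0

        # Priors on unobservables (stand-ins for the ADCP-derived priors)
        A0 = rng.lognormal(mean=np.log(500.0), sigma=0.5, size=100_000)
        n = rng.lognormal(mean=np.log(0.035), sigma=0.3, size=100_000)

        # Importance weights from a Gaussian likelihood of the proxy
        Q = manning_q(A0, W_obs, S_obs, n)
        w = np.exp(-0.5 * ((Q - Q_proxy) / sigma) ** 2)
        print(f"Posterior mean discharge: {np.average(Q, weights=w):.0f} m^3/s")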

  18. Calibration and Validation of Landsat Tree Cover in the Taiga-Tundra Ecotone

    NASA Technical Reports Server (NTRS)

    Montesano, Paul Mannix; Neigh, Christopher S. R.; Sexton, Joseph; Feng, Min; Channan, Saurabh; Ranson, Kenneth J.; Townshend, John R.

    2016-01-01

    Monitoring current forest characteristics in the taiga-tundra ecotone (TTE) at multiple scales is critical for understanding its vulnerability to structural changes. A 30 m spatial resolution Landsat-based tree canopy cover map has been calibrated and validated in the TTE with reference tree cover data from airborne LiDAR and high resolution spaceborne images across the full range of boreal forest tree cover. This domain-specific calibration model used estimates of forest height to determine reference forest cover that best matched Landsat estimates. The model removed the systematic under-estimation of tree canopy cover greater than 80% and indicated that Landsat estimates of tree canopy cover more closely matched canopies at least 2 m in height rather than 5 m. The validation improved estimates of uncertainty in tree canopy cover in discontinuous TTE forests for three temporal epochs (2000, 2005, and 2010) by reducing systematic errors, leading to increases in tree canopy cover uncertainty. Average pixel-level uncertainties in tree canopy cover were 29.0%, 27.1% and 31.1% for the 2000, 2005 and 2010 epochs, respectively. Maps from these calibrated data improve the uncertainty associated with Landsat tree canopy cover estimates in the discontinuous forests of the circumpolar TTE.

  19. Improving accuracy of portion-size estimations through a stimulus equivalence paradigm.

    PubMed

    Hausman, Nicole L; Borrero, John C; Fisher, Alyssa; Kahng, SungWoo

    2014-01-01

    The prevalence of obesity continues to increase in the United States (Gordon-Larsen, The, & Adair, 2010). Obesity can be attributed, in part, to overconsumption of energy-dense foods. Given that overeating plays a role in the development of obesity, interventions that teach individuals to identify and consume appropriate portion sizes are warranted. Specifically, interventions that teach individuals to estimate portion sizes correctly without the use of aids may be critical to the success of nutrition education programs. The current study evaluated the use of a stimulus equivalence paradigm to teach 9 undergraduate students to estimate portion size accurately. Results suggested that the stimulus equivalence paradigm was effective in teaching participants to make accurate portion size estimations without aids, and improved accuracy was observed in maintenance sessions that were conducted 1 week after training. Furthermore, 5 of 7 participants estimated the target portion size of novel foods during extension sessions. These data extend existing research on teaching accurate portion-size estimations and may be applicable to populations who seek treatment (e.g., overweight or obese children and adults) to teach healthier eating habits. © Society for the Experimental Analysis of Behavior.

  20. General Aviation Aircraft Reliability Study

    NASA Technical Reports Server (NTRS)

    Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

    2001-01-01

    This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

  1. Improving the representation of Arctic photosynthesis in Earth system models

    NASA Astrophysics Data System (ADS)

    Rogers, A.; Serbin, S.; Ely, K.; Sloan, V. L.; Wyatt, R. A.; Kubien, D. S.; Ali, A. A.; Xu, C.; Wullschleger, S. D.

    2015-12-01

    The primary goal of Earth System Models (ESMs) is to improve understanding and projection of future global change. In order to do this they must accurately represent the carbon fluxes associated with the terrestrial carbon cycle. Although Arctic carbon fluxes are small relative to global carbon fluxes, their uncertainty is large. As part of a multidisciplinary project to improve the representation of the Arctic in ESMs (Next Generation Ecosystem Experiments - Arctic), we are examining the photosynthetic parameterization of the Arctic plant functional type (PFT) in ESMs. Photosynthetic CO2 uptake is well described by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis. Most ESMs use a derivation of the FvCB model to calculate gross primary productivity. Two key parameters required by the FvCB model are an estimate of the maximum rate of carboxylation by the enzyme Rubisco (Vc,max) and the maximum rate of electron transport (Jmax). In ESMs the parameter Vc,max is usually fixed for a given PFT. Only four ESMs currently have an explicit Arctic PFT, and the data used to derive Vc,max for the Arctic PFT in these models rely on small data sets and unjustified assumptions. We examined the derivation of Vc,max and Jmax in current Arctic PFTs and estimated Vc,max and Jmax for 7 species representing both dominant vegetation and key Arctic PFTs growing on the Barrow Environmental Observatory, Barrow, AK. The values of Vc,max currently used to represent Arctic PFTs in ESMs are 70% lower than the values we measured in these species. Examination of the derivation of Vc,max in ESMs showed that the relatively low values result from underestimating both leaf N content and the investment of that N in Rubisco. Contemporary temperature response functions for Vc,max also appear to underestimate Vc,max at low temperature. ESMs typically use a single multiplier (JVratio) to convert Vc,max to Jmax for all PFTs. We found that the JVratio of Arctic plants is higher than current estimates, suggesting that the Arctic PFT will be more responsive to rising carbon dioxide than currently projected. Our data suggest that the Arctic tundra has a much greater capacity for CO2 uptake, particularly at low temperature, and will be more CO2 responsive than is currently represented in ESMs.
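
    The FvCB structure referred to above can be sketched compactly; the kinetic constants below are typical literature values at 25 °C used as placeholder assumptions, not measurements from this study:

        # Minimal FvCB leaf photosynthesis sketch showing where Vc,max and
        # Jmax enter. Units: umol/mol for Ci, Gamma*, Kc; mmol/mol for O, Ko.
        import numpy as np

        def fvcb(ci, vcmax, jmax, par=1500.0, gamma_star=42.75, kc=404.9,
                 ko=278.4, o=210.0, rd=1.0, theta=0.7, alpha=0.3):
            # Electron transport rate J from irradiance (non-rectangular hyperbola)
            i2 = alpha * par
            j = (i2 + jmax - np.sqrt((i2 + jmax) ** 2
                                     - 4 * theta * i2 * jmax)) / (2 * theta)
            ac = vcmax * (ci - gamma_star) / (ci + kc * (1 + o / ko))  # Rubisco-limited
            aj = j * (ci - gamma_star) / (4 * ci + 8 * gamma_star)     # RuBP-limited
            return min(ac, aj) - rd                                    # net assimilation

        # A 70% lower Vc,max (with proportionally lower Jmax) cuts modeled
        # net uptake by a comparable fraction at this Ci:
        print(fvcb(250.0, vcmax=100.0, jmax=180.0))
        print(fvcb(250.0, vcmax=30.0, jmax=54.0))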

  2. A Method for Improving Temporal and Spatial Resolution of Carbon Dioxide Emissions

    NASA Astrophysics Data System (ADS)

    Gregg, J. S.; Andres, R. J.

    2003-12-01

    Using United States data, a method is developed to estimate the monthly consumption of solid, liquid and gaseous fossil fuels for each state in the union. This technique employs monthly sales data to estimate the relative monthly proportions of the total annual national fossil fuel use. These proportions are then used to estimate the total monthly carbon dioxide emissions for each state. To assess the success of this technique, the results from this method are compared with the data obtained from other independent methods. To determine the temporal success of the method, the resulting national time series is compared to the model produced by the Carbon Dioxide Information Analysis Center (CDIAC) and the current model being developed by T. J. Blasing and C. Broniak at the Oak Ridge National Laboratory (ORNL). The University of North Dakota (UND) method fits well temporally with the results of the CDIAC and current ORNL research. To determine the success of the spatial component, the individual state results are compared to the annual state totals calculated by ORNL. Using ordinary least squares regression, the annual state totals of this method are plotted against the ORNL data. This allows a direct comparison of estimates in the form of ordered pairs against a one-to-one ideal correspondence line, and allows for easy detection of outliers in the results obtained by this estimation method. Analyzing the residuals of the linear regression model for each type of fuel permits an improved understanding of the strengths and shortcomings of the spatial component of this estimation technique. Spatially, the model is successful when compared to the current ORNL research. The primary advantages of this method are its ease of implementation and universal applicability. In general, this technique compares favorably to more labor-intensive methods that rely on more detailed data, which are generally not available for most countries in the world. The methodology used here will be applied to other nations to better understand their sub-annual cycle and sub-national spatial distribution of carbon dioxide emissions from fossil fuel consumption. Better understanding of the cycle will lead to better models for predicting and responding to the global environmental changes currently observed and anticipated.
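
    The disaggregation step itself is simple arithmetic: monthly sales shares scale the annual total into monthly emissions. A minimal sketch with placeholder numbers:

        # Monthly disaggregation of an annual emissions total by sales shares.
        import numpy as np

        annual_emissions = 1500.0                     # placeholder annual total
        monthly_sales = np.array([110, 95, 100, 90, 85, 80,
                                  85, 88, 92, 100, 105, 120], dtype=float)

        shares = monthly_sales / monthly_sales.sum()  # relative monthly proportions
        monthly_emissions = annual_emissions * shares
        assert np.isclose(monthly_emissions.sum(), annual_emissions)
        print(np.round(monthly_emissions, 1))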

  3. The safety of high-hazard water infrastructures in the U.S. Pacific Northwest in a changing climate

    NASA Astrophysics Data System (ADS)

    Chen, X.; Hossain, F.; Leung, L. R.

    2017-12-01

    The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by designing infrastructure for the Probable Maximum Precipitation (PMP). Recently, several numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics have not been investigated, and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering practice and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to five statistically downscaled CMIP5 model outputs, producing an ensemble of PMP estimates in the Pacific Northwest (PNW) during the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified against the traditional estimates. PMP in the PNW will increase by 50%±30% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability through increased sea surface temperature, with minor contributions from changes in storm efficiency in the future. Moisture track changes tend to reduce the future PMP. Compared with extreme precipitation, PMP exhibits higher internal variability. Thus, long-term records of high-quality data in both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.

  4. Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xiaodong; Hossain, Faisal; Leung, Lai-Yung

    2017-12-22

    The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by designing infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated, and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimates for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparing them with the traditional estimates. PMP in the PNW will increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Moisture track changes tend to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variability. Thus, high-quality data of both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
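
    The moisture-maximization step at the core of traditional PMP estimation scales an observed storm by the ratio of maximum to observed precipitable water; a minimal sketch with illustrative values:

        # Traditional moisture maximization: PMP-candidate depth = observed
        # storm depth * (max precipitable water / storm precipitable water).
        def moisture_maximized(p_storm_mm, w_storm_mm, w_max_mm):
            return p_storm_mm * (w_max_mm / w_storm_mm)

        # A 300 mm storm that occurred with 40 mm precipitable water, at a
        # site whose climatological maximum precipitable water is 60 mm:
        print(f"Maximized depth: {moisture_maximized(300.0, 40.0, 60.0):.0f} mm")
        # Under warming, w_max rises with temperature, pushing PMP upward.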

  5. Multiparameter elastic full waveform inversion with facies-based constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing

    2018-06-01

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize FWI beyond improved acoustic imaging, as in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface-collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches include such constraints either as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of the estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions, and model circumstances are used to verify the effectiveness of the proposed method.
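
    A hedged sketch of the Bayesian facies step: classify each model cell from its inverted elastic parameters using Gaussian likelihoods and prior proportions (all facies statistics below are invented, and this simplification ignores the radiation-pattern uncertainty weighting used in the paper):

        # Per-cell Bayesian facies classification from inverted (vp, vs).
        import numpy as np

        def facies_posterior(m, means, sds, priors):
            """m: inverted (vp, vs) at one cell; means/sds: per-facies stats."""
            like = np.exp(-0.5 * np.sum(((m - means) / sds) ** 2, axis=1))
            post = priors * like
            return post / post.sum()

        means = np.array([[2500.0, 1200.0],    # facies 0: shale (placeholder)
                          [3200.0, 1800.0]])   # facies 1: sand (placeholder)
        sds = np.array([[150.0, 100.0],
                        [150.0, 100.0]])
        print(facies_posterior(np.array([3100.0, 1750.0]), means, sds,
                               priors=np.array([0.6, 0.4])))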

  6. The NASA Lightning Nitrogen Oxides Model (LNOM): Application to Air Quality Modeling

    NASA Technical Reports Server (NTRS)

    Koshak, William; Peterson, Harold; Khan, Maudood; Biazar, Arastoo; Wang, Lihua

    2011-01-01

    Recent improvements to the NASA Marshall Space Flight Center Lightning Nitrogen Oxides Model (LNOM) and its application to the Community Multiscale Air Quality (CMAQ) modeling system are discussed. The LNOM analyzes Lightning Mapping Array (LMA) and National Lightning Detection Network (NLDN) data to estimate the raw (i.e., unmixed and otherwise environmentally unmodified) vertical profile of lightning NO(x) (= NO + NO2). The latest LNOM estimates of lightning channel length distributions, lightning 1-m segment altitude distributions, and the vertical profile of lightning NO(x) are presented. The primary improvement to the LNOM is the inclusion of non-return-stroke lightning NO(x) production due to: (1) hot-core stepped and dart leaders, (2) the stepped leader corona sheath, (3) K-changes, (4) continuing currents, and (5) M-components. The impact of including LNOM estimates of lightning NO(x) for an August 2006 run of CMAQ is discussed.

  7. Research and Development of Automated Eddy Current Testing for Composite Overwrapped Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Carver, Kyle L.; Saulsberry, Regor L.; Nichols, Charles T.; Spencer, Paul R.; Lucero, Ralph E.

    2012-01-01

    Eddy current testing (ET) was used to scan bare metallic liners used in the fabrication of composite overwrapped pressure vessels (COPVs) for flaws that could result in premature failure of the vessel. The main goal of the project was to make improvements in the areas of scan signal-to-noise ratio, sensitivity of flaw detection, and estimation of flaw dimensions. Scan settings were optimized, resulting in an increased signal-to-noise ratio. Previously undiscovered flaw indications were observed and investigated. Threshold criteria were determined for the system software's flaw reporting, and estimates of flaw dimensions were brought to an acceptable level of accuracy. Computer algorithms were written to import data for filtering, and a numerical derivative filtering algorithm was evaluated.

  8. The impact of reflectivity correction and conversion methods to improve precipitation estimation by weather radar for an extreme low-land Mesoscale Convective System

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-05-01

    Between 25 and 27 August 2010, a long-duration mesoscale convective system was observed above the Netherlands. For most of the country this led to over 15 hours of near-continuous precipitation, which resulted in total event accumulations exceeding 150 mm in the eastern part of the Netherlands. Such accumulations are among the largest sums ever recorded in this country and gave rise to local flooding. Measuring precipitation by weather radar within such mesoscale convective systems is known to be a challenge, since measurements are affected by multiple sources of error. For the current event, the operational weather radar rainfall product estimated only about 30% of the actual amount of precipitation as measured by rain gauges. In the current presentation we try to identify what gave rise to such large underestimations. In general, weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements, and (2) errors related to the conversion of reflectivity values into rainfall intensity and attenuation estimates. To correct for the first group of errors, the quality of the weather radar reflectivity data was improved by successively correcting for (1) clutter and anomalous propagation, (2) radar calibration, (3) wet radome attenuation, (4) signal attenuation, and (5) the vertical profile of reflectivity. Such consistent corrections are generally not performed by operational meteorological services. Results show a large improvement in the quality of the precipitation data; however, still only ~65% of the observed accumulations was estimated. To further improve the quality of the precipitation estimates, the second group of errors is corrected for by making use of disdrometer measurements taken in close vicinity of the radar. Based on these data, the parameters of a normalized drop size distribution are estimated for the total event as well as for each precipitation type separately (convective, stratiform, and undefined). These are then used to obtain coherent parameter sets for the radar reflectivity-rainfall rate (Z-R) and radar reflectivity-attenuation (Z-k) relationships, specifically applicable to this event. By applying a single parameter set to correct for both sources of error, the quality of the rainfall product improves further, recovering >80% of the observed accumulations. However, differentiating by precipitation type yields no better results than the operational relationships. This leads to the question: how representative are local disdrometer observations for correcting large-scale weather radar measurements? To tackle this question, a Monte Carlo approach was used to generate >10,000 sets of normalized drop size distribution parameters and to assess their impact on the estimated precipitation amounts. Results show that many parameter sets yield improved radar precipitation estimates that closely resemble the observations. However, these optimal sets vary considerably compared to those obtained from the local disdrometer measurements.
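
    The reflectivity-to-rain-rate conversion at issue is a power law Z = aR^b; a minimal sketch using the classic Marshall-Palmer coefficients rather than the event-specific disdrometer-derived set:

        # Convert radar reflectivity (dBZ) to rain rate via Z = a * R^b.
        # a=200, b=1.6 are the classic Marshall-Palmer defaults, used here
        # only for illustration.
        def rain_rate(dbz, a=200.0, b=1.6):
            z = 10.0 ** (dbz / 10.0)          # dBZ -> linear Z [mm^6 m^-3]
            return (z / a) ** (1.0 / b)       # rain rate [mm/h]

        for dbz in (20, 30, 40, 50):
            print(f"{dbz} dBZ -> {rain_rate(dbz):5.1f} mm/h")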

  9. Anxiety Disorders in Childhood: Casting a Nomological Net

    ERIC Educational Resources Information Center

    Weems, Carl F.; Stickle, Timothy R.

    2005-01-01

    Empirical research highlights the need for improving the childhood anxiety disorder diagnostic classification system. In particular, inconsistencies in the stability estimates of childhood anxiety disorders and high rates of comorbidity call into question the utility of the current "DSM" criteria. This paper makes a case for utilizing a…

  10. Improving Attachments of Non-Invasive (Type III) Electronic Data Loggers to Cetaceans

    DTIC Science & Technology

    2014-09-30

    the assessment of tag impact on animal health and well-being. Specifically, we are working to develop methods that will enable the accurate estimates...currently not available for any marine mammal, about animal health and activity has the potential to revolutionize how animals are cared for in these

  11. REVIEW AND EVALUATION OF CURRENT METHODS AND USER NEEDS FOR OTHER STATIONARY COMBUSTION SOURCES

    EPA Science Inventory

    The report gives results of Phase 1 of an effort to develop improved methodologies for estimating area source emissions of air pollutants from stationary combustion sources. The report (1) evaluates Area and Mobile Source (AMS) subsystem methodologies; (2) compares AMS results w...

  12. Improving toxicity extrapolation using molecular sequence similarity: A case study of pyrethroids and the sodium ion channel

    EPA Science Inventory

    A significant challenge in ecotoxicology has been determining chemical hazards to species with limited or no toxicity data. Currently, extrapolation tools like U.S. EPA’s Web-based Interspecies Correlation Estimation (Web-ICE; www3.epa.gov/webice) models categorize toxicity...

  13. Upper-Ocean Processes under the Stratus Cloud Deck in the Southeast Pacific Ocean

    DTIC Science & Technology

    2010-01-01

    resolving Hybrid Coordinate Ocean Model (HYCOM). Both are compared with estimates based on Woods Hole Oceanographic Institution (WHOI) Improved...Jason-1 and Jason-2 sea surface heights and geostrophic currents (computed from absolute topography) produced by Segment Sol Multimissions d'Altimétrie

  14. Improving post-detonation energetics residues estimations for the Life Cycle Environmental Assessment process for munitions.

    EPA Science Inventory

    The Life Cycle Environmental Assessment (LCEA) process for military munitions tracks possible environmental impacts incurred during all phases of the life of a munition. The greatest energetics-based emphasis in the current LCEA process is on manufacturing. A review of recent LCE...

  15. Quantifying short-lived events in multistate ionic current measurements.

    PubMed

    Balijepalli, Arvind; Ettedgui, Jessica; Cornio, Andrew T; Robertson, Joseph W F; Cheung, Kin P; Kasianowicz, John J; Vaz, Canute

    2014-02-25

    We developed a generalized technique to characterize polymer-nanopore interactions via single channel ionic current measurements. Physical interactions between analytes, such as DNA, proteins, or synthetic polymers, and a nanopore cause multiple discrete states in the current. We modeled the transitions of the current to individual states with an equivalent electrical circuit, which allowed us to describe the system response. This enabled the estimation of short-lived states that are presently not characterized by existing analysis techniques. Our approach considerably improves the range and resolution of single-molecule characterization with nanopores. For example, we characterized the residence times of synthetic polymers that are three times shorter than those estimated with existing algorithms. Because the molecule's residence time follows an exponential distribution, we recover nearly 20-fold more events per unit time that can be used for analysis. Furthermore, the measurement range was extended from 11 monomers to as few as 8. Finally, we applied this technique to recover a known sequence of single-stranded DNA from previously published ion channel recordings, identifying discrete current states with subpicoampere resolution.
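
    Because residence times in a state are exponentially distributed, extending the resolvable range to shorter events directly increases the number of usable events; a sketch of dwell-time estimation in the presence of a detection dead time, on synthetic data:

        # Exponential dwell-time estimation with a detection dead time.
        import numpy as np

        rng = np.random.default_rng(5)
        tau_true = 120e-6                          # true mean residence time [s]
        dwells = rng.exponential(tau_true, 5000)   # synthetic event durations

        dead_time = 50e-6                          # shortest detectable event
        kept = dwells[dwells > dead_time]
        # For a truncated exponential, E[t | t > d] = tau + d (memorylessness),
        # so subtract the dead time from the mean of the surviving events:
        tau_hat = kept.mean() - dead_time
        print(f"events kept: {kept.size}, tau estimate: {tau_hat*1e6:.0f} us")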

  16. Valuing preferences over stormwater management outcomes including improved hydrologic function

    NASA Astrophysics Data System (ADS)

    Londoño Cadavid, Catalina; Ando, Amy W.

    2013-07-01

    Stormwater runoff causes environmental problems such as flooding, soil erosion, and water pollution. Conventional stormwater management has focused primarily on flood reduction, while a new generation of decentralized stormwater solutions yields ancillary benefits such as healthier aquatic habitat, improved surface water quality, and increased water table recharge. Previous research has estimated values for flood reduction from stormwater management, but no estimates exist for the willingness to pay (WTP) for some of the other environmental benefits of alternative approaches to stormwater control. This paper uses a choice experiment survey of households in Champaign-Urbana, Illinois, to estimate the values of several attributes of stormwater management outcomes. We analyzed data from 131 surveyed households in randomly selected neighborhoods. We find that people value reduced basement flooding more than reductions in yard or street flooding, but WTP for basement flood reduction in the area only exists if individuals are currently experiencing significant flooding themselves. Citizens value both improved water quality and improved hydrologic function and aquatic habitat from runoff reduction. Thus, widespread investment in low impact development stormwater solutions could have very large total benefits, and stormwater managers should be wary of policies and infrastructure plans that reduce flooding at the expense of water quality and aquatic habitat.

  17. Improved equivalent magnetic network modeling for analyzing working points of PMs in interior permanent magnet machine

    NASA Astrophysics Data System (ADS)

    Guo, Liyan; Xia, Changliang; Wang, Huimin; Wang, Zhiqiang; Shi, Tingna

    2018-05-01

    As is well known, the armature current will be ahead of the back electromotive force (back-EMF) under load conditions in the interior permanent magnet (PM) machine. This advanced armature current produces a demagnetizing field, which may easily cause irreversible demagnetization in the PMs. To estimate the working points of the PMs more accurately and take demagnetization into consideration in the early design stage of a machine, an improved equivalent magnetic network model is established in this paper. Each PM under each magnetic pole is segmented, and the networks in the rotor pole shoe are refined, which makes a more precise model of the flux path in the rotor pole shoe possible. The working point of each PM under each magnetic pole can be calculated accurately by the established improved equivalent magnetic network model. Meanwhile, the calculated results are compared with those calculated by FEM. The effects of the d-axis and q-axis components of the armature current, the air-gap length, and the flux barrier size on the working points of the PMs are analyzed with the improved equivalent magnetic network model.

  18. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. 
The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
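
    As a rough illustration of the Fisher information metric at the core of this framework, the sketch below approximates the information matrix of a model's parameters from finite-difference output sensitivities, assuming output-error estimation with i.i.d. Gaussian measurement noise; the one-RC circuit and all parameter values are hypothetical stand-ins, not the dissertation's models.

    import numpy as np

    def fisher_information(simulate, params, sigma=0.01, rel_step=1e-4):
        # simulate: maps a parameter vector to an output trace (e.g., voltage
        # under a chosen current input). Note that the FIM below depends on
        # the nominal parameter values -- exactly the tautology noted above.
        y0 = simulate(np.asarray(params, float))
        S = np.empty((y0.size, len(params)))       # sensitivities dy/dtheta_j
        for j in range(len(params)):
            p = np.asarray(params, float).copy()
            h = rel_step * max(abs(p[j]), 1e-12)
            p[j] += h
            S[:, j] = (simulate(p) - y0) / h       # forward finite difference
        return S.T @ S / sigma**2                  # FIM for Gaussian output noise

    # Toy example: a hypothetical one-RC equivalent circuit's voltage response
    # to a 1 A step; inputs maximizing, e.g., det(FIM) are "optimally shaped".
    t = np.linspace(0.0, 10.0, 200)
    def simulate(p):
        r0, r1, c1 = p
        return r0 + r1 * (1.0 - np.exp(-t / (r1 * c1)))
    print(np.linalg.det(fisher_information(simulate, [0.05, 0.02, 2000.0])))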

  19. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method is also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used for motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
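
    A minimal sketch of exhaustive block-matching motion estimation with a sum-of-absolute-differences cost, the generic form of the technique named above; the block size, search radius, and assumption of float-valued grayscale frames are illustrative, not the paper's exact settings.

    import numpy as np

    def block_match(ref, cur, block=8, search=4):
        # One (dy, dx) motion vector per block of `cur` relative to `ref`,
        # chosen by exhaustive search over a small window with a
        # sum-of-absolute-differences cost.
        H, W = cur.shape
        vectors = np.zeros((H // block, W // block, 2), dtype=int)
        for by in range(0, H - block + 1, block):
            for bx in range(0, W - block + 1, block):
                patch = cur[by:by + block, bx:bx + block]
                best_cost, best_v = np.inf, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if 0 <= y <= H - block and 0 <= x <= W - block:
                            cost = np.abs(ref[y:y + block, x:x + block] - patch).sum()
                            if cost < best_cost:
                                best_cost, best_v = cost, (dy, dx)
                vectors[by // block, bx // block] = best_v
        return vectors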

  20. The effects of survey question wording on rape estimates: evidence from a quasi-experimental design.

    PubMed

    Fisher, Bonnie S

    2009-02-01

    The measurement of rape is among the leading methodological issues in the violence against women field. Methodological discussion continues to focus on decreasing measurement errors and improving the accuracy of rape estimates. The current study used a quasi-experimental design to examine the effect of survey question wording on estimates of completed and attempted rape and verbal threats of rape. Specifically, the study statistically compares self-reported rape estimates from two nationally representative studies of college women's sexual victimization experiences, the National College Women Sexual Victimization study and the National Violence Against College Women study. Results show significant differences between the two sets of rape estimates, with National Violence Against College Women study rape estimates ranging from 4.4% to 10.4% lower than the National College Women Sexual Victimization study rape estimates. Implications for future methodological research are discussed.

  1. Proportion of patients needing an implantable cardioverter defibrillator on the basis of current guidelines: impact on healthcare resources in Italy and the USA. Data from the ALPHA study registry.

    PubMed

    Pedretti, Roberto F E; Curnis, Antonio; Massa, Riccardo; Morandi, Fabrizio; Tritto, Massimo; Manca, Lorenzo; Occhetta, Eraldo; Molon, Giulio; De Ferrari, Gaetano M; Sarzi Braga, Simona; Raciti, Giovanni; Klersy, Catherine; Salerno-Uriarte, Jorge A

    2010-08-01

    Implantable cardioverter defibrillators (ICD) improve survival in selected patients with left ventricular dysfunction or heart failure (HF). The objective is to estimate the number of ICD candidates and to assess the potential impact on public health expenditure in Italy and the USA. Data from 3513 consecutive patients (ALPHA study registry) were screened. A model based on international guidelines inclusion criteria and epidemiological data was used to estimate the number of eligible patients. A comparison with the current ICD implant rate was made to estimate the incremental rate necessary to treat eligible patients within 5 years. Up to 54% of HF patients are estimated to be eligible for ICD implantation. An implantation policy based on guidelines would significantly increase the number of ICDs to 2671 implants per million inhabitants in Italy and to 4261 in the USA. An annual increment of prophylactic ICD implants of 20% in the USA and 68% in Italy would be necessary to treat all indicated patients in a 5-year timeframe. An ICD implantation policy based on current evidence may have a significant impact on public health expenditure. Effective risk stratification may be useful in order to maximize the benefit of ICD therapy and its cost-effectiveness in primary prevention.

  2. Assessing the Benefits Provided by SWOT Data Towards Estimating Reservoir Residence Time in the Mekong River Basin

    NASA Astrophysics Data System (ADS)

    Bonnema, M.; Hossain, F.

    2016-12-01

    The Mekong River Basin is undergoing rapid hydropower development. Nine dams are planned on the main stem of the Mekong and many more on its extensive tributaries. Understanding the effects that current and future dams have on the river system and water cycle as a whole is vital for the millions of people living in the basin. Reservoir residence time, the amount of time water spends stored in a reservoir, is a key parameter in investigating these impacts. The forthcoming Surface Water and Ocean Topography (SWOT) mission is poised to provide an unprecedented amount of surface water observations. SWOT, when augmented by current satellite missions, will provide the necessary information to estimate the residence time of reservoirs across the entire basin in a more comprehensive way than ever before. In this study, we first combine observations from current satellite missions (altimetry, spectral imaging, precipitation) to estimate the residence times of existing reservoirs. We then use this information to project how future reservoirs will increase the residence time of the river system. Next, we explore how SWOT observations can be used to improve residence time estimation by examining the accuracy of reservoir surface area and elevation observations as well as the accuracy of river discharge observations.
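
    For orientation, the bulk residence time calculation this kind of study builds on is simply stored volume over mean outflow; the sketch below uses hypothetical numbers and omits the satellite retrieval steps that would supply both quantities.

    def residence_time_days(storage_m3, mean_outflow_m3s):
        # Bulk residence time: stored volume divided by mean outflow.
        # Storage can come from SWOT-style surface area and elevation
        # observations; outflow from satellite-derived discharge estimates.
        return storage_m3 / mean_outflow_m3s / 86400.0

    # Hypothetical reservoir: 10 km^3 of storage draining at 2,000 m^3/s.
    print(residence_time_days(10e9, 2000.0))   # about 58 days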

  3. Improving Assimilated Global Climate Data Using TRMM and SSM/I Rainfall and Moisture Data

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.; Olson, William S.

    1999-01-01

    Current global analyses contain significant errors in primary hydrological fields such as precipitation, evaporation, and related cloud and moisture in the tropics. Work has been underway at NASA's Data Assimilation Office to explore the use of TRMM and SSM/I-derived rainfall and total precipitable water (TPW) data in global data assimilation to directly constrain these hydrological parameters. We found that assimilating these data types improves not only the precipitation and moisture estimates but also key climate parameters directly linked to convection such as the outgoing longwave radiation, clouds, and the large-scale circulation in the tropics. We will present results showing that assimilating TRMM and SSM/I 6-hour averaged rain rates and TPW estimates significantly reduces the state-dependent systematic errors in assimilated products. Specifically, rainfall assimilation improves cloud and latent heating distributions, which, in turn, improves the cloudy-sky radiation and the large-scale circulation, while TPW assimilation reduces moisture biases to improve radiation in clear-sky regions. Rainfall and TPW assimilation also improves tropical forecasts beyond 1 day.

  4. Electrical stimulation therapy for dysphagia: a follow-up survey of USA dysphagia practitioners.

    PubMed

    Barikroo, Ali; Carnaby, Giselle; Crary, Michael

    2017-12-01

    The aim of this study was to compare current application, practice patterns, clinical outcomes, and professional attitudes of dysphagia practitioners regarding electrical stimulation (e-stim) therapy with similar data obtained in 2005. A web-based survey was posted on the American Speech-Language-Hearing Association Special Interest Group 13 webpage for 1 month. A total of 271 survey responses were analyzed and descriptively compared with the archived responses from the 2005 survey. Results suggested that e-stim application increased by 47% among dysphagia practitioners over the last 10 years. The frequency of weekly e-stim therapy sessions decreased while the reported total number of treatment sessions increased between the two surveys. Advancement in oral diet was the most commonly reported improvement in both surveys. Overall, reported satisfaction levels of clinicians and patients regarding e-stim therapy decreased. Still, the majority of e-stim practitioners continue to recommend this treatment modality to other dysphagia practitioners. Results from the novel items in the current survey suggested that motor level e-stim (e.g. higher amplitude) is most commonly used during dysphagia therapy with no preferred electrode placement. Furthermore, the majority of clinicians reported high levels of self-confidence regarding their ability to perform e-stim. The results of this survey highlight ongoing changes in application, practice patterns, clinical outcomes, and professional attitudes associated with e-stim therapy among dysphagia practitioners.

  5. An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method.

    PubMed

    Fu, Jingyang; Li, Guangyun; Wang, Li

    2018-04-27

    Unlike GPS, GLONASS, GALILEO and BeiDou-3, the code multipath bias (CMB), which originates at the satellite end and can exceed 1 m, is commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. In order to mitigate its adverse effects on absolute precise applications that use the code measurements, we propose in this paper an improved correction model to estimate the CMB. Unlike the traditional model, which considers the correction values to be orbit-type dependent (estimating two sets of values for IGSO and MEO, respectively) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate the corrections for each individual BDS IGSO and MEO satellite and use a denser elevation node separation of 5° to model the CMB variations. Currently, institutions such as IGS-MGEX operate over 120 stations that provide daily BDS observations. These large amounts of data provide adequate support to refine the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that for satellites of the same orbit type, obvious differences can be found in the CMB at the same node and frequency. Results show that the new correction model can improve the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shorten the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improve the PPP positioning accuracy. With our improved correction model, the usage rate of WL ambiguities increases from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR are shortened from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model. In addition, both the traditional and improved CMB models perform better in these respects than a model that does not account for the CMB correction.
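
    A minimal sketch of the elevation-dependent, piecewise-linear correction described above, with a 5° node separation; the per-satellite node values here are placeholders, since the real tables are estimated from the MGEX observations.

    import numpy as np

    # Hypothetical satellite-specific correction table: one CMB value (m) per
    # 5-degree elevation node, as in the improved model (the traditional
    # model shares one 10-degree-node table per orbit type instead).
    nodes_deg = np.arange(0.0, 91.0, 5.0)
    cmb_at_nodes = 0.3 * np.cos(np.radians(nodes_deg))   # placeholder values

    def cmb_correction(elevation_deg):
        # Piecewise-linear interpolation between elevation nodes.
        return np.interp(elevation_deg, nodes_deg, cmb_at_nodes)

    # corrected code observation = raw pseudorange - cmb_correction(elev)
    print(cmb_correction(37.5))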

  6. An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method

    PubMed Central

    Fu, Jingyang; Li, Guangyun; Wang, Li

    2018-01-01

    Unlike GPS, GLONASS, GALILEO and BeiDou-3, the code multipath bias (CMB), which originates at the satellite end and can exceed 1 m, is commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. In order to mitigate its adverse effects on absolute precise applications that use the code measurements, we propose in this paper an improved correction model to estimate the CMB. Unlike the traditional model, which considers the correction values to be orbit-type dependent (estimating two sets of values for IGSO and MEO, respectively) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate the corrections for each individual BDS IGSO and MEO satellite and use a denser elevation node separation of 5° to model the CMB variations. Currently, institutions such as IGS-MGEX operate over 120 stations that provide daily BDS observations. These large amounts of data provide adequate support to refine the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that for satellites of the same orbit type, obvious differences can be found in the CMB at the same node and frequency. Results show that the new correction model can improve the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shorten the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improve the PPP positioning accuracy. With our improved correction model, the usage rate of WL ambiguities increases from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR are shortened from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model. In addition, both the traditional and improved CMB models perform better in these respects than a model that does not account for the CMB correction. PMID:29702559

  7. A global view of shifting cultivation: Recent, current, and future extent

    PubMed Central

    Mertz, Ole; Frolking, Steve; Egelund Christensen, Andreas; Hurni, Kaspar; Sedano, Fernando; Parsons Chini, Louise; Sahajpal, Ritvik; Hansen, Matthew; Hurtt, George

    2017-01-01

    Mosaic landscapes under shifting cultivation, with their dynamic mix of managed and natural land covers, often fall through the cracks in remote sensing–based land cover and land use classifications, as these are unable to adequately capture such landscapes’ dynamic nature and complex spectral and spatial signatures. But information about such landscapes is urgently needed to improve the outcomes of global earth system modelling and large-scale carbon and greenhouse gas accounting. This study combines existing global Landsat-based deforestation data covering the years 2000 to 2014 with very high-resolution satellite imagery to visually detect the specific spatio-temporal pattern of shifting cultivation at a one-degree cell resolution worldwide. The accuracy levels of our classification were high with an overall accuracy above 87%. We estimate the current global extent of shifting cultivation and compare it to other current global mapping endeavors as well as results of literature searches. Based on an expert survey, we make a first attempt at estimating past trends as well as possible future trends in the global distribution of shifting cultivation until the end of the 21st century. With 62% of the investigated one-degree cells in the humid and sub-humid tropics currently showing signs of shifting cultivation—the majority in the Americas (41%) and Africa (37%)—this form of cultivation remains widespread, and it would be wrong to speak of its general global demise in the last decades. We estimate that shifting cultivation landscapes currently cover roughly 280 million hectares worldwide, including both cultivated fields and fallows. While only an approximation, this estimate is clearly smaller than the areas mentioned in the literature which range up to 1,000 million hectares. Based on our expert survey and historical trends we estimate a possible strong decrease in shifting cultivation over the next decades, raising issues of livelihood security and resilience among people currently depending on shifting cultivation. PMID:28886132

  8. A global view of shifting cultivation: Recent, current, and future extent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinimann, Andreas; Mertz, Ole; Frolking, Steve

    Mosaic landscapes under shifting cultivation, with their dynamic mix of managed and natural land covers, often fall through the cracks in remote sensing-based land cover and land use classifications, as these are unable to adequately capture such landscapes' dynamic nature and complex spectral and spatial signatures. But information about such landscapes is urgently needed to improve the outcomes of global earth system modelling and large-scale carbon and greenhouse gas accounting. This study combines existing global Landsat-based deforestation data covering the years 2000 to 2014 with very high-resolution satellite imagery to visually detect the specific spatio-temporal pattern of shifting cultivation at a one-degree cell resolution worldwide. The accuracy levels of our classification were high with an overall accuracy above 87%. We estimate the current global extent of shifting cultivation and compare it to other current global mapping endeavors as well as results of literature searches. Based on an expert survey, we make a first attempt at estimating past trends as well as possible future trends in the global distribution of shifting cultivation until the end of the 21st century. With 62% of the investigated one-degree cells in the humid and sub-humid tropics currently showing signs of shifting cultivation—the majority in the Americas (41%) and Africa (37%)—this form of cultivation remains widespread, and it would be wrong to speak of its general global demise in the last decades. We estimate that shifting cultivation landscapes currently cover roughly 280 million hectares worldwide, including both cultivated fields and fallows. While only an approximation, this estimate is clearly smaller than the areas mentioned in the literature which range up to 1,000 million hectares. Based on our expert survey and historical trends we estimate a possible strong decrease in shifting cultivation over the next decades, raising issues of livelihood security and resilience among people currently depending on shifting cultivation.

  9. A global view of shifting cultivation: Recent, current, and future extent

    DOE PAGES

    Heinimann, Andreas; Mertz, Ole; Frolking, Steve; ...

    2017-09-08

    Mosaic landscapes under shifting cultivation, with their dynamic mix of managed and natural land covers, often fall through the cracks in remote sensing-based land cover and land use classifications, as these are unable to adequately capture such landscapes' dynamic nature and complex spectral and spatial signatures. But information about such landscapes is urgently needed to improve the outcomes of global earth system modelling and large-scale carbon and greenhouse gas accounting. This study combines existing global Landsat-based deforestation data covering the years 2000 to 2014 with very high-resolution satellite imagery to visually detect the specific spatio-temporal pattern of shifting cultivation at a one-degree cell resolution worldwide. The accuracy levels of our classification were high with an overall accuracy above 87%. We estimate the current global extent of shifting cultivation and compare it to other current global mapping endeavors as well as results of literature searches. Based on an expert survey, we make a first attempt at estimating past trends as well as possible future trends in the global distribution of shifting cultivation until the end of the 21st century. With 62% of the investigated one-degree cells in the humid and sub-humid tropics currently showing signs of shifting cultivation—the majority in the Americas (41%) and Africa (37%)—this form of cultivation remains widespread, and it would be wrong to speak of its general global demise in the last decades. We estimate that shifting cultivation landscapes currently cover roughly 280 million hectares worldwide, including both cultivated fields and fallows. While only an approximation, this estimate is clearly smaller than the areas mentioned in the literature which range up to 1,000 million hectares. Based on our expert survey and historical trends we estimate a possible strong decrease in shifting cultivation over the next decades, raising issues of livelihood security and resilience among people currently depending on shifting cultivation.

  10. The Hubble Constant.

    PubMed

    Jackson, Neal

    2007-01-01

    I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. In the last 20 years, much progress has been made and estimates now range between 60 and 75 km s⁻¹ Mpc⁻¹, with most now between 70 and 75 km s⁻¹ Mpc⁻¹, a huge improvement over the factor-of-2 uncertainty which used to prevail. Further improvements which gave a generally agreed margin of error of a few percent rather than the current 10% would be vital input to much other interesting cosmology. There are several programmes which are likely to lead us to this point in the next 10 years.

  11. Adjusting Estimates of the Expected Value of Information for Implementation: Theoretical Framework and Practical Application.

    PubMed

    Andronis, Lazaros; Barton, Pelham M

    2016-04-01

    Value of information (VoI) calculations give the expected benefits of decision making under perfect information (EVPI) or sample information (EVSI), typically on the premise that any treatment recommendations made in light of this information will be implemented instantly and fully. This assumption is unlikely to hold in health care; evidence shows that obtaining further information typically leads to "improved" rather than "perfect" implementation. To present a method of calculating the expected value of further research that accounts for the reality of improved implementation. This work extends an existing conceptual framework by introducing additional states of the world regarding information (sample information, in addition to current and perfect information) and implementation (improved implementation, in addition to current and optimal implementation). The extension allows calculating the "implementation-adjusted" EVSI (IA-EVSI), a measure that accounts for different degrees of implementation. Calculations of implementation-adjusted estimates are illustrated under different scenarios through a stylized case study in non-small cell lung cancer. In the particular case study, the population values for EVSI and IA-EVSI were £25 million and £8 million, respectively; thus, a decision assuming perfect implementation would have overestimated the expected value of research by about £17 million. IA-EVSI was driven by the assumed time horizon and, importantly, the specified rate of change in implementation: the higher the rate, the greater the IA-EVSI and the lower the difference between IA-EVSI and EVSI. Traditionally calculated measures of population VoI rely on unrealistic assumptions about implementation. This article provides a simple framework that accounts for improved, rather than perfect, implementation and offers more realistic estimates of the expected value of research. © The Author(s) 2015.
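
    A toy calculation, under assumed inputs, of how weighting by an implementation trajectory shrinks a population value-of-information estimate; the uptake curve, patient numbers, and discount rate below are hypothetical, and the paper's exact IA-EVSI formulation may differ.

    import numpy as np

    def population_value(per_person_evsi, annual_patients, horizon_years,
                         uptake, discount=0.035):
        # Population value of sample information, weighted by an
        # implementation trajectory: uptake(t) is the fraction of patients
        # whose care actually reflects the new evidence in year t.
        # uptake == 1 everywhere reproduces the perfect-implementation EVSI.
        years = np.arange(horizon_years)
        weights = uptake(years) / (1.0 + discount) ** years
        return per_person_evsi * annual_patients * weights.sum()

    perfect = population_value(100.0, 20000, 10, lambda t: np.ones(t.shape))
    adjusted = population_value(100.0, 20000, 10, lambda t: 1 - np.exp(-0.3 * t))
    # The gap between the two is the overestimate incurred by assuming
    # instant, full implementation.
    print(perfect, adjusted)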

  12. The role of non-invasive cardiovascular imaging in the assessment of cardiovascular risk in rheumatoid arthritis: where we are and where we need to be.

    PubMed

    Fent, Graham J; Greenwood, John P; Plein, Sven; Buch, Maya H

    2017-07-01

    This review examines how cardiovascular disease (CVD) risk is assessed in rheumatoid arthritis (RA) and how non-invasive imaging modalities may improve risk stratification in future. RA is common and patients are at greater risk of CVD than the general population. Cardiovascular (CV) risk stratification is recommended in European guidelines for patients at high and very high CV risk in order to commence preventative therapy. Ideally, such an assessment should be carried out immediately after diagnosis and as part of ongoing long-term patient care in order to improve patient outcomes. The risk profile in RA is different from that of the general population and is not well estimated using conventional clinical CVD risk algorithms, particularly in patients estimated as intermediate CVD risk. Non-invasive imaging techniques may therefore play an important role in improving risk assessment. However, there are currently very limited prognostic data specific to patients with RA to guide clinicians in risk stratification using these imaging techniques. RA is associated with increased risk of CV mortality, mainly attributable to atherosclerotic disease, though in addition, RA is associated with many other disease processes which further contribute to increased CV mortality. There is reasonable evidence for using carotid ultrasound in patients estimated to be at intermediate risk of CV mortality using clinical CVD risk algorithms. Newer imaging techniques such as cardiovascular magnetic resonance and CT offer the potential to improve risk stratification further; however, longitudinal data with hard CVD outcomes are currently lacking.

  13. How drugs are developed and approved by the FDA: current process and future directions.

    PubMed

    Ciociola, Arthur A; Cohen, Lawrence B; Kulkarni, Prasad

    2014-05-01

    This article provides an overview of the FDA's regulatory processes for drug development and approval, and the estimated costs associated with the development of a drug, and also examines the issues and challenges facing the FDA in the near future. A literature search was performed using MEDLINE to summarize the current FDA drug approval processes and future directions. MEDLINE was further utilized to search for all cost analysis studies performed to evaluate pharmaceutical industry R&D productivity and drug development cost estimates. While drug development remains high-risk and spans multiple years, the FDA drug review and approval process has improved, with the median approval time for new molecular drugs reduced from 19 months to 10 months. The overall cost of developing a drug remains quite high and has been estimated to range from $868M to $1,241M USD. Several new laws have been enacted, including the FDA Safety and Innovation Act (FDASIA) of 2012, which is designed to improve the drug approval process and enhance access to new medicines. The FDA's improved processes for drug approval and post-market surveillance have achieved the goal of providing patients with timely access to effective drugs while minimizing the risk of drug-related harm. The FDA drug approval process is not without controversy, as a number of well-known gastroenterology drugs have been withdrawn from the US market over the past few years. With the approval of the new FDASIA law, the FDA will continue to improve their processes and, working together with the ACG through the FDA-Related Matters Committee, continue to develop safe and effective drugs for our patients.

  14. A Vision-Based Relative Navigation Approach for Autonomous Multirotor Aircraft

    NASA Astrophysics Data System (ADS)

    Leishman, Robert C.

    Autonomous flight in unstructured, confined, and unknown GPS-denied environments is a challenging problem. Solutions could be tremendously beneficial for scenarios that require information about areas that are difficult to access and that present substantial risk. The goal of this research is to develop a new framework that enables improved solutions to this problem and to validate the approach with experiments using a hardware prototype. In Chapter 2 we examine the consequences and practical aspects of using an improved dynamic model for multirotor state estimation, using only IMU measurements. The improved model correctly explains the measurements available from the accelerometers on a multirotor. We provide hardware results demonstrating the improved attitude, velocity, and even position estimates that can be achieved through the use of this model. In Chapter 3 we propose a new architecture that mitigates some of the challenges constraining GPS-denied aerial flight. At the core, the approach combines visual graph-SLAM with a multiplicative extended Kalman filter (MEKF). More importantly, we depart from the common practice of estimating global states and instead keep the position and yaw states of the MEKF relative to the current node in the map. This relative navigation approach provides a tremendous benefit compared to maintaining estimates with respect to a single global coordinate frame. We discuss the architecture of this new system and provide important details for each component. We verify the approach with goal-directed autonomous flight-test results. The MEKF is the basis of the new relative navigation approach and is detailed in Chapter 4. We derive the relative filter and show how the states must be augmented and marginalized each time a new node is declared. The relative estimation approach is verified using hardware flight test results accompanied by comparisons to motion capture truth. Additionally, flight results with estimates in the control loop are provided. We believe that the relative, vision-based framework described in this work is an important step in furthering the capabilities of indoor aerial navigation in confined, unknown environments. Current approaches incur difficult problems because they require globally referenced states. Utilizing a relative approach allows more flexibility as the critical, real-time processes of localization and control do not depend on computationally-demanding optimization and loop-closure processes.
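
    A sketch of the bookkeeping behind the relative-navigation idea described above: position and yaw are kept relative to the current map node, and declaring a new node hands the accumulated transform to the slow graph optimizer while the real-time estimate restarts at zero. This is a planar (x, y, yaw) simplification with hypothetical names, not the thesis's full MEKF.

    import numpy as np

    class RelativeNav:
        def __init__(self):
            self.rel = np.zeros(3)        # [x, y, yaw] w.r.t. current node
            self.edges = []               # node-to-node transforms for graph-SLAM

        def declare_new_node(self):
            # Hand the accumulated relative transform to the (slow, offline)
            # graph optimizer and restart the real-time estimate at zero.
            self.edges.append(self.rel.copy())
            self.rel = np.zeros(3)

        def global_pose(self):
            # Only needed for visualization: compose all edges with the
            # current relative state. Real-time control never depends on it.
            x, y, yaw = 0.0, 0.0, 0.0
            for ex, ey, eyaw in self.edges + [self.rel]:
                x += ex * np.cos(yaw) - ey * np.sin(yaw)
                y += ex * np.sin(yaw) + ey * np.cos(yaw)
                yaw += eyaw
            return x, y, yaw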

  15. Improved Spatial Registration and Target Tracking Method for Sensors on Multiple Missiles.

    PubMed

    Lu, Xiaodong; Xie, Yuting; Zhou, Jun

    2018-05-27

    Motivated by the fact that current spatial registration methods are unsuitable for three-dimensional (3-D) sensors on high-dynamic platforms, this paper focuses on estimating the registration errors of cooperative missiles and the motion states of a maneuvering target. Two types of errors are discussed: sensor measurement biases and attitude biases. Firstly, an improved Kalman Filter in Earth-Centered Earth-Fixed coordinates (ECEF-KF) is proposed to estimate the deviations mentioned above, and the outcomes are further compensated into the error terms. Secondly, the Pseudo Linear Kalman Filter (PLKF) and a nonlinear scheme, the Unscented Kalman Filter (UKF) with modified inputs, are employed for target tracking. The convergence of the filtering results is monitored by a position-judgement logic, and a first-order low-pass filter is selectively introduced before compensation to suppress jitter in the estimates. In the simulation, the ECEF-KF enhancement is shown to improve the accuracy and robustness of the spatial alignment, while the conditional-compensation-based PLKF method delivers the best performance in target tracking.
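
    The first-order low-pass filter mentioned above reduces, in its simplest discrete form, to exponential smoothing; the smoothing constant below is a hypothetical choice.

    def low_pass(prev, new_estimate, alpha=0.1):
        # Discrete first-order low-pass filter: smaller alpha suppresses
        # more jitter in the bias estimates before they are compensated,
        # at the cost of slower response to genuine changes.
        return prev + alpha * (new_estimate - prev)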

  16. Improved Prediction of Quasi-Global Vegetation Conditions Using Remotely-Sensed Surface Soil Moisture

    NASA Technical Reports Server (NTRS)

    Bolten, John; Crow, Wade

    2012-01-01

    The added value of satellite-based surface soil moisture retrievals for agricultural drought monitoring is assessed by calculating the lagged rank correlation between remotely-sensed vegetation indices (VI) and soil moisture estimates obtained both before and after the assimilation of surface soil moisture retrievals derived from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) into a soil water balance model. Higher soil moisture/VI lag correlations imply an enhanced ability to predict future vegetation conditions using estimates of current soil moisture. Results demonstrate that the assimilation of AMSR-E surface soil moisture retrievals substantially improve the performance of a global drought monitoring system - particularly in sparsely-instrumented areas of the world where high-quality rainfall observations are unavailable.
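
    A minimal sketch of the lagged rank correlation used as the skill measure here, assuming paired time series and SciPy's Spearman implementation; variable names are illustrative.

    import numpy as np
    from scipy.stats import spearmanr

    def lagged_rank_corr(soil_moisture, vi, lag):
        # Spearman rank correlation between soil moisture and a vegetation
        # index observed `lag` steps later: higher values imply more skill
        # in predicting future vegetation condition from current moisture.
        sm, v = np.asarray(soil_moisture), np.asarray(vi)
        if lag > 0:
            sm, v = sm[:-lag], v[lag:]
        rho, _ = spearmanr(sm, v)
        return rho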

  17. Lightning Reporting at 45th Weather Squadron: Recent Improvements

    NASA Technical Reports Server (NTRS)

    Finn, Frank C.; Roeder, William P.; Buchanan, Michael D.; McNamara, Todd M.; McAllenan, Michael; Winters, Katherine A.; Fitzpatrick, Michael E.; Huddleston, Lisa L.

    2010-01-01

    The 45th Weather Squadron (45 WS) provides daily lightning reports to space launch customers at CCAFS/KSC. These reports are provided to assess the need to inspect the electronics of satellite payloads, space launch vehicles, and ground support equipment for induced current damage from nearby lightning strokes. The 45 WS has made several improvements to the lightning reports during 2008-2009. The 4DLSS, implemented in April 2008, provides all lightning strokes as opposed to just one stroke per flash as done by the previous system. The 45 WS discovered that the peak current was being truncated to the nearest kiloamp in the database used to generate the daily lightning reports, which led to an underestimate of up to 4% in the peak current for average lightning. Correcting this error eliminated the underestimate. The 45 WS and their mission partners developed lightning location error ellipses for 99% and 95% location accuracies tailored to each individual stroke and began providing them in the spring of 2009. The new procedure provides the distance from the point of interest to the best location of the stroke (the center of the error ellipse) and the distance to the closest edge of the ellipse. This information is now included in the lightning reports, along with the peak current of the stroke. The initial method of calculating the error ellipses could only be used during normal duty hours, i.e., not during nights, weekends, or holidays. This method was improved later to provide lightning reports in near real-time, 24/7. The calculation of the distance to the closest point on the ellipse was also significantly improved later. Other improvements were also implemented. A new method to calculate the probability of any nearby lightning stroke being within any radius of any point of interest was developed and is being implemented. This may supersede the use of location error ellipses. The 45 WS is pursuing adding data from nine NLDN sensors into 4DLSS in real-time. This will overcome the problem of 4DLSS missing some of the strong local strokes. This will also improve the location accuracy, reduce the size and eccentricity of the location error ellipses, and reduce the probability of nearby strokes being inside the areas of interest when few of the 4DLSS sensors are used in the stroke solution. This will not reduce 4DLSS performance when most of the 4DLSS sensors are used in the stroke solution. Finally, several possible future improvements were discussed, especially for improving the peak current estimate and its error estimate, and for upgrading the 4DLSS, along with possible approaches to both of these goals.
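
    One plausible way to compute the probability of a stroke being within a radius of a point of interest from its location error ellipse is to treat the location solution as bivariate Gaussian and sample it, as sketched below; this construction is an assumption for illustration, not necessarily the 45 WS method.

    import numpy as np

    def prob_within_radius(stroke_xy, poi_xy, radius, semi_major, semi_minor,
                           angle_rad, conf=0.99, n=100000, seed=0):
        # Monte Carlo probability that a stroke's true location falls
        # within `radius` of a point of interest, given an error ellipse
        # whose axes are quoted at confidence level `conf`.
        k = np.sqrt(-2.0 * np.log(1.0 - conf))      # quoted axes -> 1-sigma
        rng = np.random.default_rng(seed)
        pts = rng.normal(size=(n, 2)) * [semi_major / k, semi_minor / k]
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        pts = pts @ np.array([[c, s], [-s, c]]) + stroke_xy
        dist = np.linalg.norm(pts - np.asarray(poi_xy), axis=1)
        return float((dist <= radius).mean())

    print(prob_within_radius((1.2, 0.5), (0.0, 0.0), 1.0, 0.8, 0.3, np.pi / 6))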

  18. Practical Applications for Earthquake Scenarios Using ShakeMap

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Worden, B.; Quitoriano, V.; Goltz, J.

    2001-12-01

    In planning and coordinating emergency response, utilities, local government, and other organizations are best served by conducting training exercises based on realistic earthquake situations, ones that they are most likely to face. Scenario earthquakes can fill this role; they can be generated for any geologically plausible earthquake or for actual historic earthquakes. ShakeMap Web pages now display selected earthquake scenarios (www.trinet.org/shake/archive/scenario/html) and more events will be added as they are requested and produced. We will discuss the methodology and provide practical examples where these scenarios are used directly for risk reduction. Given a selected event, we have developed tools to make it relatively easy to generate a ShakeMap earthquake scenario using the following steps: 1) Assume a particular fault or fault segment will (or did) rupture over a certain length, 2) Determine the magnitude of the earthquake based on assumed rupture dimensions, 3) Estimate the ground shaking at all locations in the chosen area around the fault, and 4) Represent these motions visually by producing ShakeMaps and generating ground motion input for loss estimation modeling (e.g., FEMA's HAZUS). At present, ground motions are estimated using empirical attenuation relationships to estimate peak ground motions on rock conditions. We then correct the amplitude at that location based on the local site soil (NEHRP) conditions as we do in the general ShakeMap interpolation scheme. Finiteness is included explicitly, but directivity enters only through the empirical relations. Although current ShakeMap earthquake scenarios are empirically based, substantial improvements in numerical ground motion modeling have been made in recent years. However, loss estimation tools, HAZUS for example, typically require relatively high frequency (3 Hz) input for predicting losses, above the range of frequencies successfully modeled to date. Achieving full-synthetic ground motion estimates that will substantially improve over empirical relations at these frequencies will require developing cost-effective numerical tools for proper theoretical inclusion of known complex ground motion effects. Current efforts underway must continue in order to obtain site, basin, and deeper crustal structure, and to characterize and test 3D earth models (including attenuation and nonlinearity). In contrast, longer period synthetics (>2 sec) are currently being generated in a deterministic fashion to include 3D and shallow site effects, an improvement on empirical estimates alone. As progress is made, we will naturally incorporate such advances into the ShakeMap scenario earthquake and processing methodology. Our scenarios are currently used heavily in emergency response planning and loss estimation. Primary users include city, county, state and federal government agencies (e.g., the California Office of Emergency Services, FEMA, the County of Los Angeles) as well as emergency response planners and managers for utilities, businesses, and other large organizations. We have found the scenarios are also of fundamental interest to many in the media and the general community interested in the nature of the ground shaking likely experienced in past earthquakes as well as effects of rupture on known faults in the future.
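
    A toy version of steps 3 and 4 of the scenario recipe above, with a generic attenuation form and illustrative NEHRP amplification factors; the coefficients below are hypothetical, whereas real ShakeMap scenarios use published empirical relations for the region.

    import numpy as np

    # Hypothetical coefficients for a generic attenuation form.
    A, B, C, H_KM = -3.5, 0.9, 1.2, 10.0

    def rock_pga_g(mag, dist_km):
        # ln(PGA) = A + B*M - C*ln(R + h): peak acceleration on rock (step 3).
        return np.exp(A + B * mag - C * np.log(dist_km + H_KM))

    # Illustrative NEHRP site amplification factors (step 4).
    SITE_FACTOR = {"B": 1.0, "C": 1.3, "D": 1.6}

    def site_pga_g(mag, dist_km, nehrp):
        # Rock motion from the empirical relation, amplified for local soil.
        return rock_pga_g(mag, dist_km) * SITE_FACTOR[nehrp]

    print(site_pga_g(7.0, 10.0, "D"))   # M7 at 10 km on soft soil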

  19. Estimation of Health Benefits From a Local Living Wage Ordinance

    PubMed Central

    Bhatia, Rajiv; Katz, Mitchell

    2001-01-01

    Objectives. This study estimated the magnitude of health improvements resulting from a proposed living wage ordinance in San Francisco. Methods. Published observational models of the relationship of income to health were applied to predict improvements in health outcomes associated with proposed wage increases in San Francisco. Results. With adoption of a living wage of $11.00 per hour, we predict decreases in premature death from all causes for adults aged 24 to 44 years working full-time in families whose current annual income is $20 000 (for men, relative hazard [RH] = 0.94, 95% confidence interval [CI] = 0.92, 0.97; for women, RH = 0.96, 95% CI = 0.95, 0.98). Improvements in subjectively rated health and reductions in the number of days sick in bed, in limitations of work and activities of daily living, and in depressive symptoms were also predicted, as were increases in daily alcohol consumption. For the offspring of full-time workers currently earning $20 000, a living wage predicts an increase of 0.25 years (95% CI = 0.20, 0.30) of completed education, increased odds of completing high school (odds ratio = 1.34, 95% CI = 1.20, 1.49), and a reduced risk of early childbirth (RH = 0.78, 95% CI = 0.69, 0.86). Conclusions. A living wage in San Francisco is associated with substantial health improvement. PMID:11527770

  20. Green Routing Fuel Saving Opportunity Assessment: A Case Study on California Large-Scale Real-World Travel Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Holden, Jacob; Gonder, Jeff

    New technologies, such as connected and automated vehicles, have attracted more and more researchers to improving the energy efficiency and environmental impact of current transportation systems. The green routing strategy instructs a vehicle to select the most fuel-efficient route before the vehicle departs. It benefits the current transportation system with fuel-saving opportunities by identifying the greenest route. This paper introduces an evaluation framework for estimating the benefits of green routing based on large-scale, real-world travel data. The framework has the capability to quantify fuel savings by estimating the fuel consumption of actual routes and comparing it to routes procured by navigation systems. A route-based fuel consumption estimation model, considering road traffic conditions, functional class, and road grade, is proposed and used in the framework. An experiment using a large-scale data set from the California Household Travel Survey global positioning system trajectory database indicates that 31% of actual routes have fuel-saving potential, with a cumulative estimated fuel savings of 12%.

  1. Green Routing Fuel Saving Opportunity Assessment: A Case Study on California Large-Scale Real-World Travel Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Holden, Jacob; Gonder, Jeffrey D

    New technologies, such as connected and automated vehicles, have attracted more and more researchers to improving the energy efficiency and environmental impact of current transportation systems. The green routing strategy instructs a vehicle to select the most fuel-efficient route before the vehicle departs. It benefits the current transportation system with fuel-saving opportunities by identifying the greenest route. This paper introduces an evaluation framework for estimating the benefits of green routing based on large-scale, real-world travel data. The framework has the capability to quantify fuel savings by estimating the fuel consumption of actual routes and comparing it to routes procured by navigation systems. A route-based fuel consumption estimation model, considering road traffic conditions, functional class, and road grade, is proposed and used in the framework. An experiment using a large-scale data set from the California Household Travel Survey global positioning system trajectory database indicates that 31% of actual routes have fuel-saving potential, with a cumulative estimated fuel savings of 12%.
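
    A minimal sketch of the route-level comparison such a framework performs: per-link fuel rates keyed by traffic condition, functional class, and grade bin, summed along the actual and candidate routes; the rate table and routes below are hypothetical.

    def route_fuel_liters(links, rate_table):
        # Each link carries a distance (km) and a (class, congestion,
        # grade bin) key; the rate table maps that key to liters per km,
        # values that would come from the route-based consumption model.
        return sum(km * rate_table[key] for km, key in links)

    def green_saving(actual, candidate, rate_table):
        # Fractional saving of the candidate (greenest) route vs. the
        # actual route; positive means the trip had fuel-saving potential.
        a = route_fuel_liters(actual, rate_table)
        c = route_fuel_liters(candidate, rate_table)
        return (a - c) / a

    # Hypothetical rates (L/km) keyed by (class, congestion, grade bin):
    rates = {("arterial", "heavy", 0): 0.11, ("freeway", "free", 0): 0.07}
    actual = [(5.0, ("arterial", "heavy", 0))]
    best = [(7.0, ("freeway", "free", 0))]
    print(green_saving(actual, best, rates))   # ~0.11 -> 11% saving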

  2. Improvement of automatic control system for high-speed current collectors

    NASA Astrophysics Data System (ADS)

    Sidorov, O. A.; Goryunov, V. N.; Golubkov, A. S.

    2018-01-01

    The article considers ways of regulating pantographs to ensure the quality and reliability of current collection at high speeds. To assess the impact of regulation, an integral criterion of current collection quality was proposed, taking into account the efficiency and reliability of pantograph operation. The study was carried out using a mathematical model of the interaction of the pantograph and the catenary system, allowing assessment of the contact force and the intensity of arcing in the contact zone at different movement speeds. The simulation results allowed us to estimate the efficiency of different methods of pantograph regulation and determine the best option.

  3. 2013 Cost of Wind Energy Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mone, C.; Smith, A.; Maples, B.

    2015-02-01

    This report uses representative project types to estimate the levelized cost of wind energy (LCOE) in the United States for 2013. Scheduled to be published on an annual basis, it relies on both market and modeled data to maintain a current understanding of wind generation cost trends and drivers. It is intended to provide insight into current component-level costs and a basis for understanding variability in the LCOE across the industry. Data and tools developed from this analysis are used to inform wind technology cost projections, goals, and improvement opportunities.
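
    For orientation, the simple levelized-cost identity such reviews are built around is annualized capital plus annual operating cost per unit of net energy; the fixed charge rate and project numbers below are illustrative, not the report's values.

    def lcoe_per_mwh(capex, fcr, annual_opex, net_aep_mwh):
        # Simple LCOE: annualized capital (fixed charge rate x CapEx) plus
        # annual operating cost, per MWh of net annual energy production.
        return (fcr * capex + annual_opex) / net_aep_mwh

    # Illustrative land-based project: $1,600/kW CapEx, $45/kW-yr OpEx,
    # 9.5% fixed charge rate, 40% net capacity factor, 2 MW turbine.
    kw = 2000.0
    print(lcoe_per_mwh(1600.0 * kw, 0.095, 45.0 * kw, kw * 8760 * 0.40 / 1000))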

  4. A numerical study of wave-current interaction through surface and bottom stresses: Coastal ocean response to Hurricane Fran of 1996

    NASA Astrophysics Data System (ADS)

    Xie, L.; Pietrafesa, L. J.; Wu, K.

    2003-02-01

    A three-dimensional wave-current coupled modeling system is used to examine the influence of waves on coastal currents and sea level. This coupled modeling system consists of the wave model WAM (Cycle 4) and the Princeton Ocean Model (POM). The results from this study show that it is important to incorporate surface wave effects into coastal storm surge and circulation models. Specifically, we find that (1) storm surge models without coupled surface waves generally underestimate not only the peak surge but also the coastal water level drop, which can also have a substantial impact on the coastal environment, (2) introducing the wave-induced surface stress effect into storm surge models can significantly improve storm surge prediction, (3) incorporating wave-induced bottom stress into the coupled wave-current model further improves storm surge prediction, and (4) calibration of the wave module according to minimum error in significant wave height does not necessarily result in an optimum wave module in a wave-current coupled system for current and storm surge prediction.

  5. A Probabilistic Model for Estimating the Depth and Threshold Temperature of C-fiber Nociceptors

    PubMed Central

    Dezhdar, Tara; Moshourab, Rabih A.; Fründ, Ingo; Lewin, Gary R.; Schmuker, Michael

    2015-01-01

    The subjective experience of thermal pain follows the detection and encoding of noxious stimuli by primary afferent neurons called nociceptors. However, nociceptor morphology has been hard to access and the mechanisms of signal transduction remain unresolved. In order to understand how heat transducers in nociceptors are activated in vivo, it is important to estimate the temperatures that directly activate the skin-embedded nociceptor membrane. Hence, the nociceptor’s temperature threshold must be estimated, which in turn will depend on the depth at which transduction happens in the skin. Since the temperature at the receptor cannot be accessed experimentally, such an estimation can currently only be achieved through modeling. However, the current state-of-the-art model to estimate temperature at the receptor suffers from the fact that it cannot account for the natural stochastic variability of neuronal responses. We improve this model using a probabilistic approach which accounts for uncertainties and potential noise in the system. Using a data set of 24 C-fibers recorded in vitro, we show that, even without detailed knowledge of the bio-thermal properties of the system, the probabilistic model that we propose here is capable of providing estimates of threshold and depth in cases where the classical method fails. PMID:26638830

  6. Buried transuranic wastes at ORNL: Review of past estimates and reconciliation with current data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trabalka, J.R.

    1997-09-01

    Inventories of buried (generally meaning disposed of) transuranic (TRU) wastes at Oak Ridge National Laboratory (ORNL) have been estimated for site remediation and waste management planning over a period of about two decades. Estimates were required because of inadequate waste characterization and incomplete disposal records. For a variety of reasons, including changing definitions of TRU wastes, differing objectives for the estimates, and poor historical data, the published results have sometimes been in conflict. The purpose of this review was (1) to attempt to explain both the rationale for and differences among the various estimates, and (2) to update the estimates based on more recent information obtained from waste characterization and from evaluations of ORNL waste data bases and historical records. The latter included information obtained from an expert panel's review and reconciliation of inconsistencies in data identified during preparation of the ORNL input for the third revision of the Baseline Inventory Report for the Waste Isolation Pilot Plant. The results summarize current understanding of the relationship between past estimates of buried TRU wastes and provide the most up-to-date information on recorded burials thereafter. The limitations of available information on the latter and thus the need for improved waste characterization are highlighted.

  7. Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States

    NASA Astrophysics Data System (ADS)

    Sousan, Sinan Dhia Jameel

    This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 × 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 at IMPROVE monitoring sites and from 0.99 to 0.89 at STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that scaled the observation error by land use (i.e., urban or rural locations). In theory, urban locations should have less effect on surrounding areas than rural sites, which can be controlled using site representation error. The annual evaluations showed substantial improvements in model performance with increases in the correlation coefficient from 0.36 (prior) to 0.76 (posterior), and decreases in the fractional error from 0.43 (prior) to 0.15 (posterior). In addition, the normalized mean error decreased from 0.36 (prior) to 0.13 (posterior), and the RMSE decreased from 5.39 µg m⁻³ (prior) to 2.32 µg m⁻³ (posterior). OI decreased model bias for both large spatial areas and point locations, and could be extended to more advanced data assimilation methods. The current work will be applied to a five-year (2000-2004) CMAQ simulation aimed at improving aerosol model estimates. The posterior model concentrations will be used to inform exposure studies over the U.S. that relate aerosol exposure to mortality and morbidity rates. Future improvements for the OI techniques used in the current study will include combining both surface and satellite data to improve posterior model estimates. Satellite data have high spatial and temporal resolutions in comparison to surface measurements, which are scarce but more accurate than model estimates. The satellite data are subject to noise affected by location and season of retrieval.
The implementation of OI to combine satellite and surface data sets has the potential to improve posterior model estimates for locations that have no direct measurements.
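
    The core optimal interpolation analysis step used in both the satellite and surface experiments has a standard closed form, sketched below on a toy three-cell domain; the covariance choices are illustrative, including the inflated observation variance standing in for the urban site-representation error mentioned above.

    import numpy as np

    def oi_update(xb, B, y, H, R):
        # Optimal interpolation analysis step:
        #   xa = xb + K (y - H xb),  K = B H^T (H B H^T + R)^(-1)
        # xb: prior (model) concentrations, B: model error covariance,
        # y: observations, H: observation operator, R: observation error
        # covariance (where site-representation scaling would enter).
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return xb + K @ (y - H @ xb)

    # Toy 3-cell domain observed at cells 0 and 2:
    xb = np.array([10.0, 12.0, 9.0])
    B = 4.0 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))) / 2.0)
    H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    R = np.diag([1.0, 2.25])   # inflated variance at the less representative site
    print(oi_update(xb, B, np.array([13.0, 8.0]), H, R))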

  8. Improving Best Air Conditioner Efficiency by 20-30% through a High Efficiency Fan and Diffuser Stage Coupled with an Evaporative Condenser Pre-Cooler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Danny S; Sherwin, John R; Raustad, Richard

    2014-04-10

    The Florida Solar Energy Center (FSEC) conducted a research project to improve the best residential air conditioner condenser technology currently available on the market by retrofitting a commercially-available unit with both a high efficiency fan system and an evaporative pre-cooler. The objective was to integrate these two concepts to achieve an ultra-efficient residential air conditioner design. The project produced a working prototype that was 30% more efficient compared to the best currently-available technologies; the peak energy efficiency ratio (EER) was improved by 41%. Efficiency at the Air-Conditioning and Refrigeration Institute (ARI) standard B-condition, which is used to estimate the seasonal energy efficiency ratio (SEER), was raised from a nominal 21 Btu/Wh to 32 Btu/Wh.

  9. Stochastic error model corrections to improve the performance of bottom-up precipitation products for hydrologic applications

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Massari, C.; Ciabatta, L.; Brocca, L.

    2016-12-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements for obtaining an estimate of the fallen precipitation within the interval between two satellite overpasses. As a result, the nature of the measurement is different and complementary to the one of classical precipitation products and could provide a different valid perspective to substitute or improve current rainfall estimates. However, uncertainties in the SM2RAIN product are still not well known and could represent a limitation in utilizing this dataset for hydrological applications. Therefore, quantifying the uncertainty associated with SM2RAIN is necessary for enabling its use. The study is conducted over the Italian territory for a 5-yr period (2010-2014). A number of satellite precipitation error properties, typically used in error modeling, are investigated and include probability of detection, false alarm rates, missed events, spatial correlation of the error, and hit biases. After this preliminary uncertainty analysis, the potential of applying the stochastic rainfall error model SREM2D to correct SM2RAIN and to improve its performance in hydrologic applications is investigated. The use of SREM2D for characterizing the error in precipitation by SM2RAIN would be highly useful for the merging and the integration steps in its algorithm, i.e., the merging of multiple soil moisture derived products (e.g., SMAP, SMOS, ASCAT) and the integration of soil moisture derived and state of the art satellite precipitation products (e.g., GPM IMERG).
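
    A minimal sketch of the standard contingency statistics this error analysis relies on (probability of detection and false alarm ratio), computed from paired reference and satellite-derived rain series; the rain/no-rain threshold is a hypothetical choice.

    import numpy as np

    def detection_stats(reference, estimate, thresh=0.1):
        # Rain/no-rain contingency at a threshold (mm per interval) from
        # paired gauge and SM2RAIN-style series: probability of detection
        # (POD) and false alarm ratio (FAR).
        ref = np.asarray(reference) >= thresh
        est = np.asarray(estimate) >= thresh
        hits = np.sum(ref & est)
        misses = np.sum(ref & ~est)
        false = np.sum(~ref & est)
        pod = hits / (hits + misses) if hits + misses else np.nan
        far = false / (hits + false) if hits + false else np.nan
        return pod, far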

  10. Confidence estimation for quantitative photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena

    2018-02-01

    Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.

  11. A Sensorless Predictive Current Controlled Boost Converter by Using an EKF with Load Variation Effect Elimination Function

    PubMed Central

    Tong, Qiaoling; Chen, Chen; Zhang, Qiao; Zou, Xuecheng

    2015-01-01

    To realize accurate current control for a boost converter, a precise measurement of the inductor current is required to achieve high-resolution current regulation. Current sensors are widely used to measure the inductor current. However, the current sensors and their processing circuits add significant hardware cost, delay, and noise to the system, and they can also harm system reliability. Therefore, current sensorless control techniques can bring cost-effective and reliable solutions for various boost converter applications. According to the derived accurate model, which accounts for a number of parasitics, the boost converter is a nonlinear system. An Extended Kalman Filter (EKF) is proposed for inductor current estimation and output voltage filtering. With this approach, the system can have the same advantages as a sensored current-control mode. To implement the EKF, the load value is necessary. However, the load may vary from time to time, which can lead to errors in the estimated current and filtered output voltage. To solve this issue, a load variation effect elimination (LVEE) module is added. In addition, a predictive average current controller is used to regulate the current. Compared with a conventional voltage-controlled system, the transient response is greatly improved, since it only takes two switching cycles for the current to reach its reference. Finally, experimental results are presented to verify the stable operation and output tracking capability for large-signal transients of the proposed algorithm. PMID:25928061
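
    The paper's exact model includes parasitics and the LVEE module; as a rough illustration of the estimation idea only, the sketch below runs an EKF on a simplified averaged boost-converter model, estimating the inductor current from output-voltage measurements alone. Component values and noise covariances are illustrative assumptions, not the paper's.

    ```python
    import numpy as np

    # Averaged boost-converter model (illustrative component values)
    L, C, R, Vin, T = 100e-6, 470e-6, 10.0, 12.0, 10e-6  # H, F, ohm, V, s

    def f(x, d):
        """One Euler step of the averaged boost converter, x = [iL, vo]."""
        iL, vo = x
        return np.array([iL + T * (Vin - (1 - d) * vo) / L,
                         vo + T * ((1 - d) * iL - vo / R) / C])

    def jacobian(d):
        return np.eye(2) + T * np.array([[0.0, -(1 - d) / L],
                                         [(1 - d) / C, -1.0 / (R * C)]])

    H = np.array([[0.0, 1.0]])      # only the output voltage is measured
    Q = np.diag([1e-4, 1e-4])       # process noise covariance (tuning values)
    Rm = np.array([[1e-3]])         # measurement noise variance

    def ekf_step(x, P, d, y):
        x_p = f(x, d)               # predict state
        F = jacobian(d)
        P_p = F @ P @ F.T + Q       # predict covariance
        S = H @ P_p @ H.T + Rm      # innovation covariance
        K = P_p @ H.T @ np.linalg.inv(S)
        x_n = x_p + (K @ (np.array([[y]]) - H @ x_p.reshape(2, 1))).ravel()
        P_n = (np.eye(2) - K @ H) @ P_p
        return x_n, P_n

    # one illustrative step: duty cycle 0.5, measured output 23.9 V
    x, P = np.array([2.0, 24.0]), np.eye(2)
    x, P = ekf_step(x, P, d=0.5, y=23.9)
    print(x)   # updated [inductor current, output voltage] estimate
    ```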

  12. Cost-effectiveness and population outcomes of general population screening for hepatitis C.

    PubMed

    Coffin, Phillip O; Scott, John D; Golden, Matthew R; Sullivan, Sean D

    2012-05-01

    Current US guidelines recommend limiting hepatitis C virus (HCV) screening to high-risk individuals, and 50%-75% of infected persons remain unaware of their status. To estimate the cost-effectiveness and population-level impact of adding one-time HCV screening of the US population aged 20-69 years to current guidelines, we developed a decision analytic model for the screening intervention and a Markov model with annual transitions to estimate natural history. Subanalyses included protease inhibitor therapy and screening of those at highest risk of infection (birth year 1945-1965). We relied on published literature and took a lifetime, societal perspective. Compared to current guidelines, the incremental cost per quality-adjusted life year gained (ICER) was $7900 for general population screening and $4200 for screening by birth year, which dominated general population screening if cost, clinician uptake, and median age of diagnoses were assumed equivalent. General population screening remained cost-effective in all one-way sensitivity analyses, 30 000 Monte Carlo simulations, and scenarios in which background mortality was doubled, all genotype 1 patients were treated with protease inhibitors, and most parameters were set unfavorable to increased screening. The ICER was lowest if screening was applied to a population with liver fibrosis similar to 2010 estimates. Approximately 1% of liver-related deaths would be averted per 15% of the general population screened; the impact would be greater with improved referral, treatment uptake, and cure. Broader screening for HCV would likely be cost-effective, but significantly reducing HCV-related morbidity and mortality would also require improved rates of referral, treatment, and cure.
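
    The headline numbers are incremental cost-effectiveness ratios; the calculation itself is a one-liner. The sketch below shows the arithmetic with hypothetical per-person lifetime costs and QALYs chosen only to reproduce the order of magnitude of the $7900 figure; they are not the paper's inputs.

    ```python
    def icer(cost_new, qaly_new, cost_base, qaly_base):
        """Incremental cost-effectiveness ratio: extra dollars per QALY gained."""
        return (cost_new - cost_base) / (qaly_new - qaly_base)

    # hypothetical per-person lifetime values, not the paper's model outputs
    print(icer(cost_new=51_580, qaly_new=18.20,
               cost_base=51_106, qaly_base=18.14))   # -> ~7900 $/QALY
    ```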

  13. Using experimental design and spatial analyses to improve the precision of NDVI estimates in upland cotton field trials

    USDA-ARS?s Scientific Manuscript database

    Controlling for spatial variability is important in high-throughput phenotyping studies that enable large numbers of genotypes to be evaluated across time and space. In the current study, we compared the efficacy of different experimental designs and spatial models in the analysis of canopy spectral...

  14. Generating Multiple Imputations for Matrix Sampling Data Analyzed with Item Response Models.

    ERIC Educational Resources Information Center

    Thomas, Neal; Gan, Nianci

    1997-01-01

    Describes and assesses missing data methods currently used to analyze data from matrix sampling designs implemented by the National Assessment of Educational Progress. Several improved methods are developed, and these models are evaluated using an EM algorithm to obtain maximum likelihood estimates followed by multiple imputation of complete data…

  15. The role of photosynthesis in improving maize tolerance to ozone pollution

    USDA-ARS?s Scientific Manuscript database

    Ground-level ozone pollution has more than doubled since pre-industrial times, and is currently estimated to cause up to 10% reductions in U.S. maize yields annually. Maize productivity is reduced by exposure to ozone as it diffuses through stomatal pores and reacts to form damaging reactive oxygen ...

  16. Magnetic field feature extraction and selection for indoor location estimation.

    PubMed

    Galván-Tejada, Carlos E; García-Vázquez, Juan Pablo; Brena, Ramon F

    2014-06-20

    User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allows us to exploit not only infrastructure made for everyday use, such as WiFi, but also natural infrastructure, such as the Earth's magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5, regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to reject false positives (specificity) in both scenarios.
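
    The abstract does not give the GA's encoding or fitness function; a common setup, sketched below, encodes each candidate feature subset as a 46-bit mask and evolves it with tournament selection, single-point crossover, and bit-flip mutation. The fitness here is a toy stand-in (it rewards a synthetic "informative" subset and penalizes subset size); in practice it would be cross-validated localization accuracy.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N_FEATURES = 46
    INFORMATIVE = {3, 11, 19, 27, 40}   # toy ground truth for the demo fitness

    def fitness(mask):
        """Stand-in for model accuracy: reward informative features, penalize size."""
        chosen = set(np.flatnonzero(mask))
        return len(chosen & INFORMATIVE) - 0.05 * len(chosen)

    def ga_select(pop_size=40, generations=60, p_mut=0.02):
        pop = rng.integers(0, 2, size=(pop_size, N_FEATURES))
        for _ in range(generations):
            scores = np.array([fitness(ind) for ind in pop])
            # tournament selection: better of two random individuals survives
            idx = rng.integers(0, pop_size, size=(pop_size, 2))
            winners = np.where(scores[idx[:, 0]] >= scores[idx[:, 1]],
                               idx[:, 0], idx[:, 1])
            parents = pop[winners]
            # single-point crossover between consecutive parents
            children = parents.copy()
            for i in range(0, pop_size - 1, 2):
                c = rng.integers(1, N_FEATURES)
                children[i, c:], children[i + 1, c:] = (parents[i + 1, c:].copy(),
                                                        parents[i, c:].copy())
            # bit-flip mutation
            flip = rng.random(children.shape) < p_mut
            pop = np.where(flip, 1 - children, children)
        return np.flatnonzero(max(pop, key=fitness))

    print("selected features:", ga_select())
    ```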

  17. Estimation of Random Medium Parameters from 2D Post-Stack Seismic Data and Its Application in Seismic Inversion

    NASA Astrophysics Data System (ADS)

    Yang, X.; Zhu, P.; Gu, Y.; Xu, Z.

    2015-12-01

    Small-scale heterogeneities of the subsurface medium can be characterized conveniently and effectively using a few simple random medium parameters (RMP), such as autocorrelation length, angle, and roughness factor. The estimation of these parameters is significant in both oil reservoir prediction and metallic mine exploration. The poor accuracy and low stability of current estimation approaches limit the application of random medium theory in seismic exploration. This study focuses on improving the accuracy and stability of RMP estimation from post-stack seismic data and its application in seismic inversion. Experimental and theoretical analyses indicate that, although the autocorrelation of a random medium is related to that of the corresponding post-stack seismic data, the relationship is clearly affected by the seismic dominant frequency, the autocorrelation length, the roughness factor, and so on. The error in computing the autocorrelation of a finite, discrete model also decreases the accuracy. To improve the precision of RMP estimation, we designed two improvements. First, we apply a region growing algorithm, often used in image processing, to reduce the influence of noise in the autocorrelation calculated by the power spectrum method. Second, the orientation of the autocorrelation is used as a new constraint in the estimation algorithm. Numerical experiments show that this is feasible. In addition, in post-stack seismic inversion of random media, the estimated RMP may be used to constrain the inversion procedure and to construct the initial model. The experimental results indicate that treating the inverted model as a random medium and using relatively accurate estimated RMP to construct the initial model yields better inversion results, containing more detail consistent with the actual subsurface medium.
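
    The abstract names region growing as the noise-suppression step but gives no details; the sketch below shows one plausible variant under that assumption: starting from the autocorrelation peak, it keeps only the 4-connected main lobe above a fraction of the peak value and zeroes detached noise lobes.

    ```python
    import numpy as np
    from collections import deque

    def grow_main_lobe(acf, frac=0.1):
        """Keep only the connected main lobe of a 2D autocorrelation.

        Starts at the peak and grows through 4-connected neighbours whose
        value exceeds frac * peak; detached noise lobes are zeroed out.
        """
        mask = np.zeros(acf.shape, dtype=bool)
        seed = np.unravel_index(np.argmax(acf), acf.shape)
        thresh = frac * acf[seed]
        queue = deque([seed])
        mask[seed] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < acf.shape[0] and 0 <= cc < acf.shape[1]
                        and not mask[rr, cc] and acf[rr, cc] > thresh):
                    mask[rr, cc] = True
                    queue.append((rr, cc))
        return np.where(mask, acf, 0.0)
    ```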

  18. Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate: PMP UNDER CLIMATE CHANGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by designing infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated, and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimates for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparing them with the traditional estimates. PMP in the PNW is projected to increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Moisture track changes tend to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation. Thus, high-quality data for both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhanagopalan, Shriram; White, Ralph E.

    Rotating ring disc electrode (RRDE) experiments are a classic tool for investigating the kinetics of electrochemical reactions. Several standardized methods exist for extracting transport parameters and reaction rate constants from RRDE measurements. In this work, we compare some approximate solutions to the convective diffusion problem popularly used in the literature against a rigorous numerical solution of the Nernst-Planck equations coupled to the three-dimensional flow problem. In light of these computational advancements, we explore design aspects of the RRDE that help improve the sensitivity of our parameter estimation procedure to experimental data. We use oxygen reduction in acidic media, involving three charge transfer reactions and a chemical reaction, as an example, and identify ways to isolate reaction currents for the individual processes in order to accurately estimate the exchange current densities.
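
    Among the popular approximate solutions to convective diffusion at a rotating electrode is the Levich relation for the mass-transport-limited current, i_lim = 0.620 n F A D^(2/3) w^(1/2) v^(-1/6) C. A minimal sketch with illustrative oxygen-reduction parameter values (not taken from this paper):

    ```python
    import math

    def levich_current(n, A_cm2, D_cm2_s, omega_rad_s, nu_cm2_s, C_mol_cm3):
        """Levich limiting current (A) at a rotating disc electrode."""
        F = 96485.0   # Faraday constant, C/mol
        return (0.620 * n * F * A_cm2 * D_cm2_s**(2 / 3)
                * math.sqrt(omega_rad_s) * nu_cm2_s**(-1 / 6) * C_mol_cm3)

    # O2 reduction in acid, illustrative values: n=4, 0.196 cm2 disc,
    # D=1.9e-5 cm2/s, nu=0.01 cm2/s, C=1.2e-6 mol/cm3, 1600 rpm rotation
    omega = 1600 * 2 * math.pi / 60
    print(levich_current(4, 0.196, 1.9e-5, omega, 0.01, 1.2e-6))  # ~1 mA
    ```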

  20. Simple and accurate methods for quantifying deformation, disruption, and development in biological tissues

    PubMed Central

    Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros

    2014-01-01

    When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601

  1. Improved predictive ability of climate-human-behaviour interactions with modifications to the COMFA outdoor energy budget model.

    PubMed

    Vanos, J K; Warland, J S; Gillespie, T J; Kenny, N A

    2012-11-01

    The purpose of this paper is to implement current and novel research techniques in human energy budget estimation to give more accurate and efficient application of models by a variety of users. Using the COMFA model, the conditioning level of an individual is incorporated into overall energy budget predictions, giving more realistic estimates of the metabolism experienced at various fitness levels. Through the use of VO2 reserve estimates, errors are found when an elite athlete is modelled as an unconditioned or a conditioned individual, giving budgets underpredicted significantly, by -173 and -123 W m-2, respectively. Such underprediction can result in critical errors regarding heat stress, particularly in highly motivated individuals; thus this revision is critical for athletic individuals. A further improvement to the COMFA model involves improved adaptation of clothing insulation (I_cl), as well as clothing non-uniformity, with changing air temperature (T_a) and metabolic activity (M_act). Equivalent T_a values (for I_cl estimation) are calculated in order to lower the I_cl value with increasing M_act at equal T_a. Furthermore, threshold T_a values are calculated to predict the point at which an individual will change from a uniform I_cl to a segmented I_cl (full ensemble to shorts and a T-shirt). Lastly, improved relative velocity (v_r) estimates were found with a refined equation accounting for the angle of wind to body movement. Differences between the original and improved v_r equations increased with higher wind and activity speeds, and as the wind-to-body angle moved away from 90°. Under moderate microclimate conditions, and wind from behind a person, the convective heat loss and skin temperature estimates were 47 W m-2 and 1.7°C higher when using the improved v_r equation. These model revisions improve the applicability and usability of the COMFA energy budget model for subjects performing physical activity in outdoor environments. Application is possible for other similar energy budget models, and within various urban and rural environments.

  2. Improved predictive ability of climate-human-behaviour interactions with modifications to the COMFA outdoor energy budget model

    NASA Astrophysics Data System (ADS)

    Vanos, J. K.; Warland, J. S.; Gillespie, T. J.; Kenny, N. A.

    2012-11-01

    The purpose of this paper is to implement current and novel research techniques in human energy budget estimation to give more accurate and efficient application of models by a variety of users. Using the COMFA model, the conditioning level of an individual is incorporated into overall energy budget predictions, giving more realistic estimates of the metabolism experienced at various fitness levels. Through the use of VO2 reserve estimates, errors are found when an elite athlete is modelled as an unconditioned or a conditioned individual, giving budgets underpredicted significantly, by -173 and -123 W m-2, respectively. Such underprediction can result in critical errors regarding heat stress, particularly in highly motivated individuals; thus this revision is critical for athletic individuals. A further improvement to the COMFA model involves improved adaptation of clothing insulation (I_cl), as well as clothing non-uniformity, with changing air temperature (T_a) and metabolic activity (M_act). Equivalent T_a values (for I_cl estimation) are calculated in order to lower the I_cl value with increasing M_act at equal T_a. Furthermore, threshold T_a values are calculated to predict the point at which an individual will change from a uniform I_cl to a segmented I_cl (full ensemble to shorts and a T-shirt). Lastly, improved relative velocity (v_r) estimates were found with a refined equation accounting for the angle of wind to body movement. Differences between the original and improved v_r equations increased with higher wind and activity speeds, and as the wind-to-body angle moved away from 90°. Under moderate microclimate conditions, and wind from behind a person, the convective heat loss and skin temperature estimates were 47 W m-2 and 1.7°C higher when using the improved v_r equation. These model revisions improve the applicability and usability of the COMFA energy budget model for subjects performing physical activity in outdoor environments. Application is possible for other similar energy budget models, and within various urban and rural environments.
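
    The refined v_r equation itself is not reproduced in the abstract; the standard starting point, sketched below, treats the relative air velocity as the magnitude of the vector difference between wind velocity and body velocity, which reduces to a law-of-cosines form for a given wind-to-body angle.

    ```python
    import math

    def relative_velocity(v_wind, v_body, angle_deg):
        """Air speed (m/s) experienced by a moving person.

        angle_deg is the angle between the wind vector and the direction of
        body movement (0 deg = tailwind, 180 deg = headwind).
        """
        phi = math.radians(angle_deg)
        return math.sqrt(v_wind**2 + v_body**2
                         - 2 * v_wind * v_body * math.cos(phi))

    print(relative_velocity(3.0, 2.5, 0))     # tailwind:  0.5 m/s
    print(relative_velocity(3.0, 2.5, 90))    # crosswind: ~3.9 m/s
    print(relative_velocity(3.0, 2.5, 180))   # headwind:  5.5 m/s
    ```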

  3. Improved estimation of random vibration loads in launch vehicles

    NASA Technical Reports Server (NTRS)

    Mehta, R.; Erwin, E.; Suryanarayan, S.; Krishna, Murali M. R.

    1993-01-01

    Random vibration induced load is an important component of the total design load environment for payload and launch vehicle components and their support structures. The current approach to random vibration load estimation, particularly at the preliminary design stage, is based on the use of Miles' equation, which assumes a single degree-of-freedom (DOF) system and white noise excitation. This paper examines the implications of using multi-DOF system models and response calculations based on numerical integration with the actual excitation spectra for random vibration load estimation. The analytical study presented considers a two-DOF system and brings out the effects of modal mass, damping and frequency ratios on the random vibration load factor. The results indicate that load estimates based on Miles' equation can be significantly different from the more accurate estimates based on multi-DOF models.
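
    Miles' equation gives the RMS response of a lightly damped single-DOF system to white-noise base excitation: g_rms = sqrt((pi/2) * f_n * Q * W(f_n)), with natural frequency f_n, amplification factor Q, and input acceleration spectral density W. A minimal sketch with illustrative numbers:

    ```python
    import math

    def miles_grms(fn_hz, Q, asd_g2_per_hz):
        """Miles' equation: RMS acceleration of a 1-DOF system under white-noise
        base excitation with input ASD (g^2/Hz) at the natural frequency."""
        return math.sqrt(math.pi / 2 * fn_hz * Q * asd_g2_per_hz)

    # illustrative: 120 Hz mode, Q = 10, 0.04 g^2/Hz input
    print(miles_grms(120.0, 10.0, 0.04))   # ~8.7 g RMS
    ```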

  4. Comparative analysis for various redox flow batteries chemistries using a cost performance model

    NASA Astrophysics Data System (ADS)

    Crawford, Alasdair; Viswanathan, Vilayanur; Stephenson, David; Wang, Wei; Thomsen, Edwin; Reed, David; Li, Bin; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2015-10-01

    The total energy storage system cost is determined by means of a robust performance-based cost model for multiple flow battery chemistries. System aspects such as shunt current losses, pumping losses and various flow patterns through electrodes are accounted for. The system-cost-minimizing objective function determines stack design by optimizing the state of charge operating range, along with current density and current-normalized flow. The model cost estimates are validated using 2-kW stack performance data for the same size electrodes and operating conditions. Using our validated tool, it has been demonstrated that an optimized all-vanadium system has an estimated system cost of < $350 kWh-1 for a 4-h application. With an anticipated decrease in component costs facilitated by economies of scale from larger production volumes, coupled with performance improvements enabled by technology development, the system cost is expected to decrease to $160 kWh-1 for a 4-h application, and to $100 kWh-1 for a 10-h application. This tool has been shared with the redox flow battery community to enable cost estimation using their stack data and guide future directions.

  5. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    NASA Technical Reports Server (NTRS)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for estimating environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected to provide a good balance between accuracy and fairly rapid turnaround times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular, this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities, while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

  6. A Method to Simultaneously Detect the Current Sensor Fault and Estimate the State of Energy for Batteries in Electric Vehicles

    PubMed Central

    Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang

    2016-01-01

    Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical in SOE estimation, and a current sensor is usually utilized to obtain the latest current information. If the current sensor fails, however, the SOE estimation may suffer large errors. Therefore, this paper attempts to make the following contributions: current sensor fault detection and SOE estimation are realized simultaneously. Through the proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. By taking advantage of the accurately estimated current sensor fault, the influence caused by the fault can be eliminated and compensated. As a result, the SOE estimation results are influenced little by the fault. In addition, a simulation and experimental workbench is established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately. Simultaneously, the SOE can also be estimated accurately, and the estimation error is influenced little by the fault. The maximum SOE estimation error is less than 2%, even though the large current error caused by the current sensor fault still exists. PMID:27548183

  7. A Method to Simultaneously Detect the Current Sensor Fault and Estimate the State of Energy for Batteries in Electric Vehicles.

    PubMed

    Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang

    2016-08-19

    Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical in SOE estimation, and a current sensor is usually utilized to obtain the latest current information. If the current sensor fails, however, the SOE estimation may suffer large errors. Therefore, this paper attempts to make the following contributions: current sensor fault detection and SOE estimation are realized simultaneously. Through the proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. By taking advantage of the accurately estimated current sensor fault, the influence caused by the fault can be eliminated and compensated. As a result, the SOE estimation results are influenced little by the fault. In addition, a simulation and experimental workbench is established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately. Simultaneously, the SOE can also be estimated accurately, and the estimation error is influenced little by the fault. The maximum SOE estimation error is less than 2%, even though the large current error caused by the current sensor fault still exists.
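
    The record does not spell out the observer equations; the sketch below illustrates the principle on a scalar toy plant: a proportional term corrects the state estimate while an integral term accumulates the innovation, so a constant sensor bias (the fault) is absorbed by the integral state and can be read off directly. The plant, gains, and fault size are all illustrative.

    ```python
    # Plant: simple first-order current dynamics x' = a*x + b*u (illustrative)
    a, b, dt = -1.0, 1.0, 0.001
    L1, L2 = 50.0, 400.0       # proportional and integral observer gains (tuned)

    x, xh, fh = 0.0, 0.0, 0.0  # true state, state estimate, fault estimate
    for k in range(5000):
        t = k * dt
        u = 1.0                          # constant current demand
        f = 0.5 if t > 2.0 else 0.0      # sensor bias fault appears at t = 2 s
        x += dt * (a * x + b * u)        # true plant
        y = x + f                        # faulty current measurement
        e = y - (xh + fh)                # innovation includes the fault estimate
        xh += dt * (a * xh + b * u + L1 * e)   # proportional correction
        fh += dt * (L2 * e)                    # integral action tracks the bias

    print(f"true fault 0.5, estimated {fh:.3f}")   # approaches 0.5
    ```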

  8. Quantification of Microbial Phenotypes

    PubMed Central

    Martínez, Verónica S.; Krömer, Jens O.

    2016-01-01

    Metabolite profiling technologies have improved to generate close-to-quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and discuss the current challenges in generating fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. We explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and describe the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimates of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
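
    The directionality constraint rests on the Gibbs energy under actual metabolite concentrations, dG = dG°' + RT ln Q; a reaction can proceed forward only if dG < 0. A minimal sketch for a one-substrate, one-product reaction (the standard Gibbs energy and concentrations are illustrative, and real networks must handle cofactors and activity corrections):

    ```python
    import math

    R, T = 8.314e-3, 310.15   # kJ/(mol*K), 37 degC

    def gibbs(delta_g0_kj, product_conc, substrate_conc):
        """Gibbs energy (kJ/mol) of A -> B at given concentrations (mol/L)."""
        Q = product_conc / substrate_conc       # mass-action ratio
        return delta_g0_kj + R * T * math.log(Q)

    # illustrative A -> B reaction, product held far below substrate
    dG = gibbs(-16.7, product_conc=1e-4, substrate_conc=5e-3)
    print(dG, "-> forward-feasible" if dG < 0 else "-> infeasible")
    ```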

  9. Estimation of potential maximum biomass of trout in Wyoming streams to assist management decisions

    USGS Publications Warehouse

    Hubert, W.A.; Marwitz, T.D.; Gerow, K.G.; Binns, N.A.; Wiley, R.W.

    1996-01-01

    Fishery managers can benefit from knowledge of the potential maximum biomass (PMB) of trout in streams when making decisions on the allocation of resources to improve fisheries. Resources are most likely to be expended on streams with high PMB and with large differences between PMB and currently measured biomass. We developed and tested a model that uses four easily measured habitat variables to estimate PMB (upper 90th percentile of predicted mean biomass) of trout (Oncorhynchus spp., Salmo trutta, and Salvelinus fontinalis) in Wyoming streams. The habitat variables were proportion of cover, elevation, wetted width, and channel gradient. The PMB model was constructed from data on 166 stream reaches throughout Wyoming and validated on an independent data set of 50 stream reaches. Prediction of PMB, in combination with estimation of current biomass and information on habitat quality, can provide managers with insight into the extent to which management actions may enhance trout biomass.

  10. Estimating the Mass of the Milky Way Using the Ensemble of Classical Satellite Galaxies

    NASA Astrophysics Data System (ADS)

    Patel, Ekta; Besla, Gurtina; Sohn, Sangmo Tony; Mandel, Kaisey

    2018-06-01

    High-precision proper motions are currently available for approximately 20% of the Milky Way's known satellite galaxies. Often, the 6D phase space information of each satellite is used separately to constrain the mass of the MW. In this talk, I will discuss the Bayesian framework outlined in Patel et al. 2017b for inferring the MW's mass using satellite properties such as specific orbital angular momentum, rather than just position and velocity. By extending this framework from one satellite to a population of satellites, we can now form simultaneous MW mass estimates using the Illustris-Dark cosmological simulation that are unbiased by high-speed satellites such as Leo I (Patel et al., submitted). Our resulting MW mass estimates reduce the current factor-of-two uncertainty in the mass range of the MW and show promising signs of improvement as upcoming ground- and space-based observatories obtain proper motions for additional MW satellite galaxies.

  11. How Much Water is in That Snowpack? Improving Basin-wide Snow Water Equivalent Estimates from the Airborne Snow Observatory

    NASA Astrophysics Data System (ADS)

    Bormann, K.; Painter, T. H.; Marks, D. G.; Kirchner, P. B.; Winstral, A. H.; Ramirez, P.; Goodale, C. E.; Richardson, M.; Berisford, D. F.

    2014-12-01

    In the western US, snowmelt from the mountains contributes the vast majority of the fresh water supply in an otherwise dry region. With much of California currently experiencing extreme drought, it is critical for water managers to have accurate basin-wide estimates of snow water content during the spring melt season. At the forefront of basin-scale snow monitoring is the Jet Propulsion Laboratory's Airborne Snow Observatory (ASO). With combined LiDAR/spectrometer instruments and weekly flights over key basins throughout California, the ASO suite is capable of retrieving high-resolution basin-wide snow depth and albedo observations. To make best use of these high-resolution snow depths, spatially distributed snow density data are required to derive snow water equivalent (SWE) from the measured depths. Snow density is a spatially and temporally variable property and is difficult to estimate at basin scales. Currently, ASO uses a physically based snow model (iSnobal) to resolve distributed snow density dynamics across the basin. However, there are issues with the density algorithms in iSnobal, particularly for snow depths below 0.50 m. This shortcoming limited the use of snow density fields from iSnobal during the poor snowfall year of 2014 in the Sierra Nevada, where snow depths were generally low. A deeper understanding of iSnobal model performance and uncertainty for snow density estimation is required. In this study, the model is compared to an existing climate-based statistical method for basin-wide snow density estimation in the Tuolumne basin in the Sierra Nevada and to sparse field density measurements. The objective of this study is to improve the water resource information provided to water managers during future ASO operations by reducing the uncertainty introduced during the snow depth to SWE conversion.

  12. Replacing climatological potential evapotranspiration estimates with dynamic satellite-based observations in operational hydrologic prediction models

    NASA Astrophysics Data System (ADS)

    Franz, K. J.; Bowman, A. L.; Hogue, T. S.; Kim, J.; Spies, R.

    2011-12-01

    In the face of a changing climate, growing populations, and increased human habitation in hydrologically risky locations, both short- and long-range planners increasingly require robust and reliable streamflow forecast information. Current operational forecasting utilizes watershed-scale, conceptual models driven by ground-based (commonly point-scale) observations of precipitation and temperature and by climatological potential evapotranspiration (PET) estimates. The PET values are derived from historic pan evaporation observations and remain static from year to year. Regional, dynamic PET values are vital for improved operational forecasting. With the advent of satellite remote sensing and the adoption of a more flexible operational forecast system by the National Weather Service, incorporation of advanced data products is now more feasible than in years past. In this study, we test a previously developed satellite-derived PET product (UCLA MODIS-PET) in the National Weather Service forecast models and compare the model results to current methods. The UCLA MODIS-PET method is based on the Priestley-Taylor formulation, is driven with MODIS satellite products, and produces a daily, 250 m PET estimate. The focus area is eight headwater basins in the upper Midwest U.S. There is a need to develop improved forecasting methods for this region that account for climatic and landscape changes more readily and effectively than current methods. This region is highly flood prone yet sensitive to prolonged dry periods in late summer and early fall, and is characterized by a highly managed landscape, which has drastically altered the natural hydrologic cycle. Our goal is to improve model simulations, and thereby the initial conditions prior to the start of a forecast, through the use of PET values that better reflect actual watershed conditions. The forecast models are being tested in both distributed and lumped mode.
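
    The Priestley-Taylor formulation estimates PET from available energy alone: PET = alpha * delta/(delta + gamma) * (Rn - G)/lambda. A minimal daily sketch follows, using the FAO-56 form of the saturation-vapour-pressure slope; the constants and inputs are illustrative, and the MODIS-PET product's exact parameterization may differ.

    ```python
    import math

    def priestley_taylor_pet(rn_mj, g_mj, t_air_c, alpha=1.26):
        """Daily Priestley-Taylor PET (mm/day).

        rn_mj, g_mj : net radiation and ground heat flux (MJ m-2 day-1)
        t_air_c     : mean air temperature (deg C)
        """
        # slope of the saturation vapour pressure curve (kPa/degC), FAO-56 form
        es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
        delta = 4098.0 * es / (t_air_c + 237.3) ** 2
        gamma = 0.066   # psychrometric constant (kPa/degC), near sea level
        lam = 2.45      # latent heat of vaporization (MJ/kg)
        return alpha * delta / (delta + gamma) * (rn_mj - g_mj) / lam

    print(priestley_taylor_pet(rn_mj=15.0, g_mj=1.0, t_air_c=22.0))  # ~5.1 mm/day
    ```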

  13. Estimating the NIH efficient frontier.

    PubMed

    Bisias, Dimitrios; Lo, Andrew W; Watkins, James F

    2012-01-01

    The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions, one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent, repeatable, and expressly designed to reduce the burden of disease. By approaching funding decisions in a more analytical fashion, it may be possible to improve their ultimate outcomes while reducing unintended consequences.
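
    An efficient frontier is the set of allocations with the lowest risk at each level of expected return. With only 7 asset groups, a quick way to trace it is to sample many random allocations and keep the lowest-variance portfolio in each return bin, as sketched below on synthetic return data (not the paper's YLL-based series).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative annual "returns" (burden reduction) for 7 institute groups;
    # synthetic numbers, not the paper's data.
    n_groups, n_years = 7, 40
    returns = rng.normal(0.05, 0.04, size=(n_years, n_groups))
    mu, cov = returns.mean(axis=0), np.cov(returns.T)

    # Sample many random allocations; keep the lowest-risk one per return bin
    n_port = 100_000
    w = rng.dirichlet(np.ones(n_groups), size=n_port)   # weights sum to 1
    port_mu = w @ mu
    port_sd = np.sqrt(np.einsum('ij,jk,ik->i', w, cov, w))

    bins = np.digitize(port_mu, np.linspace(port_mu.min(), port_mu.max(), 30))
    frontier = [(port_mu[m].mean(), port_sd[m].min())
                for b in range(1, 31) if (m := bins == b).any()]
    for r, s in frontier[:5]:
        print(f"return {r:.4f}  min risk {s:.4f}")
    ```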

  14. Communications satellite system for Africa

    NASA Astrophysics Data System (ADS)

    Kriegl, W.; Laufenberg, W.

    1980-09-01

    Earlier requirement estimates were improved upon by contacting African administrations and organizations. An enormous demand is shown to exist for telephony and teletype services in rural areas. It is shown that educational television broadcasting should be realized in the current African transport and communications decade (1978-1987). Radio broadcasting is proposed in order to overcome illiteracy and to improve educational levels. The technical and commercial feasibility of the system is demonstrated by computer simulations which show how the required objectives can be fulfilled in conjunction with ground networks.

  15. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
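
    For Gaussian measurement noise, maximizing the likelihood of the observed response reduces to minimizing a sum-of-squared-residuals cost over candidate model parameters. The sketch below illustrates this output-error idea on a toy first-order model with two unknown derivatives; the true values, input, and noise level are made up for the demo, and real flight-data tools minimize the same kind of cost with gradient-based search rather than a grid.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated first-order response: v' = -a*v + b*delta, with true a=2, b=5
    dt, n = 0.01, 500
    a_true, b_true = 2.0, 5.0
    delta = np.sin(2.0 * np.arange(n) * dt)        # control input
    v = np.zeros(n)
    for k in range(n - 1):
        v[k + 1] = v[k] + dt * (-a_true * v[k] + b_true * delta[k])
    z = v + rng.normal(0, 0.01, n)                 # noisy measurements

    def cost(params):
        """Output-error cost: with Gaussian noise, minimizing the sum of
        squared residuals is equivalent to maximizing the likelihood."""
        a, b = params
        vh = np.zeros(n)
        for k in range(n - 1):
            vh[k + 1] = vh[k] + dt * (-a * vh[k] + b * delta[k])
        return np.sum((z - vh) ** 2)

    # coarse grid search over the two unknown derivatives
    grid = [(a, b) for a in np.linspace(1, 3, 41) for b in np.linspace(4, 6, 41)]
    print(min(grid, key=cost))                     # close to (2.0, 5.0)
    ```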

  16. Valuing Insect Pollination Services with Cost of Replacement

    PubMed Central

    Allsopp, Mike H.; de Lange, Willem J.; Veldtman, Ruan

    2008-01-01

    Value estimates of ecosystem goods and services are useful to justify the allocation of resources towards conservation, but inconclusive estimates risk unsustainable resource allocations. Here we present replacement costs as a more accurate value estimate of insect pollination as an ecosystem service, although this method could also be applied to other services. The importance of insect pollination to agriculture is unequivocal. However, whether this service is largely provided by wild pollinators (genuine ecosystem service) or managed pollinators (commercial service), and which of these requires immediate action amidst reports of pollinator decline, remains contested. If crop pollination is used to argue for biodiversity conservation, a clear distinction should be made between the values of managed and wild pollination services. Current methods either underestimate or overestimate the pollination service value, and make use of criticised general insect and managed pollinator dependence factors. We apply the theoretical concept of ascribing a value to a service by calculating the cost to replace it, as a novel way of valuing wild and managed pollination services. Adjusted insect and managed pollinator dependence factors were used to estimate the cost of replacing insect and managed pollination services for the Western Cape deciduous fruit industry of South Africa. Using pollen dusting and hand pollination as suitable replacements, we value pollination services significantly higher than current market prices for commercial pollination, although lower than traditional proportional estimates. The complexity associated with inclusive value estimation of pollination services required several defendable assumptions, but made estimates more inclusive than previous attempts. Consequently, this study provides the basis for continued improvement in context-specific pollination service value estimates. PMID:18781196

  17. An innovative localisation algorithm for railway vehicles

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.

    2014-11-01

    In modern railway automatic train protection and automatic train control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. The aim of this work has been to develop an innovative localisation algorithm for railway vehicles able to enhance the performance, in terms of speed and position estimation accuracy, of classical odometry algorithms such as the Italian Sistema Controllo Marcia Treno (SCMT). The proposed strategy consists of a sensor fusion between the information coming from a tachometer and an Inertial Measurement Unit (IMU). The sensor outputs have been simulated through a 3D multibody model of a railway vehicle. The work included the development of a custom IMU, designed by ECM S.p.a. to meet its industrial and business requirements. The industrial requirements have to be compliant with the European Train Control System (ETCS) standards: the European Rail Traffic Management System (ERTMS), a project developed by the European Union to improve interoperability among different countries, in particular as regards train control and command systems, fixes standard values for odometric (ODO) performance in terms of speed and travelled distance estimation. The reliability of the ODO estimation has to be assessed based on the allowed speed profiles. The results of the currently used ODO algorithms can be improved, especially under degraded adhesion conditions; it has been verified in the simulation environment that the results of the proposed localisation algorithm are always compliant with the ERTMS requirements. The estimation strategy performs well even under degraded adhesion conditions and could be put on board high-speed railway vehicles; it represents an accurate and reliable solution. The IMU board is tested via a dedicated Hardware in the Loop (HIL) test rig, which includes an industrial robot able to replicate the motion of the railway vehicle. The performance of the innovative localisation algorithm has been evaluated through the generated experimental outputs: the HIL test rig made it possible to test the proposed algorithm while avoiding expensive (in terms of time and cost) on-track tests, with encouraging results. In fact, the preliminary results show a significant improvement of the position and speed estimation performance compared to that obtained with the SCMT algorithms currently in use on the Italian railway network.
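
    The record names the fusion inputs (tachometer plus IMU) but not the algorithm itself; one simple baseline consistent with the problem, sketched below, is a complementary filter that integrates IMU acceleration at high frequency and pulls toward the tachometer at low frequency, so short wheel-slide episodes (where the tachometer under-reads) corrupt the speed estimate only slowly. All signals and the time constant are synthetic.

    ```python
    import numpy as np

    def fuse_speed(v_tacho, accel_imu, dt, tau=2.0):
        """Complementary filter: trust the tachometer at low frequency and the
        integrated IMU acceleration at high frequency."""
        alpha = tau / (tau + dt)
        v = np.zeros(len(v_tacho))
        v[0] = v_tacho[0]
        for k in range(1, len(v_tacho)):
            v_pred = v[k - 1] + accel_imu[k] * dt      # inertial propagation
            v[k] = alpha * v_pred + (1 - alpha) * v_tacho[k]
        return v

    dt = 0.01
    t = np.arange(0, 10, dt)
    v_tacho = np.full_like(t, 20.0)                    # true speed: 20 m/s
    v_tacho[(t > 4) & (t < 5)] = 15.0                  # wheel-slide under-read
    accel = np.zeros_like(t)                           # constant-speed run
    v_fused = fuse_speed(v_tacho, accel, dt)
    print(v_fused[int(4.5 / dt)])   # ~18.9: decays only slowly toward the fault
    ```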

  18. Non-linear effects of soda taxes on consumption and weight outcomes.

    PubMed

    Fletcher, Jason M; Frisvold, David E; Tefft, Nathan

    2015-05-01

    The potential health impacts of imposing large taxes on soda to improve population health have been of interest for over a decade. As estimates of the effects of existing soda taxes with low rates suggest little health improvement, recent proposals suggest that large taxes may be effective in reducing weight because of non-linear consumption responses or threshold effects. This paper tests this hypothesis in two ways. First, we estimate non-linear effects of taxes using the range of current rates. Second, we leverage the sudden, relatively large soda tax increase in two states during the early 1990s combined with new synthetic control methods useful for comparative case studies. Our findings suggest virtually no evidence of non-linear or threshold effects.

  19. Measuring premorbid IQ in traumatic brain injury: an examination of the validity of the Wechsler Test of Adult Reading (WTAR).

    PubMed

    Green, Robin E A; Melo, Brenda; Christensen, Bruce; Ngo, Le-Anh; Monette, Georges; Bradbury, Cheryl

    2008-02-01

    Estimation of premorbid IQ in traumatic brain injury (TBI) is clinically and scientifically valuable because it permits quantification of the cognitive impact of injury. This is achieved by comparing performance on tests of current ability to estimates of premorbid IQ, thereby enabling current capacity to be interpreted in light of preinjury ability. However, the validity of premorbid IQ tests that are commonly used for TBI has been questioned. In the present study, we examined the psychometric properties of a recently developed test, the Wechsler Test of Adult Reading (WTAR), which had yet to be examined for TBI. The cognitive performance of a group of 24 patients recovering from TBI (with a mean Glasgow Coma Scale score in the severely impaired range) was measured at 2 and 5 months postinjury. On both occasions, patients were administered three tests that have been used to measure premorbid IQ (the WTAR and the Vocabulary and Matrix Reasoning subtests of the Wechsler Adult Intelligence Scale, 3rd Edition, WAIS-III) and three tests of current ability (the Symbol Digit Modalities Test-Oral and the Similarities and Block Design subtests of the WAIS-III). We found that performance significantly improved on tests of current cognitive ability, confirming recovery. In contrast, stable performance was observed on the WTAR from Assessment 1 (M = 34.25/50) to Assessment 2 (M = 34.21/50; r = .970, p < .001). Mean improvement across assessments was negligible (t = -0.086, p = .47; Cohen's d = -.005), and minimal individual participant change was observed (modal scaled score change = 0). WTAR scores were also highly similar to scores on a demographic estimate of premorbid IQ. Thus, converging evidence (high stability during recovery from TBI and IQ estimates similar to those of a demographic equation) suggests that the WTAR is a valid measure of premorbid IQ for TBI. Where word pronunciation tests are indicated (i.e., in patients who speak and read English fluently), these results endorse the use of the WTAR for patients with TBI.

  20. The feasibility of a regional CTDIvol to estimate organ dose from tube current modulated CT exams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khatonabadi, Maryam; Kim, Hyun J.; Lu, Peiyun

    Purpose: In AAPM Task Group 204, the size-specific dose estimate (SSDE) was developed by providing size adjustment factors which are applied to the Computed Tomography (CT) standardized dose metric, CTDIvol. However, that work focused on fixed tube current scans and did not specifically address tube current modulation (TCM) scans, which are currently the majority of clinical scans performed. The purpose of this study was to extend the SSDE concept to account for TCM by investigating the feasibility of using anatomic and organ specific regions of scanner output to improve accuracy of dose estimates. Methods: Thirty-nine adult abdomen/pelvis and 32 chest scans from clinically indicated CT exams acquired on a multidetector CT using TCM were obtained with Institutional Review Board approval for generating voxelized models. Along with image data, raw projection data were obtained to extract TCM functions for use in Monte Carlo simulations. Patient size was calculated using the effective diameter described in TG 204. In addition, the scanner-reported CTDIvol (CTDIvol,global) was obtained for each patient, which is based on the average tube current across the entire scan. For the abdomen/pelvis scans, liver, spleen, and kidneys were manually segmented from the patient datasets; for the chest scans, lungs and, for female models only, glandular breast tissue were segmented. For each patient, organ doses were estimated using Monte Carlo methods. To investigate the utility of regional measures of scanner output, regional and organ anatomic boundaries were identified from image data and used to calculate regional and organ-specific average tube current values. From these regional and organ-specific averages, CTDIvol values, referred to as regional and organ-specific CTDIvol, were calculated for each patient. Using an approach similar to TG 204, all CTDIvol values were used to normalize simulated organ doses, and the ability of each normalized dose to correlate with patient size was investigated. Results: For all five organs, the correlations with patient size increased when organ doses were normalized by regional and organ-specific CTDIvol values. For example, when estimating dose to the liver, CTDIvol,global yielded an R^2 value of 0.26, which improved to 0.77 and 0.86 when using the regional and organ-specific CTDIvol for abdomen and liver, respectively. For breast dose, the global CTDIvol yielded an R^2 value of 0.08, which improved to 0.58 and 0.83 when using the regional and organ-specific CTDIvol for chest and breasts, respectively. The R^2 values also increased once the thoracic models were separated for the analysis into females and males, indicating differences between genders in this region not explained by a simple measure of effective diameter. Conclusions: This work demonstrated the utility of regional and organ-specific CTDIvol as normalization factors when using TCM. It was demonstrated that CTDIvol,global is not an effective normalization factor in TCM exams where attenuation (and therefore tube current) varies considerably throughout the scan, such as abdomen/pelvis and even thorax. These exams can be more accurately assessed for dose using regional CTDIvol descriptors that account for local variations in scanner output present when TCM is employed.

  1. Pediatric sepsis.

    PubMed

    Mathias, Brittany; Mira, Juan C; Larson, Shawn D

    2016-06-01

    Sepsis is the leading cause of pediatric death worldwide. In the United States alone, 72 000 children are hospitalized for sepsis annually, with a reported mortality rate of 25% and an economic cost estimated at $4.8 billion. However, it is only recently that the definition and management of pediatric sepsis have been recognized as distinct from adult sepsis. The definition of pediatric sepsis is currently in a state of evolution, and there is a large disconnect between the clinical and research definitions of sepsis, which hampers the application of research findings in clinical practice. Despite this, it is the speed of diagnosis and the timely implementation of current treatment guidelines that have been shown to improve outcomes. However, adherence to treatment guidelines is currently low, and it is only through the implementation of protocols that improved care and outcomes have been demonstrated. The current management of pediatric sepsis is largely based on adaptations from adult sepsis treatment; however, distinct physiology demands more prospective pediatric trials to tailor management to the pediatric population. Adherence to current and emerging practice guidelines will require that protocolized care pathways become commonplace.

  2. 'How many calories are in my burrito?' Improving consumers' understanding of energy (calorie) range information.

    PubMed

    Liu, Peggy J; Bettman, James R; Uhalde, Arianna R; Ubel, Peter A

    2015-01-01

    Energy (calorie) ranges currently appear on menu boards for customized menu items and will likely appear throughout the USA when menu-labelling legislation is implemented. Consumer welfare advocates have questioned whether energy ranges enable accurate energy estimates. In four studies, we examined: (i) whether energy range information improves energy estimation accuracy; (ii) whether misestimates persist because consumers misinterpret the meaning of the energy range end points; and (iii) whether energy estimates can be made more accurate by providing explicit information about the contents of items at the end points. Four studies were conducted, all randomized experiments. Study 1 took place outside a Chipotle restaurant. Studies 2 to 4 took place online. Participants in study 1 were customers exiting a Chipotle restaurant (n = 306). Participants in studies 2 (n = 205), 3 (n = 290) and 4 (n = 874) were from an online panel. Energy ranges reduced energy misestimation across different menu items (studies 1-4). One cause of remaining misestimation was misinterpretation of the low end point's meaning (study 2). Providing explicit information about the contents of menu items associated with energy range end points further reduced energy misestimation (study 3) across different menu items (study 4). Energy range information improved energy estimation accuracy, and defining the meaning of the end points further improved accuracy. We suggest that when restaurants present energy range information to consumers, they should explicitly define the meaning of the end points.

  3. Striking Distance Determined From High-Speed Videos and Measured Currents in Negative Cloud-to-Ground Lightning

    NASA Astrophysics Data System (ADS)

    Visacro, Silverio; Guimaraes, Miguel; Murta Vale, Maria Helena

    2017-12-01

    First and subsequent return strokes' striking distances (SDs) were determined for negative cloud-to-ground flashes from high-speed videos exhibiting the development of positive and negative leaders and from the pre-return-stroke phase of currents measured along a short tower. In order to improve the results, a new criterion was used for the initiation and propagation of the sustained upward connecting leader, consisting of a 4 A continuous current threshold. An advanced approach developed from the combined use of this criterion and a reverse propagation procedure, which considers the calculated propagation speeds of the leaders, was applied and revealed that SDs determined solely from the first video frame showing the upward leader can be significantly underestimated. An original approach was proposed for a rough estimate of first strokes' SD using solely records of current. This approach combines the 4 A criterion and a representative composite three-dimensional propagation speed of 0.34 × 10^6 m/s for the leaders over the last 300 m of propagated distance. SDs determined under this approach were shown to be consistent with those of the advanced procedure. This approach was applied to determine the SD of 17 first return strokes of negative flashes measured at MCS, covering a wide peak-current range, from 18 to 153 kA. The estimated SDs exhibit very high dispersion and reveal great differences relative to the SDs estimated for subsequent return strokes and strokes in triggered lightning.

  4. Design and validation of a high-order weighted-frequency fourier linear combiner-based Kalman filter for parkinsonian tremor estimation.

    PubMed

    Zhou, Y; Jenkins, M E; Naish, M D; Trejos, A L

    2016-08-01

    The design of a tremor estimator is an important part of designing mechanical tremor suppression orthoses. A number of tremor estimators have been developed and applied under the assumption that tremor is a mono-frequency signal. However, recent experimental studies have shown that Parkinsonian tremor consists of multiple frequencies, and that the second and third harmonics make a large contribution to the tremor. Thus, current estimators may have limited performance in estimating the tremor harmonics. In this paper, a high-order tremor estimation algorithm is proposed and compared with its lower-order counterpart and a widely used estimator, the Weighted-frequency Fourier Linear Combiner (WFLC), using 18 Parkinsonian tremor data sets. The results show that the proposed estimator performs better than its lower-order counterpart and the WFLC. The percentage estimation accuracy of the proposed estimator is 85±2.9%, an average improvement of 13% over the lower-order counterpart. The proposed algorithm holds promise for use in wearable tremor suppression devices.
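
    For reference, a compact sketch of the baseline WFLC recursion that the paper compares against (the paper's contribution is a higher-order Kalman-filter-based estimator built around such harmonic models). The step sizes, initial frequency, and test signal below are illustrative assumptions, not the paper's settings.

        import numpy as np

        def wflc(s, fs, n_harm=2, mu_w=0.01, mu_f=1e-4, f0=4.0):
            """Weighted-frequency Fourier linear combiner with n_harm harmonics."""
            w = np.zeros(2 * n_harm)              # Fourier coefficient weights
            omega = 2.0 * np.pi * f0 / fs         # frequency estimate, rad/sample
            phase = 0.0
            r = np.arange(1, n_harm + 1)
            y = np.zeros(len(s))
            for k in range(len(s)):
                phase += omega
                x = np.concatenate([np.sin(r * phase), np.cos(r * phase)])
                y[k] = w @ x
                err = s[k] - y[k]
                # adapt the fundamental frequency (gradient step on phase error)
                omega += 2.0 * mu_f * err * np.sum(
                    r * (w[:n_harm] * x[n_harm:] - w[n_harm:] * x[:n_harm]))
                w += 2.0 * mu_w * err * x         # LMS step on the weights
            return y

        # Track a synthetic 5 Hz tremor with a weak second harmonic.
        fs = 100.0
        t = np.arange(0.0, 10.0, 1.0 / fs)
        tremor = np.sin(2 * np.pi * 5.0 * t) + 0.3 * np.sin(2 * np.pi * 10.0 * t)
        estimate = wflc(tremor, fs)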

  5. Sound management may sequester methane in grazed rangeland ecosystems

    PubMed Central

    Wang, Chengjie; Han, Guodong; Wang, Shiping; Zhai, Xiajie; Brown, Joel; Havstad, Kris M.; Ma, Xiuzhi; Wilkes, Andreas; Zhao, Mengli; Tang, Shiming; Zhou, Pei; Jiang, Yuanyuan; Lu, Tingting; Wang, Zhongwu; Li, Zhiguo

    2014-01-01

    Considering their contribution to global warming, the sources and sinks of methane (CH4) should be accounted for when undertaking a greenhouse gas inventory for grazed rangeland ecosystems. The aim of this study was to evaluate the mitigation potential of current ecological management programs implemented in the main rangeland regions of China. The influences of rangeland improvement, utilization and livestock production on CH4 flux/emission were assessed to estimate CH4 reduction potential. Results indicate that the grazed rangeland ecosystem is currently a net source of atmospheric CH4. However, there is potential to convert the ecosystem to a net sink by improving management practices. Previous assessments of the capacity for CH4 uptake in grazed rangeland ecosystems have not considered improved livestock management practices and thus underestimated the potential for CH4 uptake. Optimal fertilization, rest and light grazing, and intensification of livestock management contribute significantly to the mitigation potential. PMID:24658176

  6. Sound management may sequester methane in grazed rangeland ecosystems.

    PubMed

    Wang, Chengjie; Han, Guodong; Wang, Shiping; Zhai, Xiajie; Brown, Joel; Havstad, Kris M; Ma, Xiuzhi; Wilkes, Andreas; Zhao, Mengli; Tang, Shiming; Zhou, Pei; Jiang, Yuanyuan; Lu, Tingting; Wang, Zhongwu; Li, Zhiguo

    2014-03-24

    Considering their contribution to global warming, the sources and sinks of methane (CH4) should be accounted for when undertaking a greenhouse gas inventory for grazed rangeland ecosystems. The aim of this study was to evaluate the mitigation potential of current ecological management programs implemented in the main rangeland regions of China. The influences of rangeland improvement, utilization and livestock production on CH4 flux/emission were assessed to estimate CH4 reduction potential. Results indicate that the grazed rangeland ecosystem is currently a net source of atmospheric CH4. However, there is potential to convert the ecosystem to a net sink by improving management practices. Previous assessments of the capacity for CH4 uptake in grazed rangeland ecosystems have not considered improved livestock management practices and thus underestimated the potential for CH4 uptake. Optimal fertilization, rest and light grazing, and intensification of livestock management contribute significantly to the mitigation potential.

  7. Improving Water Balance Estimation in the Nile by Combining Remote Sensing and Hydrological Modelling: a Template for Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Gleason, C. J.; Wada, Y.; Wang, J.

    2017-12-01

    Declining gauging infrastructure and fractious water politics have decreased available information about river flows globally, especially in international river basins. Remote sensing and water balance modelling are frequently cited as potential solutions, but these techniques largely rely on the same declining gauge data to constrain or parameterize discharge estimates, creating a circular approach to estimating discharge that is inapplicable to ungauged basins. To address this, we here combine a discontinued gauge, remotely sensed discharge estimates made via at-many-stations hydraulic geometry (AMHG) and Landsat data, and the PCR-GLOBWB hydrological model to estimate discharge for an ungauged time period for the Lower Nile (1978-present). Specifically, we first estimate initial discharges from 86 Landsat images and AMHG (1984-2015), and then use these flow estimates to tune the hydrologic model. Our tuning methodology is purposefully simple and can be easily applied to any model without the need for calibration/parameterization. The resulting tuned modelled hydrograph shows large improvement in flow magnitude over previous modelled hydrographs, and validation of tuned monthly model output flows against the historical gauge yields an RMSE of 343 m3/s (33.7%). By contrast, the original simulation had an order-of-magnitude flow error. This improvement is substantial but not perfect: modelled flows have a one- to two-month wet-season lag and a negative bias. More sophisticated model calibration and training (e.g. data assimilation) is needed to improve upon our results; however, the results achieved by coupling physical models and remote sensing are a promising first step and proof of concept toward future modelling of ungauged flows. This is especially true as massive cloud computing via Google Earth Engine makes our method easily applicable to any basin without current gauges. Finally, we purposefully do not offer prescriptive solutions for Nile management, and rather hope that the methods demonstrated herein can prove useful to river stakeholders in managing their own water.
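
    A deliberately simple illustration of the kind of tuning step described (the numbers are invented, and the paper's actual tuning may differ in detail): rescale the modelled hydrograph so that its mean matches the mean of the AMHG discharge estimates on coincident dates.

        import numpy as np

        # Hypothetical coincident-date flows, m3/s.
        modelled = np.array([1200.0, 1500.0, 3100.0, 2800.0, 1900.0])  # PCR-GLOBWB
        amhg = np.array([2000.0, 2400.0, 5200.0, 4500.0, 3000.0])      # Landsat/AMHG

        gain = amhg.mean() / modelled.mean()   # single multiplicative correction
        tuned = gain * modelled                # tuned hydrograph on those dates
        print(f"tuning gain = {gain:.2f}")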

  8. Implementation of Hospital Computerized Physician Order Entry Systems in a Rural State: Feasibility and Financial Impact

    PubMed Central

    Ohsfeldt, Robert L.; Ward, Marcia M.; Schneider, John E.; Jaana, Mirou; Miller, Thomas R.; Lei, Yang; Wakefield, Douglas S.

    2005-01-01

    Objective The aim of this study was to estimate the costs of implementing computerized physician order entry (CPOE) systems in hospitals in a rural state and to evaluate the financial implications of statewide CPOE implementation. Methods A simulation model was constructed using estimates of initial and ongoing CPOE costs mapped onto all general hospitals in Iowa by bed quantity and current clinical information system (CIS) status. CPOE cost estimates were obtained from a leading CPOE vendor. Current CIS status was determined through a mail survey of Iowa hospitals. Patient care revenue and operating cost data published by the Iowa Hospital Association were used to simulate the financial impact of CPOE adoption on hospitals. Results CPOE implementation would dramatically increase operating costs for rural and critical access hospitals in the absence of substantial cost savings associated with improved efficiency or improved patient safety. For urban and rural referral hospitals, the cost impact is less dramatic but still substantial. However, relatively modest benefits in the form of patient care cost savings or revenue enhancement would be sufficient to offset CPOE costs for these larger hospitals. Conclusion Implementation of CPOE in rural or critical access hospitals would likely impose a net increase in operating costs. Adoption of CPOE may be financially infeasible for these small hospitals in the absence of increases in hospital payments or ongoing subsidies from third parties. PMID:15492033

  9. Applications of the SWOT Mission to Reservoirs in the Mekong River Basin

    NASA Astrophysics Data System (ADS)

    Bonnema, M.; Hossain, F.

    2017-12-01

    The forthcoming Surface Water and Ocean Topography (SWOT) mission has the potential to significantly improve our ability to observe artificial reservoirs globally from a remote sensing perspective. By providing simultaneous estimates of reservoir water surface extent and elevation with near global coverage, reservoir storage changes can be estimated. Knowing how reservoir storage changes over time is critical for understanding reservoir impacts on river systems. In data-limited regions, remote sensing is often the only viable method of retrieving such information about reservoir operations. When SWOT launches in 2021, it will join an array of satellite sensors with long histories of reservoir observation and monitoring capabilities. There are many potential synergies in the complementary use of future SWOT observations with observations from current satellite sensors. The work presented here explores the potential benefits of utilizing SWOT observations over 20 reservoirs in the Mekong River Basin. The SWOT hydrologic simulator, developed by NASA Jet Propulsion Laboratory, is used to generate realistic SWOT observations, which are then inserted into a previously established remote sensing modeling framework of the 20 Mekong Basin reservoirs. This framework currently combines data from Landsat missions, Jason radar altimeters, and the Shuttle Radar Topography Mission (SRTM) to provide monthly estimates of reservoir storage change. The incorporation of SWOT-derived reservoir surface area and elevation into the model is explored in an effort to improve both the accuracy and temporal resolution of observed reservoir operations.
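
    The storage-change calculation that simultaneous area and elevation estimates enable is essentially trapezoidal; a sketch under that standard approximation (not necessarily the exact formula in the authors' framework), with invented reservoir numbers:

        def storage_change(a1, h1, a2, h2):
            # Treat the bathymetry between the two water levels as a frustum
            # whose faces are the two observed surface areas.
            # a1, a2: surface areas (m^2); h1, h2: surface elevations (m).
            return 0.5 * (a1 + a2) * (h2 - h1)  # m^3

        # Example: area grows from 40 to 46 km^2 as the level rises 1.5 m.
        dv = storage_change(40e6, 102.0, 46e6, 103.5)
        print(f"storage change ~ {dv / 1e6:.0f} million m^3")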

  10. Who, What, When, Where? Determining the Health Implications of Wildfire Smoke Exposure

    NASA Astrophysics Data System (ADS)

    Ford, B.; Lassman, W.; Gan, R.; Burke, M.; Pfister, G.; Magzamen, S.; Fischer, E. V.; Volckens, J.; Pierce, J. R.

    2016-12-01

    Exposure to poor air quality is associated with negative impacts on human health. A large natural source of PM in the western U.S. is wildland fires. Accurately attributing health endpoints to wildland-fire smoke requires a determination of the exposed population. This is a difficult endeavor because most current methods for monitoring air quality lack high temporal and spatial resolution. Therefore, there is a growing effort to include multiple datasets and create blended products of smoke exposure that exploit the strengths of each dataset. In this work, we combine model (WRF-Chem) simulations, NASA satellite (MODIS) observations, and in-situ surface monitors to improve exposure estimates. We will also introduce a social-media dataset of self-reported smoke/haze/pollution to improve population-level exposure estimates for the summer of 2015. Finally, we use these detailed exposure estimates in different epidemiologic study designs to provide an in-depth understanding of the role wildfire exposure plays in health outcomes.
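
    One common way to blend several exposure estimates is inverse-variance weighting; a toy sketch with invented values and error assumptions (the actual blended products weight the model, satellite, and monitor data with far more care):

        import numpy as np

        # One location and hour: WRF-Chem, MODIS-derived, and monitor PM2.5.
        vals = np.array([42.0, 55.0, 48.0])   # ug/m^3
        sig = np.array([15.0, 10.0, 5.0])     # assumed 1-sigma errors, ug/m^3

        w = 1.0 / sig**2                      # weight each source by its precision
        blend = np.sum(w * vals) / np.sum(w)
        print(f"blended PM2.5 ~ {blend:.1f} ug/m^3")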

  11. Improved method for retinotopy constrained source estimation of visual evoked responses

    PubMed Central

    Hagler, Donald J.; Dale, Anders M.

    2011-01-01

    Retinotopy constrained source estimation (RCSE) is a method for non-invasively measuring the time courses of activation in early visual areas using magnetoencephalography (MEG) or electroencephalography (EEG). Unlike conventional equivalent current dipole or distributed source models, the use of multiple, retinotopically-mapped stimulus locations to simultaneously constrain the solutions allows for the estimation of independent waveforms for visual areas V1, V2, and V3, despite their close proximity to each other. We describe modifications that improve the reliability and efficiency of this method. First, we find that increasing the number and size of visual stimuli results in source estimates that are less susceptible to noise. Second, to create a more accurate forward solution, we have explicitly modeled the cortical point spread of individual visual stimuli. Dipoles are represented as extended patches on the cortical surface, which take into account the estimated receptive field size at each location in V1, V2, and V3 as well as the contributions from contralateral, ipsilateral, dorsal, and ventral portions of the visual areas. Third, we implemented a map fitting procedure to deform a template to match individual subject retinotopic maps derived from functional magnetic resonance imaging (fMRI). This improves the efficiency of the overall method by allowing automated dipole selection, and it makes the results less sensitive to physiological noise in fMRI retinotopy data. Finally, the iteratively reweighted least squares (IRLS) method was used to reduce the contribution from stimulus locations with high residual error for robust estimation of visual evoked responses. PMID:22102418
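
    A generic sketch of the IRLS step mentioned last (the weighting function used in the paper may differ; this one implements an L1-style reweighting that down-weights observations with large residuals):

        import numpy as np

        def irls(A, b, n_iter=20, eps=1e-6):
            # Start from ordinary least squares, then reweight by 1/|residual|.
            x = np.linalg.lstsq(A, b, rcond=None)[0]
            for _ in range(n_iter):
                r = A @ x - b
                w = 1.0 / np.maximum(np.abs(r), eps)  # large residual -> low weight
                W = np.diag(w)
                x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
            return x

        # Example: fit a line through data with one gross outlier.
        A = np.column_stack([np.ones(6), np.arange(6.0)])
        b = np.array([0.1, 1.0, 2.1, 2.9, 25.0, 5.1])  # 25.0 is the outlier
        print(irls(A, b))  # close to intercept 0, slope 1 despite the outlier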

  12. RETROFIT COSTS FOR SO2 AND NOX CONTROL OPTIONS AT 200 COAL-FIRED PLANTS, VOLUME I - INTRODUCTION AND METHODOLOGY

    EPA Science Inventory

    The report gives results of a study, the objective of which was to significantly improve engineering cost estimates currently being used to evaluate the economic effects of applying SO2 and NOx controls at 200 large SO2-emitting coal-fired utility plants. To accomplish the object...

  13. Cost of Meeting House and Senate Proposed Head Start Teacher Qualification Requirements

    ERIC Educational Resources Information Center

    Center for Law and Social Policy, Inc. (CLASP), 2005

    2005-01-01

    This analysis provides a preliminary estimate of the necessary level of funding needed to raise the degree qualifications to meet the requirements in the Head Start reauthorization legislation currently proposed in the House and Senate. While each bill is designed to improve the quality of Head Start programs by requiring an increase in the number…

  14. Modeling GHG Emissions and Carbon Changes in Agricultural and Forest Systems to Guide Mitigation and Adaptation: Synthesis and Future Needs

    USDA-ARS?s Scientific Manuscript database

    Agricultural production systems and land use change for agriculture and forestry are important sources of anthropogenic greenhouse gas (GHG) emissions. Recent commitments by the European Union, the United States, and China to reduce GHG emissions highlight the need to improve estimates of current em...

  15. 78 FR 49274 - Agency Information Collection Activities: Submission to OMB for Review and Approval; Public...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-13

    ... lung disease which improves the miners' quality of life and reduces economic costs associated with... measures. HRSA is currently working on revising the measures for the entire program, and, as the grantee... disclose the information. The total annual burden hours estimated for this ICR are summarized in the table...

  16. The November 15, 2006 Kuril Islands-Generated Tsunami in Crescent City, California

    NASA Astrophysics Data System (ADS)

    Dengler, L.; Uslu, B.; Barberopoulou, A.; Yim, S. C.; Kelly, A.

    2009-02-01

    On November 15, 2006, Crescent City in Del Norte County, California was hit by a tsunami generated by a Mw 8.3 earthquake in the central Kuril Islands. Strong currents that persisted over an eight-hour period damaged floating docks and several boats and caused an estimated $9.2 million in losses. Initial tsunami alert bulletins issued by the West Coast Alaska Tsunami Warning Center (WCATWC) in Palmer, Alaska were cancelled about three and a half hours after the earthquake, nearly five hours before the first surges reached Crescent City. The largest amplitude wave, 1.76-meter peak to trough, was the sixth cycle and arrived over two hours after the first wave. Strong currents, estimated at over 10 knots, damaged or destroyed three docks and caused cracks in most of the remaining docks. As a result of the November 15 event, WCATWC changed the definition of Advisory from a region-wide alert bulletin meaning that a potential tsunami is 6 hours or further away to a localized alert that tsunami water heights may approach warning-level thresholds in specific, vulnerable locations like Crescent City. On January 13, 2007 a similar Kuril event occurred, and hourly conferences between the warning center and regional weather forecast offices were held, with a considerable improvement in the flow of information to local coastal jurisdictions. The event highlighted the vulnerability of harbors to a relatively modest tsunami and underscored the need to improve public education regarding the duration of tsunami hazards, improve dialog between tsunami warning centers and local jurisdictions, and better understand the currents produced by tsunamis in harbors.

  17. Gravitational wave searches using the DSN (Deep Space Network)

    NASA Technical Reports Server (NTRS)

    Nelson, S. J.; Armstrong, J. W.

    1988-01-01

    The Deep Space Network Doppler spacecraft link is currently the only method available for broadband gravitational wave searches in the 0.01 to 0.001 Hz frequency range. The DSN's role in the worldwide search for gravitational waves is described by first summarizing from the literature current theoretical estimates of gravitational wave strengths and time scales from various astrophysical sources. Current and future detection schemes for ground based and space based detectors are then discussed. Past, present, and future planned or proposed gravitational wave experiments using DSN Doppler tracking are described. Lastly, some major technical challenges to improve gravitational wave sensitivities using the DSN are discussed.

  18. Bearing failure detection of micro wind turbine via power spectral density analysis for stator current signals spectrum

    NASA Astrophysics Data System (ADS)

    Mahmood, Faleh H.; Kadhim, Hussein T.; Resen, Ali K.; Shaban, Auday H.

    2018-05-01

    Failures such as air-gap eccentricity, rubbing, and scraping between the stator and rotor of a generator arise unavoidably and may have severe consequences for a wind turbine. Detecting and identifying bearing failures, a common underlying cause, is therefore important for improving operational reliability. This paper applies a power spectral density analysis method to detect inner-race and outer-race bearing failures in a micro wind turbine from the estimated stator current signal of the generator. The results show that the method is well suited and effective for bearing failure detection.
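
    A sketch of the detection idea with made-up numbers: race defects modulate the stator current, producing sidebands around the electrical supply frequency offset by the characteristic bearing defect frequency, and Welch's method estimates the PSD in which those sidebands are sought. The frequencies and amplitudes below are assumptions, not the paper's turbine.

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(0)
        fs_elec, fc = 50.0, 87.3    # supply and assumed outer-race defect freq, Hz
        f_samp = 5000.0
        t = np.arange(0.0, 5.0, 1.0 / f_samp)
        current = np.sin(2 * np.pi * fs_elec * t)                  # healthy current
        current += 0.02 * np.sin(2 * np.pi * (fs_elec + fc) * t)   # fault sideband
        current += 0.01 * rng.standard_normal(t.size)              # noise

        f, pxx = welch(current, fs=f_samp, nperseg=4096)
        mask = (f > fs_elec + 10.0) & (f < 300.0)                  # skip supply line
        side = f[mask][np.argmax(pxx[mask])]
        print(f"strongest sideband near {side:.1f} Hz (expect ~{fs_elec + fc:.1f})")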

  19. An Innovative Method for Estimating Soil Retention at a ...

    EPA Pesticide Factsheets

    Planning for a sustainable future should include an accounting of services currently provided by ecosystems, such as erosion control. Retention of soil improves fertility, increases water retention, and decreases sedimentation in streams and rivers. Landscape patterns that facilitate these services could help reduce costs for flood control and dredging of reservoirs and waterways, while maintaining habitat for fish and other species important to recreational and tourism industries. Landscape-scale geospatial data available for the continental United States were leveraged to estimate sediment erosion (RUSLE-based; Renard et al., 1997), employing recent geospatial techniques of sediment delivery ratio (SDR) estimation (Cavalli et al., 2013). The approach was designed to derive a quantitative approximation of the ecological services provided by vegetative cover, management practices, and other surface features with respect to protecting soils from the erosion processes of detachment, transport, and deposition. Quantities of soil retained on the landscape and potential erosion for multiple land cover scenarios relative to current (NLCD 2011) conditions were calculated for each calendar month, and summed to yield annual estimations at a 30-meter grid cell. Continental-scale data used included MODIS NDVI data (2000-2014) to estimate monthly USLE C-factors, gridded soil survey geographic (gSSURGO) soils data (annual USLE K factor), PRISM rainfall data (monthly USLE
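
    A single-cell sketch of the RUSLE-with-SDR bookkeeping described (all factor values invented for illustration): gross erosion is A = R·K·LS·C·P, the SDR scales what reaches the stream, and retention can be credited as the delivery avoided relative to a worst-case cover scenario.

        # Monthly erosivity and site factors for one 30 m cell (all assumed).
        R, K, LS, P = 120.0, 0.28, 1.6, 1.0
        c_actual, c_bare = 0.05, 1.0    # NDVI-derived cover vs bare-soil worst case
        sdr = 0.3                       # sediment delivery ratio (assumed)

        a_actual = R * K * LS * c_actual * P   # gross erosion, t/ha for the month
        a_bare = R * K * LS * c_bare * P
        retained = (a_bare - a_actual) * sdr   # avoided sediment delivery, t/ha
        print(f"soil retention credit ~ {retained:.1f} t/ha")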

  20. [IR spectral-analysis-based range estimation for an object with small temperature difference from background].

    PubMed

    Fu, Xiao-Ning; Wang, Jie; Yang, Lin

    2013-01-01

    Estimating the distance of an object from the transmission characteristics of infrared radiation is a typical passive ranging technology and a hotspot in electro-optic countermeasures. Because no energy is transmitted during detection, this ranging technology significantly enhances the penetration capability and infrared concealment of missiles or unmanned aerial vehicles. Given the limitations of existing passive ranging systems in ranging an oncoming target with a small temperature difference from the background, an improved distance estimation scheme is proposed. This article begins by introducing the concept of the signal transfer function, clarifies the working curve of the current algorithm, and points out that the estimated distance is not unique owing to the inherent nonlinearity of the working curve. A new distance calculation algorithm was obtained through a nonlinear correction technique. The resulting ranging formula uses sensing information in the 3-5 and 8-12 μm bands combined with background temperature and field meteorological conditions. The authors' study shows that the ranging error can be kept around the 10% level when the apparent temperature difference between target and background is within ±5 K and the error in estimating background temperature is no more than ±15 K.

  1. Magnetospheric Multiscale (MMS) Mission Attitude Ground System Design

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph E.; Superfin, Emil; Raymond, Juan C.

    2011-01-01

    This paper presents an overview of the attitude ground system (AGS) currently under development for the Magnetospheric Multiscale (MMS) mission. The primary responsibilities for the MMS AGS are definitive attitude determination, validation of the onboard attitude filter, and computation of certain parameters needed to improve maneuver performance. For these purposes, the ground support utilities include attitude and rate estimation for validation of the onboard estimates, sensor calibration, inertia tensor calibration, accelerometer bias estimation, center of mass estimation, and production of a definitive attitude history for use by the science teams. Much of the AGS functionality already exists in utilities used at NASA's Goddard Space Flight Center with support heritage from many other missions, but new utilities are being created specifically for the MMS mission, such as for the inertia tensor, accelerometer bias, and center of mass estimation. Algorithms and test results for all the major AGS subsystems are presented here.

  2. Current Pressure Transducer Application of Model-based Prognostics Using Steady State Conditions

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher; Daigle, Matthew J.

    2014-01-01

    Prognostics is the process of predicting a system's future states, health degradation/wear, and remaining useful life (RUL). This information plays an important role in preventing failure, reducing downtime, scheduling maintenance, and improving system utility. Prognostics relies heavily on wear estimation. In some components, the sensors used to estimate wear may not be fast enough to capture brief transient states that are indicative of wear. For this reason it is beneficial to be capable of detecting and estimating the extent of component wear using steady-state measurements. This paper details a method for estimating component wear using steady-state measurements, describes how this is used to predict future states, and presents a case study of a current/pressure (I/P) Transducer. I/P Transducer nominal and off-nominal behaviors are characterized using a physics-based model, and validated against expected and observed component behavior. This model is used to map observed steady-state responses to corresponding fault parameter values in the form of a lookup table. This method was chosen because of its fast, efficient nature, and its ability to be applied to both linear and non-linear systems. Using measurements of the steady state output, and the lookup table, wear is estimated. A regression is used to estimate the wear propagation parameter and characterize the damage progression function, which are used to predict future states and the remaining useful life of the system.
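
    A sketch of the lookup-table idea with invented numbers: the physics model is run offline at several fault-parameter values to tabulate steady-state outputs, observed steady states are inverted through the table, and a regression over time gives the wear-propagation rate used to project remaining useful life. The wear limit, table values, and observations below are all hypothetical.

        import numpy as np

        wear_grid = np.array([0.0, 0.1, 0.2, 0.3, 0.4])        # fault parameter
        steady_out = np.array([9.00, 8.72, 8.41, 8.05, 7.62])  # modelled output, psi

        def wear_from_output(y_obs):
            # Invert the monotonic table: steady-state output -> wear estimate.
            return np.interp(y_obs, steady_out[::-1], wear_grid[::-1])

        t_obs = np.array([0.0, 100.0, 200.0, 300.0])           # operating hours
        y_obs = np.array([8.98, 8.80, 8.60, 8.38])             # observed steady states
        wear = wear_from_output(y_obs)
        rate = np.polyfit(t_obs, wear, 1)[0]                   # wear per hour
        rul = (0.4 - wear[-1]) / rate                          # hours to wear limit
        print(f"estimated RUL ~ {rul:.0f} h")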

  3. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation.

    PubMed

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.
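
    A minimal bank of tonotopically tuned nonlinear oscillators in the spirit of the model described; the Hopf-style dynamics, parameter values, and split-step integration below are simplified stand-ins for the paper's gradient frequency neural network, with oscillator amplitude read as pitch salience.

        import numpy as np

        fs = 4000.0                                     # sample rate, Hz
        dt = 1.0 / fs
        freqs = np.array([220.0, 261.6, 329.6, 440.0])  # oscillator tunings, Hz
        alpha, beta = -0.5, -10.0                       # damping, amplitude limiting

        t = np.arange(0.0, 1.0, dt)
        x = 0.1 * np.sin(2 * np.pi * 261.6 * t)         # drive the bank with middle C

        z = np.full(freqs.size, 1e-3 + 0j)              # oscillator states
        lin = np.exp((alpha + 2j * np.pi * freqs) * dt)
        for xk in x:
            z = z * lin                                 # exact linear decay/rotation
            z = z + dt * (beta * np.abs(z) ** 2 * z + xk)  # nonlinearity + input

        for f, s in zip(freqs, np.abs(z)):
            print(f"{f:6.1f} Hz  salience {s:.4f}")     # 261.6 Hz stands out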

  4. Using Remotely Sensed Information for Near Real-Time Landslide Hazard Assessment

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia; Adler, Robert; Peters-Lidard, Christa

    2013-01-01

    The increasing availability of remotely sensed precipitation and surface products provides a unique opportunity to explore how landslide susceptibility and hazard assessment may be approached at larger spatial scales with higher resolution remote sensing products. A prototype global landslide hazard assessment framework has been developed to evaluate how landslide susceptibility and satellite-derived precipitation estimates can be used to identify potential landslide conditions in near-real time. Preliminary analysis of this algorithm suggests that forecasting errors are geographically variable due to the resolution and accuracy of the current susceptibility map and the application of satellite-based rainfall estimates. This research is currently working to improve the algorithm by considering higher spatial and temporal resolution landslide susceptibility information and testing different rainfall triggering thresholds, antecedent rainfall scenarios, and various surface products at regional and global scales.

  5. Merging Satellite Precipitation Products for Improved Streamflow Simulations

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Massari, C.; Barbetta, S.; Camici, S.; Brocca, L.

    2017-12-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning and the forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements to obtain an estimate of the precipitation fallen within the interval between two satellite overpasses. As a result, the nature of the measurement is different from and complementary to that of classical precipitation products and could provide a valid perspective with which to substitute for or improve current rainfall estimates. Therefore, we propose to merge SM2RAIN and the widely used TMPA 3B42RT product across Italy for a 6-year period (2010-2015) at daily/0.25° temporal/spatial resolution. Two conceptually different merging techniques are compared to each other and evaluated in terms of different statistical metrics, including hit bias, threat score, false alarm rates, and missed rainfall volumes. The first is based on the maximization of the temporal correlation with a reference dataset, while the second is based on a Bayesian approach, which provides a probabilistic satellite precipitation estimate derived from the joint probability distribution of observations and satellite estimates. The merged precipitation products show better performance than the parent satellite-based products in terms of categorical statistics, as well as bias reduction and correlation coefficient, with the Bayesian approach being superior to the other methods. A case study in the Tiber river basin is also presented to discuss the performance of forcing a hydrological model with the merged satellite precipitation product to simulate streamflow time series.
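
    The core of the bottom-up idea can be sketched in a few lines (simplified from the published SM2RAIN formulation, which also accounts for drainage and evapotranspiration losses; Z and the soil moisture series are invented): rainfall between overpasses is inferred from the rise in relative soil moisture scaled by an effective soil water capacity.

        import numpy as np

        Z = 80.0                                          # mm, assumed soil capacity
        sm = np.array([0.30, 0.31, 0.45, 0.44, 0.50])     # relative saturation, daily

        ds = np.diff(sm)
        rain = Z * np.clip(ds, 0.0, None)   # decreases are attributed to losses
        print(rain)                         # mm/day between consecutive overpasses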

  6. Estimating flow rates to optimize winter habitat for centrarchid fish in Mississippi River (USA) backwaters

    USGS Publications Warehouse

    Johnson, Barry L.; Knights, Brent C.; Barko, John W.; Gaugush, Robert F.; Soballe, David M.; James, William F.

    1998-01-01

    The backwaters of large rivers provide winter refuge for many riverine fish, but they often exhibit low dissolved oxygen levels due to high biological oxygen demand and low flows. Introducing water from the main channel can increase oxygen levels in backwaters, but can also increase current velocity and reduce temperature during winter, which may reduce habitat suitability for fish. In 1993, culverts were installed to introduce flow to the Finger Lakes, a system of six backwater lakes on the Mississippi River, about 160 km downstream from Minneapolis, Minnesota. The goal was to improve habitat for bluegills and black crappies during winter by providing dissolved oxygen concentrations >3 mg/L, current velocities <1 cm/s, and temperatures >1°C. To achieve these conditions, we used data on lake volume and oxygen demand to estimate the minimum flow required to maintain 3 mg/L of dissolved oxygen in each lake. Estimated flows ranged from 0.02 to 0.14 m3/s among lakes. Data gathered in winter 1994, after the culverts were opened, indicated that the estimated flows met habitat goals, but that thermal stratification and lake morphometry can reduce the volume of optimal habitat created.
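
    The flow sizing described reduces to a steady-state oxygen budget; a back-of-envelope sketch with invented numbers, in which the oxygen carried in by the introduced flow balances the lake's volumetric oxygen demand:

        # Q * (C_in - C_goal) = V * k_bod at steady state (all values assumed).
        V = 2.0e5                   # lake volume, m^3
        k_bod = 0.05                # oxygen demand, g O2 per m^3 per day
        c_in, c_goal = 12.0, 3.0    # inflow and target DO, g/m^3 (i.e., mg/L)

        q = V * k_bod / (c_in - c_goal) / 86400.0   # convert per day -> m^3/s
        print(f"minimum flow ~ {q:.3f} m^3/s")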

  7. Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference

    PubMed Central

    Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.

    2016-01-01

    Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
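
    The preferential-sampling model treats sampling times as an inhomogeneous Poisson process with intensity tied to effective population size; a toy simulation of such times by thinning, with invented seasonal dynamics (the paper embeds this process in the coalescent likelihood):

        import numpy as np

        rng = np.random.default_rng(1)

        def ne(t):
            return 50.0 + 40.0 * np.sin(2 * np.pi * t)   # seasonal Ne, t in years

        beta = 0.5                       # sampling intensity per unit of Ne
        t_max, lam_max = 3.0, 0.5 * 90.0  # lam_max bounds beta * ne(t)

        t, times = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)          # candidate event time
            if t > t_max:
                break
            if rng.random() < beta * ne(t) / lam_max:    # thinning acceptance
                times.append(t)
        print(f"{len(times)} sampling times; they cluster near seasonal Ne peaks")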

  8. Improving satellite-based post-fire evapotranspiration estimates in semi-arid regions

    NASA Astrophysics Data System (ADS)

    Poon, P.; Kinoshita, A. M.

    2017-12-01

    Climate change and anthropogenic factors contribute to the increased frequency, duration, and size of wildfires, which can alter ecosystem and hydrological processes. The loss of vegetation canopy and ground cover reduces interception and alters evapotranspiration (ET) dynamics in riparian areas, which can impact rainfall-runoff partitioning. Previous research evaluated the spatial and temporal trends of ET based on burn severity and observed an annual decrease of 120 mm on average for three years after fire. Building upon these results, this research focuses on the Coyote Fire in San Diego, California (USA), which burned a total of 76 km2 in 2003, to calibrate and improve satellite-based ET estimates in semi-arid regions affected by wildfire. The current work utilizes satellite-based products and techniques such as the Google Earth Engine Application Programming Interface (API). Various ET models (e.g., the Operational Simplified Surface Energy Balance model (SSEBop)) are compared to the latent heat flux from two AmeriFlux eddy covariance towers, Sky Oaks Young (US-SO3) and Old Stand (US-SO2), from 2000-2015. The Old Stand tower has a low burn severity and the Young Stand tower has a moderate to high burn severity. Both towers are used to validate spatial ET estimates. Furthermore, variables and indices, such as the Enhanced Vegetation Index (EVI), Normalized Difference Moisture Index (NDMI), and the Normalized Burn Ratio (NBR), are utilized to evaluate satellite-based ET through a multivariate statistical analysis at both sites. This point-scale study will help improve ET estimates in spatially diverse regions. Results from this research will contribute to the development of a post-wildfire ET model for semi-arid regions. Accurate estimates of post-fire ET will provide a better representation of vegetation and hydrologic recovery, which can be used to improve hydrologic models and predictions.

  9. Snowpack Estimates Improve Water Resources Climate-Change Adaptation Strategies

    NASA Astrophysics Data System (ADS)

    Lestak, L.; Molotch, N. P.; Guan, B.; Granger, S. L.; Nemeth, S.; Rizzardo, D.; Gehrke, F.; Franz, K. J.; Karsten, L. R.; Margulis, S. A.; Case, K.; Anderson, M.; Painter, T. H.; Dozier, J.

    2010-12-01

    Observed climate trends over the past 50 years indicate a reduction in snowpack water storage across the Western U.S. As the primary water source for the region, the loss in snowpack water storage presents significant challenges for managing water deliveries to meet agricultural, municipal, and hydropower demands. Improved snowpack information via remote sensing shows promise for improving seasonal water supply forecasts and for informing decadal scale infrastructure planning. An ongoing project in the California Sierra Nevada and examples from the Rocky Mountains indicate the tractability of estimating snowpack water storage on daily time steps using a distributed snowpack reconstruction model. Fractional snow covered area (FSCA) derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data were used with modeled snowmelt from the snowpack model to estimate snow water equivalent (SWE) in the Sierra Nevada (64,515 km2). Spatially distributed daily SWE estimates were calculated for 10 years, 2000-2009, with detailed analysis for two anomalous years: 2006, a wet year, and 2009, an over-forecasted year. Sierra-wide mean SWE was 0.8 cm for 01 April 2006 versus 0.4 cm for 01 April 2009, comparing favorably with known outflow. Modeled SWE was compared to in-situ (observed) SWE for 01 April 2006 for the Feather (northern Sierra, lower-elevation) and Merced (central Sierra, higher-elevation) basins, with mean modeled SWE 80% of observed SWE. Integration of spatial SWE estimates into forecasting operations will allow for better visualization and analysis of high-altitude late-season snow missed by in-situ snow sensors and inter-annual anomalies associated with extreme precipitation events/atmospheric rivers. Collaborations with state and local entities establish protocols on how to meet current and future information needs and improve climate-change adaptation strategies.
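
    The reconstruction idea for one pixel, sketched with synthetic numbers: peak SWE is rebuilt retrospectively by summing modelled daily snowmelt weighted by the MODIS fractional snow-covered area from the date of interest to melt-out. The melt and FSCA series below are invented.

        import numpy as np

        melt = np.array([5.0, 8.0, 12.0, 15.0, 10.0, 6.0])   # modelled melt, mm/day
        fsca = np.array([1.0, 1.0, 0.9, 0.6, 0.3, 0.1])      # MODIS snow fraction

        swe_peak = np.sum(melt * fsca)   # mm of water equivalent at the start date
        print(f"reconstructed peak SWE ~ {swe_peak:.0f} mm")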

  10. Survival Gains from First‐Line Systemic Therapy in Metastatic Non‐Small Cell Lung Cancer in the U.S., 1990–2015: Progress and Opportunities

    PubMed Central

    Goulart, Bernardo H. L.; Ravelo, Arliene; Kolkey, Holli; Ramsey, Scott D.

    2017-01-01

    Abstract Background. Approximately 190,000 Americans are diagnosed with non‐small cell lung cancer (NSCLC) annually, and about half have metastatic (Stage IV) disease. These patients have historically had poor survival prognosis, but several new therapies introduced since 2000 provide options for improved outcomes. The objectives of this study were to quantify survival gains from 1990, when best supportive care (BSC) only was standard, to 2015 and to estimate the impact of expanded use of systemic therapies in clinically appropriate patients. Materials and Methods. We developed a simulation model to estimate survival gains for patients with metastatic NSCLC from 1990–2015. Survival estimates were derived from major clinical trials and extrapolated to a lifetime horizon. Proportions of patients receiving available therapies were derived from the Surveillance, Epidemiology, and End Results database and a commercial treatment registry. We also estimated gains in overall survival (OS) in scenarios in which systemic therapy use increased by 10% and 30% relative to current use. Results. From 1990–2015, one‐year survival proportion increased by 14.1% and mean per‐patient survival improved by 4.2 months (32,700 population life years). Increasing treated patients by 10% or 30% increased OS by 5.1 months (39,700 population life years) and 6.9 months (53,800 population life years), respectively. Conclusion. Although survival remains poor in metastatic NSCLC relative to other common cancers, meaningful progress in per‐patient and population‐level outcomes has been realized over the past 25 years. These advances can be improved even further by increasing use of systemic therapies in the substantial proportion of patients who are suitable for treatment yet who currently receive BSC only. Implications for Practice. Approximately 93,500 Americans are diagnosed with metastatic non‐small cell lung cancer (NSCLC) annually. Historically, these patients have had poor survival prognosis, but newer therapies provide options for improved outcomes. This simulation modeling study quantified metastatic NSCLC survival gains from 1990–2015. Over this period, the one‐year survival proportion and mean per‐patient survival increased by 14.1% and 4.2 months, respectively. Though metastatic NSCLC survival remains poor, the past 25 years have brought meaningful gains. Additional gains could be realized by increasing systemic therapy use in the substantial proportion of patients who are suitable for treatment, yet currently receive only supportive care. PMID:28242792

  11. The BlueSky Smoke Modeling Framework: Recent Developments

    NASA Astrophysics Data System (ADS)

    Sullivan, D. C.; Larkin, N.; Raffuse, S. M.; Strand, T.; ONeill, S. M.; Leung, F. T.; Qu, J. J.; Hao, X.

    2012-12-01

    BlueSky systems—a set of decision support tools including SmartFire and the BlueSky Framework—aid public policy decision makers and scientific researchers in evaluating the air quality impacts of fires. Smoke and fire managers use BlueSky systems in decisions about prescribed burns and wildland firefighting. Air quality agencies use BlueSky systems to support decisions related to air quality regulations. We will discuss a range of recent improvements to the BlueSky systems, as well as examples of applications and future plans. BlueSky systems have the flexibility to accept basic fire information from virtually any source and can reconcile multiple information sources so that duplication of fire records is eliminated. BlueSky systems currently apply information from (1) the National Oceanic and Atmospheric Administration's (NOAA) Hazard Mapping System (HMS), which represents remotely sensed data from the Moderate Resolution Imaging Spectroradiometer (MODIS), Advanced Very High Resolution Radiometer (AVHRR), and Geostationary Operational Environmental Satellites (GOES); (2) the Monitoring Trends in Burn Severity (MTBS) interagency project, which derives fire perimeters from Landsat 30-meter burn scars; (3) the Geospatial Multi-Agency Coordination Group (GeoMAC), which produces helicopter-flown burn perimeters; and (4) ground-based fire reports, such as the ICS-209 reports managed by the National Wildfire Coordinating Group. Efforts are currently underway to streamline the use of additional ground-based systems, such as states' prescribed burn databases. BlueSky systems were recently modified to address known uncertainties in smoke modeling associated with (1) estimates of biomass consumption derived from sparse fuel moisture data, and (2) models of plume injection heights. Additional sources of remotely sensed data are being applied to address these issues as follows: - The National Aeronautics and Space Administration's (NASA) Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis Real-Time (TMPA-RT) data set is being used to improve dead fuel moisture estimates. - EastFire live fuel moisture estimates, which are derived from NASA's MODIS direct broadcast, are being used to improve live fuel moisture estimates. - NASA's Multi-angle Imaging Spectroradiometer (MISR) stereo heights are being used to improve estimates of plume injection heights. Further, the Fire Location and Modeling of Burning Emissions (FLAMBÉ) model was incorporated into the BlueSky Framework as an alternative means of calculating fire emissions. FLAMBÉ directly estimates emissions on the basis of fire detections and radiance measures from NASA's MODIS and NOAA's GOES satellites. (The authors gratefully acknowledge NASA's Applied Sciences Program [Grant Nos. NN506AB52A and NNX09AV76G], the USDA Forest Service, and the Joint Fire Science Program for their support.)

  12. Estimating Energy Consumption of Mobile Fluid Power in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Lauren; Zigler, Bradley T.

    This report estimates the market size and energy consumption of mobile off-road applications utilizing hydraulic fluid power, and summarizes technology gaps and implementation barriers. Mobile fluid power is the use of hydraulic fluids under pressure to transmit power in mobile equipment applications. The mobile off-road fluid power sector includes various uses of hydraulic fluid power equipment with fundamentally diverse end-use application and operational requirements, such as a skid steer loader, a wheel loader or an agriculture tractor. The agriculture and construction segments dominate the mobile off-road fluid power market in component unit sales volume. An estimated range of energy consumed by the mobile off-road fluid power sector is 0.36 - 1.8 quads per year, which was 1.3 percent - 6.5 percent of the total energy consumed in 2016 by the transportation sector. Opportunities for efficiency improvements within the fluid power system result from needs to level and reduce the peak system load requirements and develop new technologies to reduce fluid power system level losses, both of which may be facilitated by characterizing duty cycles to define standardized performance test methods. There are currently no commonly accepted standardized test methods for evaluating equipment level efficiency over a duty cycle. The off-road transportation sector currently meets criteria emissions requirements, and there are no efficiency regulations requiring original equipment manufacturers (OEM) to invest in new architecture development to improve the fuel economy of mobile off-road fluid power systems. In addition, the end-user efficiency interests are outweighed by low equipment purchase or lease price concerns, required payback periods, and reliability and durability requirements of new architecture. Current economics, low market volumes with high product diversity, and regulation compliance challenge OEM investment in commercialization of new architecture development.

  13. A new DMSP magnetometer and auroral boundary data set and estimates of field-aligned currents in dynamic auroral boundary coordinates

    NASA Astrophysics Data System (ADS)

    Kilcommons, Liam M.; Redmon, Robert J.; Knipp, Delores J.

    2017-08-01

    We have developed a method for reprocessing the multidecadal, multispacecraft Defense Meteorological Satellite Program Special Sensor Magnetometer (DMSP SSM) data set and have applied it to 15 spacecraft years of data (DMSP Flight 16-18, 2010-2014). This Level-2 data set improves on other available SSM data sets with recalculated spacecraft locations and magnetic perturbations, artifact signal removal, representations of the observations in geomagnetic coordinates, and in situ auroral boundaries. Spacecraft locations have been recalculated using ground-tracking information. Magnetic perturbations (measured field minus modeled main field) are recomputed. The updated locations ensure the appropriate model field is used. We characterize and remove a slow-varying signal in the magnetic field measurements. This signal is a combination of ring current and measurement artifacts. A final artifact remains after processing: step discontinuities in the baseline caused by activation/deactivation of spacecraft electronics. Using coincident data from the DMSP precipitating electrons and ions instrument (SSJ4/5), we detect the in situ auroral boundaries with an improvement to the Redmon et al. (2010) algorithm. We embed the location of the aurora and an accompanying figure of merit in the Level-2 SSM data product. Finally, we demonstrate the potential of this new data set by estimating field-aligned current (FAC) density using the Minimum Variance Analysis technique. The FAC estimates are then expressed in dynamic auroral boundary coordinates using the SSJ-derived boundaries, demonstrating a dawn-dusk asymmetry in average FAC location relative to the equatorward edge of the aurora. The new SSM data set is now available in several public repositories.
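
    A sheet-current sketch of the final step with synthetic data: the paper uses Minimum Variance Analysis to rotate the perturbations into the current-sheet frame first, whereas here the relevant along-sheet perturbation component is simply assumed known, and the infinite-sheet form of Ampère's law gives the field-aligned current density from its along-track gradient. All values below are illustrative.

        import numpy as np

        MU0 = 4e-7 * np.pi
        v_sc = 7500.0                            # spacecraft speed, m/s (assumed)
        t = np.arange(0.0, 60.0, 1.0)            # 1 Hz samples, s
        db = 400e-9 * np.tanh((t - 30.0) / 5.0)  # synthetic perturbation field, T

        s = v_sc * t                             # along-track distance, m
        j_par = np.gradient(db, s) / MU0         # A/m^2, infinite-sheet Ampere's law
        print(f"peak FAC ~ {1e6 * j_par.max():.1f} uA/m^2")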

  14. Estimated infiltration, percolation, and recharge rates at the Rillito Creek focused recharge investigation site, Pima County, Arizona: Chapter H in Ground-water recharge in the arid and semiarid southwestern United States (Professional Paper 1703)

    USGS Publications Warehouse

    Hoffmann, John P.; Blasch, Kyle W.; Pool, Don R.; Bailey, Matthew A.; Callegary, James B.; Stonestrom, David A.; Constantz, Jim; Ferré, Ty P.A.; Leake, Stanley A.

    2007-01-01

    A large fraction of ground water stored in the alluvial aquifers in the Southwest is recharged by water that percolates through ephemeral stream-channel deposits. The amount of water currently recharging many of these aquifers is insufficient to meet current and future demands. Improving the understanding of streambed infiltration and the subsequent redistribution of water within the unsaturated zone is fundamental to quantifying and forming an accurate description of streambed recharge. In addition, improved estimates of recharge from ephemeral-stream channels will reduce uncertainties in water-budget components used in current ground-water models. This chapter presents a summary of findings related to a focused recharge investigation along Rillito Creek in Tucson, Arizona. A variety of approaches used to estimate infiltration, percolation, and recharge fluxes are presented that provide a wide range of temporal- and spatial-scale measurements of recharge beneath Rillito Creek. The approaches discussed include analyses of (1) cores and cuttings for hydraulic and textural properties, (2) environmental tracers from the water extracted from the cores and cuttings, (3) seepage measurements made during sustained streamflow, (4) heat as a tracer and numerical simulations of the movement of heat through the streambed sediments, (5) water-content variations, (6) water-level responses to streamflow in piezometers within the stream channel, and (7) gravity changes in response to recharge events. Hydraulic properties of the materials underlying Rillito Creek were used to estimate long-term potential recharge rates. Seepage measurements and analyses of temperature and water content were used to estimate infiltration rates, and environmental tracers were used to estimate percolation rates through the thick unsaturated zone. The presence or lack of tritium in the water was used to determine whether or not water in the unsaturated zone infiltrated within the past 40 years. Analyses of water-level and temporal-gravity data were used to estimate recharge volumes. Data presented in this chapter were collected from 1999 through 2002. Precipitation and streamflow during this period were less than the long-term average; however, two periods of significant streamflow resulted in recharge—one in the summer of 1999 and the other in the fall/winter of 2000. Flux estimates of infiltration and recharge vary from less than 0.1 to 1.0 cubic meter per second per kilometer of streamflow. Recharge-flux estimates are larger than infiltration estimates. Larger recharge fluxes than infiltration fluxes are explained by the scale of measurements. Methods used to estimate recharge rates incorporate the largest volumetric and temporal scales and are likely to have fluxes from other nearby sources, such as unmeasured tributaries, whereas the methods used to estimate infiltration incorporate the smallest scales, reflecting infiltration rates at individual measurement sites.

  15. Solutions for the diurnally forced advection-diffusion equation to estimate bulk fluid velocity and diffusivity in streambeds from temperature time series

    NASA Astrophysics Data System (ADS)

    Luce, C.; Tonina, D.; Gariglio, F. P.; Applebee, R.

    2012-12-01

    Differences in the diurnal variations of temperature at different depths in streambed sediments are commonly used for estimating vertical fluxes of water in the streambed. We applied spatial and temporal rescaling of the advection-diffusion equation to derive two new relationships that greatly extend the kinds of information that can be derived from streambed temperature measurements. The first equation provides a direct estimate of the Peclet number from the amplitude decay and phase delay information. The analytical equation is explicit (i.e., no numerical root-finding is necessary) and invertible. The thermal front velocity can be estimated from the Peclet number when the thermal diffusivity is known. The second equation allows for an independent estimate of the thermal diffusivity directly from the amplitude decay and phase delay information. Several improvements are available with the new information. The first equation uses a ratio of the amplitude decay and phase delay information; thus Peclet number calculations are independent of depth. The explicit form also makes it somewhat faster and easier to calculate estimates from a large number of sensors or multiple positions along one sensor. Where current practice requires a priori estimation of streambed thermal diffusivity, the new approach allows an independent calculation, improving the precision of estimates. Furthermore, when many measurements are made over space and time, expectations of the spatial correlation and temporal invariance of thermal diffusivity are valuable for validation of measurements. Finally, the closed-form explicit solution allows for direct calculation of the propagation of uncertainties in error measurements and parameter estimates, providing insight about error expectations for sensors placed at different depths in different environments as a function of surface temperature variation amplitudes. The improvements are expected to increase the utility of temperature measurement methods for studying groundwater-surface water interactions across space and time scales. We discuss the theoretical implications of the new solutions, supported by examples with data for illustration and validation.
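
    Both new relationships start from the diurnal amplitude and phase at two depths; a generic sketch of that preprocessing step (the explicit solutions themselves are in the paper), projecting each series onto the one-cycle-per-day Fourier component. The synthetic series below assume hourly sampling with the deeper signal damped and delayed.

        import numpy as np

        def diel_amp_phase(temp, fs_per_day=24.0):
            # Project the series onto the 1 cycle/day Fourier component.
            t = np.arange(temp.size) / fs_per_day      # time in days
            c = 2.0 * np.mean(temp * np.exp(-2j * np.pi * t))
            return np.abs(c), np.angle(c)

        t_days = np.arange(0.0, 10.0, 1.0 / 24.0)
        shallow = 15.0 + 5.0 * np.cos(2 * np.pi * t_days)
        deep = 15.0 + 2.0 * np.cos(2 * np.pi * (t_days - 0.15))

        a1, p1 = diel_amp_phase(shallow)
        a2, p2 = diel_amp_phase(deep)
        print(f"amplitude ratio = {a2 / a1:.2f}, phase delay = {p1 - p2:.2f} rad")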

  16. New Method for Estimating Landslide Losses for Major Winter Storms in California.

    NASA Astrophysics Data System (ADS)

    Wills, C. J.; Perez, F. G.; Branum, D.

    2014-12-01

    We have developed a prototype system for estimating the economic costs of landslides due to winter storms in California. This system uses some of the basic concepts and estimates of the value of structures from the HAZUS program developed for FEMA. Using the only relatively complete landslide loss data set that we could obtain, data gathered by the City of Los Angeles in 1978, we have developed relations between landslide susceptibility and loss ratio for private property (represented as the value of wood frame structures from HAZUS). The landslide loss ratios estimated from the Los Angeles data are calibrated using more generalized data from the 1982 storms in the San Francisco Bay area to develop relationships that can be used to estimate loss for any value of 2-day or 30-day rainfall averaged over a county. The current estimates for major storms are long extrapolations from very small data sets, subject to very large uncertainties, and so provide only a rough estimate of the landslide damage to structures and infrastructure on hill slopes. More importantly, the system can be extended and improved with additional data and used to project landslide losses in future major winter storms. The key features of this system—the landslide susceptibility map, the relationship between susceptibility and loss ratio, and the calibration of estimates against losses in past storms—can all be improved with additional data. Most importantly, this study highlights the importance of comprehensive studies of landslide damage. Detailed surveys of landslide damage following future storms that include locations and amounts of damage for all landslides within an area are critical for building a well-calibrated system to project future landslide losses. Without an investment in post-storm landslide damage surveys, it will not be possible to improve estimates of the magnitude or distribution of landslide damage, which can range up to billions of dollars.

  17. Effect of Al-trace dimension on Joule heating and current crowding in flip-chip solder joints under accelerated electromigration

    NASA Astrophysics Data System (ADS)

    Liang, S. W.; Chang, Y. W.; Chen, Chih

    2006-04-01

    Three-dimensional thermoelectrical simulation was conducted to investigate the influence of Al-trace dimension on Joule heating and current crowding in flip-chip solder joints. It is found that the dimension of the Al trace significantly affects the Joule heating, and thus directly determines the mean time to failure (MTTF). In simulations at a stressing current of 0.6 A at 70°C, the estimated MTTF of joints with Al traces 100 μm wide was 6.1 times longer than that of joints with Al traces 34 μm wide. A weaker current crowding effect and a reduced hot-spot temperature are responsible for the improved MTTF.

  18. Side-information-dependent correlation channel estimation in hash-based distributed video coding.

    PubMed

    Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter

    2012-04-01

    In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
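
    A minimal sketch of the SID idea (not the paper's refinement algorithm): DVC correlation noise is commonly modeled as Laplacian, and a side-information-dependent model lets the scale vary with the SI value. Here the scale is estimated per SI bin from already decoded residuals; all names and the binning scheme are our own.

    ```python
    import numpy as np

    def sid_laplacian_scales(si, residual, n_bins=16):
        """Estimate a Laplacian scale b(y) conditioned on the side information y.

        si       : side-information samples (e.g., pixel intensities), 1-D array
        residual : x - y samples available from already decoded data
        Returns quantile bin edges and the per-bin ML estimate b = E|residual|.
        """
        edges = np.quantile(si, np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.searchsorted(edges, si, side="right") - 1, 0, n_bins - 1)
        scales = np.array([np.abs(residual[idx == k]).mean() if np.any(idx == k)
                           else np.nan for k in range(n_bins)])
        return edges, scales
    ```

    Re-running this estimator after each decoded bit-plane is one way the scales could be refined successively, in the spirit of the abstract.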

  19. Experiments for improved positioning by means of integrated Doppler satellite observations and the NNSS broadcast ephemeris

    NASA Technical Reports Server (NTRS)

    Arur, M. G.

    1977-01-01

    An effort to improve station position recovery using the broadcast ephemeris in Doppler data reduction was studied. A comparison of precise and broadcast ephemerides, treating the former as the standard, yielded information about the state disturbance associated with the broadcast ephemeris. This statistical information about the state disturbance was used with current observational data for improved position recovery. The rank deficiency problem encountered in the short-arc geodetic adjustment procedure was analysed, and it was deduced that the fundamental rank deficiency is six, scale information being derivable from the wavelength of transmission. Coordinate differences between stations co-observing a pass are estimable. The uncertainty of the broadcast ephemeris, now in the WGS72 system, was assessed. It was conservatively estimated that its positional uncertainty may vary between 19 to 26 m in-track, 15 to 20 m cross-track, and 9 to 10 m in the radial direction, depending on where the epoch of observations falls in the interinjection period.

  20. Laser, light, and energy devices for cellulite and lipodystrophy.

    PubMed

    Peterson, Jennifer D; Goldman, Mitchel P

    2011-07-01

    Cellulite affects all races, and it is estimated that 85% of women older than 20 years have some degree of cellulite. Many currently accepted cellulite therapies target deficiencies in lymphatic drainage and microvascular circulation. Devices using radiofrequency, laser, and light-based energies, alone or in combination and coupled frequently with tissue manipulation, are available for improving cellulite. Laser assisted liposuction may improve cellulite appearance. Although improvement using these devices is temporary, it may last several months. Patients who want smoother skin with less visible cellulite can undergo a series of treatments and then return for additional treatments as necessary.

  1. A novel shape from focus method based on 3D steerable filters for improved performance on treating textureless region

    NASA Astrophysics Data System (ADS)

    Fan, Tiantian; Yu, Hongbin

    2018-03-01

    A novel shape-from-focus method combining 3D steerable filters for improved performance on textureless regions was proposed in this paper. Unlike conventional spatial methods, which search for the maximum edge response to estimate the depth map, the proposed method took both the edge response and the degree of axial imaging blur into consideration. As a result, more robust and accurate identification of the focused location can be achieved, especially when treating textureless objects. Improved performance in depth measurement has been successfully demonstrated in both simulation and experimental results.
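
    For orientation, here is the generic shape-from-focus skeleton the method builds on: compute a per-pixel focus measure on every slice of the focal stack and take the argmax along the focus axis. The paper's contribution replaces the focus measure with 3D steerable filters that also account for axial blur; the sum-modified-Laplacian below is only a conventional stand-in.

    ```python
    import numpy as np

    def depth_from_focus(stack):
        """Generic shape from focus: depth index = argmax of a focus measure.

        stack : float array of shape (n_focus, H, W), one image per focus setting
        Uses a sum-modified-Laplacian focus measure (np.roll wraps at borders,
        which is acceptable for a sketch).
        """
        fm = np.empty_like(stack)
        for k, img in enumerate(stack):
            lap_x = np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1))
            lap_y = np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0))
            fm[k] = lap_x + lap_y
        return fm.argmax(axis=0)  # index of the best-focused slice per pixel
    ```

    Textureless regions are exactly where this baseline fails (the focus measure is flat along the stack), which motivates adding an axial-blur cue as the paper does.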

  2. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive errors whose magnitudes prevent further improvement of GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random-walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. The observational data are edited in the preprocessor and passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor, and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files, along with a control statement file and a satellite identification and mass file, are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
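
    A minimal sketch, with our own variable names, of the two stochastic models the abstract names; the surrounding sequential filter/smoother and the GEODYNII interfacing are omitted.

    ```python
    import numpy as np

    def fogm_step(x, P, dt, tau, sigma2):
        """One prediction step for a first-order Gauss-Markov parameter
        (e.g., a solar-radiation-pressure scale coefficient or y-bias term).

        dx/dt = -x/tau + w, with the discrete process noise chosen so the
        steady-state variance equals sigma2.
        """
        phi = np.exp(-dt / tau)          # state transition over dt
        q = sigma2 * (1.0 - phi**2)      # discrete process-noise variance
        return phi * x, phi**2 * P + q

    def random_walk_step(x, P, dt, q_rate):
        """Random-walk model (e.g., the tropospheric refraction correction):
        the state is unchanged while its variance grows linearly in time."""
        return x, P + q_rate * dt
    ```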

  3. An audit strategy for progression-free survival

    PubMed Central

    Dodd, Lori E.; Korn, Edward L.; Freidlin, Boris; Gray, Robert; Bhattacharya, Suman

    2010-01-01

    Summary In randomized clinical trials, the use of potentially subjective endpoints has led to frequent use of blinded independent central review (BICR) and event adjudication committees to reduce possible bias in treatment effect estimators based on local evaluations (LE). In oncology trials, progression-free survival (PFS) is one such endpoint. PFS requires image interpretation to determine whether a patient’s cancer has progressed, and BICR has been advocated to reduce the potential for endpoints to be biased by knowledge of treatment assignment. There is current debate, however, about the value of such reviews with time-to-event outcomes like PFS. We propose a BICR audit strategy as an alternative to a complete-case BICR to provide assurance of the presence of a treatment effect. We develop an auxiliary-variable estimator of the log-hazard ratio that is more efficient than simply using the audited (i.e., sampled) BICR data for estimation. Our estimator incorporates information from the LE on all the cases and the audited BICR cases, and is an asymptotically unbiased estimator of the log-hazard ratio from BICR. The estimator offers considerable efficiency gains that improve as the correlation between LE and BICR increases. A two-stage auditing strategy is also proposed and evaluated through simulation studies. The method is applied retrospectively to a large oncology trial that had a complete-case BICR, showing the potential for efficiency improvements. PMID:21210772
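
    The paper's estimator is not reproduced here; a generic two-phase "difference" estimator conveys the underlying idea of correcting the audited-BICR estimate with full-sample LE information, with efficiency improving as LE and BICR agree.

    ```python
    def difference_estimator(theta_le_all, theta_le_audit, theta_bicr_audit):
        """Two-phase 'difference' estimator of the BICR log-hazard ratio:

            theta_hat = theta_bicr_audit + (theta_le_all - theta_le_audit)

        theta_le_all    : log-HR from local evaluations on ALL cases
        theta_le_audit  : log-HR from local evaluations on the audited subset
        theta_bicr_audit: log-HR from central review on the audited subset

        Under random audit sampling the correction term has mean ~0, so the
        estimator targets the BICR log-HR; its variance shrinks as the LE and
        BICR estimates become more correlated, which is the source of the
        efficiency gain the abstract describes.
        """
        return theta_bicr_audit + (theta_le_all - theta_le_audit)

    print(difference_estimator(-0.35, -0.30, -0.28))  # illustrative log-HRs
    ```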

  4. Evaluation of a method using survey counts and tag data to estimate the number of Pacific walruses (Odobenus rosmarus divergens) using a coastal haulout in northwestern Alaska

    USGS Publications Warehouse

    Battaile, Brian; Jay, Chadwick V.; Udevitz, Mark S.; Fischbach, Anthony S.

    2017-01-01

    Increased periods of sparse sea ice over the continental shelf of the Chukchi Sea in late summer have reduced offshore haulout habitat for Pacific walruses (Odobenus rosmarus divergens) and increased opportunities for human activities in the region. Knowing how many walruses could be affected by human activities would be useful to conservation decisions. Currently, there are no adequate estimates of walrus abundance in the northeastern Chukchi Sea during summer–early autumn. Estimating abundance in autumn might be possible from coastal surveys of hauled out walruses during periods when offshore sea ice is unavailable to walruses. We evaluated methods to estimate the size of the walrus population that was using a haulout on the coast of northwestern Alaska in autumn by using aerial photography to count the number of hauled out walruses (herd size) and data from 37 tagged walruses to estimate availability (proportion of population hauled out). We used two methods to estimate availability, direct proportions of hauled out tagged walruses and smoothed proportions using local polynomial regression. Point estimates of herd size (4200–38,000 walruses) and total population size (76,000–287,000 walruses) ranged widely among days and between the two methods of estimating availability. Estimates of population size were influenced most by variation in estimates of availability. Coastal surveys might be improved most by counting walruses when the greatest numbers are hauled out, thereby reducing the influence of availability on population size estimates. The chance of collecting data during peak haulout periods would be improved by conducting multiple surveys.
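
    The core calculation is a ratio estimate: divide the aerial count by the estimated availability. A minimal sketch with a delta-method standard error, treating the count as exact (an assumption for illustration; the counts and tag numbers below are likewise illustrative):

    ```python
    import numpy as np

    def population_estimate(herd_count, tags_hauled_out, tags_total):
        """Haulout-availability correction: N_hat = count / p_hat, where p_hat
        is the proportion of tagged walruses hauled out during the survey."""
        p = tags_hauled_out / tags_total
        n_hat = herd_count / p
        var_p = p * (1 - p) / tags_total           # binomial availability error
        se = herd_count * np.sqrt(var_p) / p**2    # delta method: |dN/dp| * se(p)
        return n_hat, se

    print(population_estimate(20000, 15, 37))
    ```

    Because N_hat scales as 1/p, small availability values inflate both the estimate and its uncertainty, which is why surveying at peak haulout (large p) stabilizes the result.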

  5. Filling the Gaps: The Synergistic Application of Satellite Data for the Volcanic Ash Threat to Aviation

    NASA Technical Reports Server (NTRS)

    Murray, John; Vernier, Jean-Paul; Fairlie, T. Duncan; Pavolonis, Michael; Krotkov, Nickolay A.; Lindsay, Francis; Haynes, John

    2013-01-01

    Although significant progress has been made in recent years, estimating volcanic ash concentration for the full extent of the airspace affected by volcanic ash remains a challenge. No single satellite, airborne, or ground observing system currently exists that can sufficiently inform dispersion models to provide the degree of accuracy required to use them with high confidence for routing aircraft in and near volcanic ash. Toward this end, the detection and characterization of volcanic ash in the atmosphere may be substantially improved by integrating a wider array of observing systems with advances in trajectory and dispersion modeling. The qualitative aspect of this effort has advanced significantly in the past decade owing to the increase in highly complementary observational and model data currently available. Satellite observations, especially when coupled with trajectory and dispersion models, can provide a very accurate picture of the 3-dimensional location of ash clouds. Accurate estimation of the mass loading at various locations throughout the entire plume, though improving, remains elusive. This paper examines the capabilities of various satellite observation systems and postulates that model-based volcanic ash concentration maps and forecasts might be significantly improved if the various extant satellite capabilities are used together with independent, accurate mass loading data from other observing systems to calibrate (tune) ash concentration retrievals from the satellite systems.

  6. TOMS UV Algorithm: Problems and Enhancements. 2

    NASA Technical Reports Server (NTRS)

    Krotkov, Nickolay; Herman, Jay; Bhartia, P. K.; Seftor, Colin; Arola, Antti; Kaurola, Jussi; Kroskinen, Lasse; Kalliskota, S.; Taalas, Petteri; Geogdzhaev, I.

    2002-01-01

    Satellite instruments provide global maps of surface ultraviolet (UV) irradiance by combining backscattered radiance measurements with radiative transfer models. The models are limited by uncertainties in input parameters of the atmosphere and the surface. We evaluate the effects of possible enhancements of the current Total Ozone Mapping Spectrometer (TOMS) surface UV irradiance algorithm focusing on effects of diurnal variation of cloudiness and improved treatment of snow/ice. The emphasis is on comparison between the results of the current (version 1) TOMS UV algorithm and each of the changes proposed. We evaluate different approaches for improved treatment of pixel average cloud attenuation, with and without snow/ice on the ground. In addition to treating clouds based only on the measurements at the local time of the TOMS observations, the results from other satellites and weather assimilation models can be used to estimate attenuation of the incident UV irradiance throughout the day. A new method is proposed to obtain a more realistic treatment of snow covered terrain. The method is based on a statistical relation between UV reflectivity and snow depth. The new method reduced the bias between the TOMS UV estimations and ground-based UV measurements for snow periods. The improved (version 2) algorithm will be applied to re-process the existing TOMS UV data record (since 1978) and to the future satellite sensors (e.g., Quik/TOMS, GOME, OMI on EOS/Aura and Triana/EPIC).

  7. Salmon escapement estimates into the Togiak River using sonar, Togiak National Wildlife Refuge, Alaska, 1987, 1988, and 1990

    USGS Publications Warehouse

    Irving, David B.; Finn, James E.; Larson, James P.

    1995-01-01

    We began a three-year study in 1987 to test the feasibility of using sonar in the Togiak River to estimate salmon escapements. Current methods rely on periodic aerial surveys and a counting tower at river kilometer 97. Escapement estimates are not available until 10 to 14 days after the salmon enter the river. Water depth and turbidity preclude relocating the tower to the lower river and affect the reliability of aerial surveys. To determine whether an alternative method could be developed to improve the timeliness and accuracy of current escapement monitoring, Bendix sonar units were operated during 1987, 1988, and 1990. Two sonar stations were set up opposite each other at river kilometer 30 and were operated 24 hours per day, seven days per week. Catches from gill nets with 12, 14, and 20 cm stretch mesh, a beach seine, and visual observations were used to estimate species composition. Length and sex data were collected from salmon caught in the nets to assess sampling bias. In 1987, sonar was used to select optimal sites and enumerate coho salmon. In 1988 and 1990, the sites identified in 1987 were used to estimate the escapement of five salmon species. Sockeye salmon escapement was estimated at 512,581 and 589,321, chinook at 7,698 and 15,098, chum at 246,144 and 134,958, coho at 78,588 and 28,290, and pink at 96,167 and 131,484. Sonar estimates of sockeye salmon were two to three times the Alaska Department of Fish and Game's escapement estimate based on aerial surveys and tower counts. The source of error was probably a combination of over-estimating the total number of targets counted by the sonar and incorrectly estimating species composition. Total salmon escapement estimates using sonar may be feasible, but several more years of development are needed. Because of the overlapping salmon run timing, estimating species composition appears to be the most difficult aspect of using sonar for management. Possible improvements include using a larger beach seine or selecting gill net mesh sizes evenly spaced between 10 and 20 cm stretch mesh. Salmon counts at river kilometer 30 would reduce the lag time between salmon river entry and the escapement estimate to 2-5 days. Any further decrease in lag time, however, would require moving the sonar operations downriver into less desirable braided portions of the river.

  8. Improving aircraft energy efficiency

    NASA Technical Reports Server (NTRS)

    Povinelli, F. P.; Klineberg, J. M.; Kramer, J. J.

    1976-01-01

    Investigations conducted by a NASA task force concerning the development of aeronautical fuel-conservation technology are considered. The task force estimated the fuel savings potential, prospects for implementation in the civil air-transport fleet, and the impact of the technology on air-transport fuel use. Propulsion advances are related to existing engines in the fleet, to new production of current engine types, and to new engine designs. Studies aimed at the evolutionary improvement of aerodynamic design and a laminar flow control program are discussed and possibilities concerning the use of composite structural materials are examined.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. CANAVAN

    Space Based Interceptors (SBI) have ranges that are adequate to address rogue ICBMs. They are not overly sensitive to 30-60 s delay times. Current technologies would support boost phase intercept with about 150 interceptors. Higher acceleration and velocity could reduce that number by about a factor of 3, at the cost of heavier and more expensive Kinetic Kill Vehicles (KKVs). 6g SBI would reduce optimal constellation costs by about 35%; 8g SBI would reduce them another 20%. Interceptor ranges fall rapidly with theater missile range. Constellations increase significantly for ranges under 3,000 km, even with advanced interceptor technology. For distributed launches, these estimates recover earlier strategic scalings, which demonstrate the improved absentee ratio for larger or multiple launch areas. Constellations increase with the number of missiles and the number of interceptors launched at each. The economic estimates above suggest that two SBI per missile with a modest midcourse underlay is appropriate. The SBI KKV technology would appear to be common to space- and surface-based boost phase systems, and could have synergisms with improved midcourse intercept and discrimination systems. While advanced technology could be helpful in reducing costs, particularly for short-range theater missiles, current technology appears adequate for pressing rogue ICBM, accidental, and unauthorized launches.

  10. Drying of Durum Wheat Pasta and Enriched Pasta: A Review of Modeling Approaches.

    PubMed

    Mercier, Samuel; Mondor, Martin; Moresoli, Christine; Villeneuve, Sébastien; Marcos, Bernard

    2016-05-18

    Models on drying of durum wheat pasta and enriched pasta were reviewed to identify avenues for improvement according to consumer needs, product formulation and processing conditions. This review first summarized the fundamental phenomena of pasta drying: mass transfer, heat transfer, momentum, chemical changes, shrinkage and crack formation. The basic equations of the current models were then presented, along with methods for the estimation of pasta transport and thermodynamic properties. The experimental validation of these models was also presented and highlighted the need for further model validation for drying at high temperatures (>100°C) and for more accurate estimation of the pasta diffusion and mass transfer coefficients. This review indicates the need for the development of mechanistic models to improve our understanding of the mass and heat transfer mechanisms involved in pasta drying, and to consider the local changes in pasta transport properties and relaxation time for more accurate description of the moisture transport near glass transition conditions. The ability of current models to describe dried pasta quality according to consumers' expectations, or to predict the impact of incorporating ingredients high in nutritional value on the drying of enriched pasta, was also discussed.

  11. Improved localisation of neoclassical tearing modes by combining multiple diagnostic estimates

    NASA Astrophysics Data System (ADS)

    Rapson, C. J.; Fischer, R.; Giannone, L.; Maraschek, M.; Reich, M.; Treutterer, W.; The ASDEX Upgrade Team

    2017-07-01

    Neoclassical tearing modes (NTMs) strongly degrade confinement in tokamaks and are a leading cause of disruptions. They can be stabilised by targeted electron cyclotron current drive (ECCD); however, the effectiveness of ECCD depends strongly on the misalignment between the ECCD deposition location and the NTM. The first step to ensure minimal misalignment is a good estimate of the NTM location. In previous NTM control experiments, three methods have been used independently to estimate the NTM location: the magnetic equilibrium, correlation between magnetic and spatially-resolved temperature fluctuations, and the amplitude response of the NTM to nearby ECCD. This submission describes an algorithm designed to fuse these three estimates into one, taking into account many of the characteristics of each diagnostic. Although the method diverges from standard data fusion methods, results from simulation and experiment confirm that the algorithm achieves its stated goal of providing an estimate that is more reliable and accurate than any of the individual estimates.
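
    The paper's fusion algorithm deliberately diverges from standard methods; as a baseline for comparison, the textbook approach would be inverse-variance weighting of the three location estimates. A sketch, with assumed coordinate and uncertainty values:

    ```python
    import numpy as np

    def fuse_estimates(locations, sigmas):
        """Fuse independent location estimates (e.g., from the equilibrium, the
        magnetics-temperature correlation, and the ECCD amplitude response) by
        inverse-variance weighting; returns the fused value and its sigma."""
        w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        fused = np.sum(w * np.asarray(locations, dtype=float)) / np.sum(w)
        return fused, np.sqrt(1.0 / np.sum(w))

    # Three normalized-radius estimates with assumed uncertainties
    print(fuse_estimates([0.55, 0.58, 0.60], [0.02, 0.01, 0.03]))
    ```

    A diagnostic-aware scheme like the paper's can additionally downweight an estimate when its known failure modes apply, which a fixed inverse-variance rule cannot.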

  12. Use of Flood Seasonality in Pooling-Group Formation and Quantile Estimation: An Application in Great Britain

    NASA Astrophysics Data System (ADS)

    Formetta, Giuseppe; Bell, Victoria; Stewart, Elizabeth

    2018-02-01

    Regional flood frequency analysis is one of the most commonly applied methods for estimating extreme flood events at ungauged sites or locations with short measurement records. It is based on: (i) the definition of a homogeneous group (pooling-group) of catchments, and on (ii) the use of the pooling-group data to estimate flood quantiles. Although many methods to define a pooling-group (pooling schemes, PS) are based on catchment physiographic similarity measures, in the last decade methods based on flood seasonality similarity have been contemplated. In this paper, two seasonality-based PS are proposed and tested both in terms of the homogeneity of the pooling-groups they generate and in terms of the accuracy in estimating extreme flood events. The method has been applied in 420 catchments in Great Britain (considered as both gauged and ungauged) and compared against the current Flood Estimation Handbook (FEH) PS. Results for gauged sites show that, compared to the current PS, the seasonality-based PS performs better both in terms of homogeneity of the pooling-group and in terms of the accuracy of flood quantile estimates. For ungauged locations, a national-scale hydrological model has been used for the first time to quantify flood seasonality. Results show that in 75% of the tested locations the seasonality-based PS provides an improvement in the accuracy of the flood quantile estimates. The remaining 25% were located in highly urbanized, groundwater-dependent catchments. The promising results support the aspiration that large-scale hydrological models complement traditional methods for estimating design floods.
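
    A common way to quantify flood seasonality for pooling (the paper's exact scheme may differ) is to map each flood date onto the unit circle and compare catchments by the distance between their mean-date vectors:

    ```python
    import numpy as np

    def seasonality_coords(flood_doys, year_len=365.25):
        """Map flood dates (day-of-year) to the unit circle and return the
        mean-date vector (x, y); its length measures seasonal concentration."""
        theta = 2.0 * np.pi * np.asarray(flood_doys) / year_len
        return np.cos(theta).mean(), np.sin(theta).mean()

    # Winter-dominated vs autumn-dominated catchments (illustrative dates)
    a = seasonality_coords([10, 20, 350, 340, 5])
    b = seasonality_coords([280, 300, 290, 310, 275])
    print(np.hypot(a[0] - b[0], a[1] - b[1]))  # seasonality distance
    ```

    The circular mapping matters because day 360 and day 5 are near-identical flood seasons; a naive day-of-year average would place them half a year apart.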

  13. Improving Evapotranspiration Estimates Using Multi-Platform Remote Sensing

    NASA Astrophysics Data System (ADS)

    Knipper, Kyle; Hogue, Terri; Franz, Kristie; Scott, Russell

    2016-04-01

    Understanding the linkages between energy and water cycles through evapotranspiration (ET) is uniquely challenging given its dependence on a range of climatological parameters and surface/atmospheric heterogeneity. A number of methods have been developed to estimate ET either from primarily remote-sensing observations, in-situ measurements, or a combination of the two. However, the scale of many of these methods may be too large to provide needed information about the spatial and temporal variability of ET that can occur over regions with acute or chronic land cover change and precipitation driven fluxes. The current study aims to improve the spatial and temporal variability of ET utilizing only satellite-based observations by incorporating a potential evapotranspiration (PET) methodology with satellite-based down-scaled soil moisture estimates in southern Arizona, USA. Initially, soil moisture estimates from AMSR2 and SMOS are downscaled to 1 km through a triangular relationship between MODIS land surface temperature (MYD11A1), vegetation indices (MOD13Q1/MYD13Q1), and brightness temperature. Downscaled soil moisture values are then used to scale PET to actual ET (AET) at a daily, 1 km resolution. Derived AET estimates are compared to observed flux tower estimates, the North American Land Data Assimilation System (NLDAS) model output (i.e., Variable Infiltration Capacity (VIC) Macroscale Hydrologic Model, Mosaic Model, and Noah Model simulations), the Operational Simplified Surface Energy Balance Model (SSEBop), and a calibrated empirical ET model created specifically for the region. Preliminary results indicate a strong increase in correlation when applying the downscaling technique to the original AMSR2 and SMOS soil moisture values, with the added benefit of being able to decipher small-scale heterogeneity in soil moisture (riparian versus desert grassland). AET results show strong correlations with relatively low error and bias when compared to flux tower estimates. In addition, AET results show improved bias relative to SSEBop, with similar correlations and errors when compared to the empirical ET model. Spatial patterns of estimated AET display patterns representative of the basin's elevation and vegetation characteristics, with improved spatial resolution and temporal heterogeneity when compared to previous models.
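
    The PET-to-AET scaling step can be illustrated with a standard soil-moisture stress ramp; the study's exact scaling function is not given in the abstract, so the linear ramp and parameter names below are assumptions.

    ```python
    import numpy as np

    def aet_from_pet(pet, sm, sm_wilt, sm_crit):
        """Scale potential ET to actual ET with a soil-moisture stress factor:
        a linear ramp from 0 at the wilting point to 1 at a critical moisture
        level, a common (assumed) form of the PET-to-AET step."""
        beta = np.clip((sm - sm_wilt) / (sm_crit - sm_wilt), 0.0, 1.0)
        return beta * pet

    # Illustrative values: 6 mm/day PET, volumetric soil moisture of 0.18
    print(aet_from_pet(pet=6.0, sm=0.18, sm_wilt=0.10, sm_crit=0.30))
    ```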

  14. Using Smartphone Sensors for Improving Energy Expenditure Estimation

    PubMed Central

    Zhu, Jindan; Das, Aveek K.; Zeng, Yunze; Mohapatra, Prasant; Han, Jay J.

    2015-01-01

    Energy expenditure (EE) estimation is an important factor in tracking personal activity and preventing chronic diseases, such as obesity and diabetes. Accurate and real-time EE estimation utilizing small wearable sensors is a difficult task, primarily because most existing schemes work offline or use heuristics. In this paper, we focus on accurate EE estimation for tracking ambulatory activities (walking, standing, climbing upstairs, or downstairs) of a typical smartphone user. We used built-in smartphone sensors (accelerometer and barometer sensor), sampled at low frequency, to accurately estimate EE. Using a barometer sensor, in addition to an accelerometer sensor, greatly increases the accuracy of EE estimation. Using bagged regression trees, a machine learning technique, we developed a generic regression model for EE estimation that yields up to 96% correlation with actual EE. We compare our results against the state-of-the-art calorimetry equations and consumer electronics devices (Fitbit and Nike+ FuelBand). The newly developed EE estimation algorithm demonstrated superior accuracy compared with currently available methods. The results were calibrated against COSMED K4b2 calorimeter readings. PMID:27170901
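
    A minimal sketch of the modeling step with scikit-learn's bagged regression trees; the features, targets, and data below are synthetic placeholders, not the study's sensor features.

    ```python
    import numpy as np
    from sklearn.ensemble import BaggingRegressor
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-ins: X would hold per-window accelerometer/barometer
    # features (e.g., acceleration-magnitude statistics, pressure change);
    # y the reference EE from an indirect calorimeter.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))
    y = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(scale=0.3, size=500)

    # Bagged regression trees (decision trees are the default base estimator)
    model = BaggingRegressor(n_estimators=50, random_state=0)
    print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
    ```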

  15. Using Smartphone Sensors for Improving Energy Expenditure Estimation.

    PubMed

    Pande, Amit; Zhu, Jindan; Das, Aveek K; Zeng, Yunze; Mohapatra, Prasant; Han, Jay J

    2015-01-01

    Energy expenditure (EE) estimation is an important factor in tracking personal activity and preventing chronic diseases, such as obesity and diabetes. Accurate and real-time EE estimation utilizing small wearable sensors is a difficult task, primarily because most existing schemes work offline or use heuristics. In this paper, we focus on accurate EE estimation for tracking ambulatory activities (walking, standing, climbing upstairs, or downstairs) of a typical smartphone user. We used built-in smartphone sensors (accelerometer and barometer sensor), sampled at low frequency, to accurately estimate EE. Using a barometer sensor, in addition to an accelerometer sensor, greatly increases the accuracy of EE estimation. Using bagged regression trees, a machine learning technique, we developed a generic regression model for EE estimation that yields up to 96% correlation with actual EE. We compare our results against the state-of-the-art calorimetry equations and consumer electronics devices (Fitbit and Nike+ FuelBand). The newly developed EE estimation algorithm demonstrated superior accuracy compared with currently available methods. The results were calibrated against COSMED K4b2 calorimeter readings.

  16. Estimation of absolute solvent and solvation shell entropies via permutation reduction

    NASA Astrophysics Data System (ADS)

    Reinhard, Friedemann; Grubmüller, Helmut

    2007-01-01

    Despite its prominent contribution to the free energy of solvated macromolecules such as proteins or DNA, and although principally contained within molecular dynamics simulations, the entropy of the solvation shell is inaccessible to straightforward application of established entropy estimation methods. The complication is twofold. First, the configurational space density of such systems is too complex for a sufficiently accurate fit. Second, and in contrast to the internal macromolecular dynamics, the configurational space volume explored by the diffusive motion of the solvent molecules is too large to be exhaustively sampled by current simulation techniques. Here, we develop a method to overcome the second problem and to significantly alleviate the first one. We propose to exploit the permutation symmetry of the solvent by transforming the trajectory in a way that renders established estimation methods applicable, such as the quasiharmonic approximation or principal component analysis. Our permutation-reduced approach involves a combinatorial problem, which is solved through its equivalence with the linear assignment problem, for which O(N^3) methods exist. From test simulations of dense Lennard-Jones gases, enhanced convergence and improved entropy estimates are obtained. Moreover, our approach renders diffusive systems accessible to improved fit functions.
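
    The mapping to the linear assignment problem can be sketched directly with SciPy's Hungarian-type solver; here the assignment cost is the summed squared displacement between one frame's solvent positions and a reference labeling, which is one natural choice (the paper's cost function may differ).

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def permute_frame(ref, frame):
        """Relabel identical solvent molecules in one trajectory frame so the
        summed squared displacement to a reference frame is minimal, turning
        the combinatorial permutation search into a linear assignment problem
        solvable in O(N^3).

        ref, frame : (N, 3) arrays of molecule positions
        """
        cost = ((ref[:, None, :] - frame[None, :, :]) ** 2).sum(axis=-1)
        rows, cols = linear_sum_assignment(cost)  # rows come back as 0..N-1
        return frame[cols]                        # permuted copy of the frame
    ```

    Applying this to every frame collapses the N! permutation-equivalent copies of configuration space onto one, which is what makes the subsequent quasiharmonic or PCA entropy fit feasible.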

  17. NASA Space Radiation Protection Strategies: Risk Assessment and Permissible Exposure Limits

    NASA Technical Reports Server (NTRS)

    Huff, J. L.; Patel, Z. S.; Simonsen, L. C.

    2017-01-01

    Permissible exposure limits (PELs) for short-term and career astronaut exposures to space radiation have been set and approved by NASA with the goal of protecting astronauts against health risks associated with ionizing radiation exposure. Short term PELs are intended to prevent clinically significant deterministic health effects, including performance decrements, which could threaten astronaut health and jeopardize mission success. Career PELs are implemented to control late occurring health effects, including a 3% risk of exposure induced death (REID) from cancer, and dose limits are used to prevent cardiovascular and central nervous system diseases. For radiation protection, meeting the cancer PEL is currently the design driver for galactic cosmic ray and solar particle event shielding, mission duration, and crew certification (e.g., 1-year ISS missions). The risk of cancer development is the largest known long-term health consequence following radiation exposure, and current estimates for long-term health risks due to cardiovascular diseases are approximately 30% to 40% of the cancer risk for exposures above an estimated threshold (Deep Space one-year and Mars missions). Large uncertainties currently exist in estimating the health risks of space radiation exposure. Improved understanding through radiobiology and physics research allows increased accuracy in risk estimation and is essential for ensuring astronaut health as well as for controlling mission costs, optimization of mission operations, vehicle design, and countermeasure assessment. We will review the Space Radiation Program Element's research strategies to increase accuracy in risk models and to inform development and validation of the permissible exposure limits.

  18. Modified wind chill temperatures determined by a whole body thermoregulation model and human-based facial convective coefficients.

    PubMed

    Shabat, Yael Ben; Shitzer, Avraham; Fiala, Dusan

    2014-08-01

    Wind chill equivalent temperatures (WCETs) were estimated by a modified Fiala whole body thermoregulation model of a clothed person. Facial convective heat exchange coefficients applied in the computations, concurrently with environmental radiation effects, were taken from a recently derived human-based correlation. Apart from these, the analysis followed the methodology used in the derivation of the currently used wind chill charts. WCET values are summarized by a closed-form equation given in the paper. Results indicate consistently lower estimated facial skin temperatures, and consequently higher WCETs, than those listed in the literature and used by the North American weather services. Calculated dynamic facial skin temperatures were additionally applied in estimating the probabilities of frostbite risk. Predicted weather combinations for "practically no risk of frostbite for most people," meaning less than 5% risk at wind speeds above 40 km h-1, were shown to occur at air temperatures above -10 °C, compared with the currently published air temperature of -15 °C. At lower air temperatures, the presently calculated weather combination at which the risk shifts to frostbite within less than 2 min (40 km h-1 at -35 °C) is less conservative than the published one (60 km h-1 at -40 °C). The present results introduce a fundamentally improved scientific basis for estimating facial skin temperatures, wind chill temperatures, and frostbite risk probabilities over those currently practiced.

  19. Modified wind chill temperatures determined by a whole body thermoregulation model and human-based facial convective coefficients

    NASA Astrophysics Data System (ADS)

    Shabat, Yael Ben; Shitzer, Avraham; Fiala, Dusan

    2014-08-01

    Wind chill equivalent temperatures (WCETs) were estimated by a modified Fiala whole body thermoregulation model of a clothed person. Facial convective heat exchange coefficients applied in the computations, concurrently with environmental radiation effects, were taken from a recently derived human-based correlation. Apart from these, the analysis followed the methodology used in the derivation of the currently used wind chill charts. WCET values are summarized by a closed-form equation given in the paper. Results indicate consistently lower estimated facial skin temperatures, and consequently higher WCETs, than those listed in the literature and used by the North American weather services. Calculated dynamic facial skin temperatures were additionally applied in estimating the probabilities of frostbite risk. Predicted weather combinations for "practically no risk of frostbite for most people," meaning less than 5% risk at wind speeds above 40 km h-1, were shown to occur at air temperatures above -10 °C, compared with the currently published air temperature of -15 °C. At lower air temperatures, the presently calculated weather combination at which the risk shifts to frostbite within less than 2 min (40 km h-1 at -35 °C) is less conservative than the published one (60 km h-1 at -40 °C). The present results introduce a fundamentally improved scientific basis for estimating facial skin temperatures, wind chill temperatures, and frostbite risk probabilities over those currently practiced.

  20. Cost of improving Access to Psychological Therapies (IAPT) programme: an analysis of cost of session, treatment and recovery in selected Primary Care Trusts in the East of England region.

    PubMed

    Radhakrishnan, Muralikrishnan; Hammond, Geoffrey; Jones, Peter B; Watson, Alison; McMillan-Shields, Fiona; Lafortune, Louise

    2013-01-01

    Recent literature on Improving Access to Psychological Therapies (IAPT) has reported on improvements in clinical outcomes, changes in employment status and the concept of recovery attributable to IAPT treatment, but not on the costs of the programme. This article reports the costs associated with a single session, a completed course of treatment and recovery for four treatment courses (i.e., remaining in low or high intensity treatment, stepping up or down) in IAPT services in 5 East of England region Primary Care Trusts. Costs were estimated using treatment activity data and gross financial information, along with assumptions about how these financial data could be broken down. The estimated average cost of a high intensity session was £177 and the average cost of a low intensity session was £99. The average cost of treatment was £493 (low intensity), £1416 (high intensity), £699 (stepped down), £1514 (stepped up) and £877 (All). The cost per recovered patient was £1043 (low intensity), £2895 (high intensity), £1653 (stepped down), £2914 (stepped up) and £1766 (All). Sensitivity analysis revealed that the costs are sensitive to cost ratio assumptions, indicating that inaccurate ratios are likely to influence overall estimates. Results indicate that the cost per session exceeds previously reported estimates, but the cost of treatment is only marginally higher. The current cost estimates are supportive of the originally proposed IAPT model on cost-benefit grounds. The study also provides a framework to estimate costs using financial data, especially when programmes have block contract arrangements. Replication and additional analyses, along with evidence-based discussion regarding alternative, cost-effective methods of intervention, are recommended.

  1. THYROID CANCER STUDY AMONG UKRAINIAN CHILDREN EXPOSED TO RADIATION AFTER THE CHORNOBYL ACCIDENT: IMPROVED ESTIMATES OF THE THYROID DOSES TO THE COHORT MEMBERS

    PubMed Central

    Likhtarov, Ilya; Kovgan, Lina; Masiuk, Sergii; Talerko, Mykola; Chepurny, Mykola; Ivanova, Olga; Gerasymenko, Valentina; Boyko, Zulfira; Voillequé, Paul; Drozdovitch, Vladimir; Bouville, André

    2013-01-01

    In collaboration with the Ukrainian Research Center for Radiation Medicine, the U.S. National Cancer Institute initiated a cohort study of children and adolescents exposed to Chornobyl fallout in Ukraine to better understand the long-term health effects of exposure to radioactive iodines. All 13,204 cohort members were subjected to at least one direct thyroid measurement between 30 April and 30 June 1986 and resided at the time of the accident in the northern part of Kyiv, Zhytomyr, or Chernihiv Oblasts, which were the most contaminated territories of Ukraine as a result of radioactive fallout from the Chornobyl accident. Thyroid doses for the cohort members, which had been estimated following the first round of interviews, were re-evaluated following the second round of interviews. The revised thyroid doses range from 0.35 mGy to 42 Gy, with 95 percent of the doses between 1 mGy and 4.2 Gy, an arithmetic mean of 0.65 Gy, and a geometric mean of 0.19 Gy. These means are 70% of the previous estimates, mainly because of the use of country-specific thyroid masses. Many of the individual thyroid dose estimates show substantial differences because of the use of an improved questionnaire for the second round of interviews. Limitations of the current set of thyroid dose estimates are discussed. For the epidemiologic study, the most notable improvement is a revised assessment of the uncertainties, as shared and unshared uncertainties in the parameter values were considered in the calculation of the 1,000 stochastic estimates of thyroid dose for each cohort member. This procedure makes it possible to perform a more realistic risk analysis. PMID:25208014
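
    A minimal sketch of the shared/unshared Monte Carlo idea: errors common to all cohort members (e.g., a model parameter) are drawn once per realization, while person-specific errors are drawn independently. The purely multiplicative lognormal structure and the GSD values are placeholders, not the study's dosimetry model.

    ```python
    import numpy as np

    def stochastic_doses(central_doses, gsd_shared=1.3, gsd_unshared=1.5,
                         n_real=1000, seed=0):
        """Generate stochastic dose realizations with shared and unshared
        lognormal errors. Returns an (n_real, n_members) array: one row per
        realization, one column per cohort member."""
        rng = np.random.default_rng(seed)
        n = len(central_doses)
        shared = rng.lognormal(0.0, np.log(gsd_shared), size=(n_real, 1))
        unshared = rng.lognormal(0.0, np.log(gsd_unshared), size=(n_real, n))
        return central_doses * shared * unshared

    doses = stochastic_doses(np.array([0.05, 0.2, 1.1]))  # Gy, illustrative
    ```

    Separating the two error types matters for risk analysis: shared errors induce correlation across the cohort within each realization, which a single per-person error term would miss.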

  2. Effect of roughness formulation on the performance of a coupled wave, hydrodynamic, and sediment transport model

    USGS Publications Warehouse

    Ganju, Neil K.; Sherwood, Christopher R.

    2010-01-01

    A variety of algorithms are available for parameterizing the hydrodynamic bottom roughness associated with grain size, saltation, bedforms, and wave–current interaction in coastal ocean models. These parameterizations give rise to spatially and temporally variable bottom-drag coefficients that ostensibly provide better representations of physical processes than uniform and constant coefficients. However, few studies have been performed to determine whether improved representation of these variable bottom roughness components translates into measurable improvements in model skill. We test the hypothesis that improved representation of variable bottom roughness improves performance with respect to near-bed circulation, bottom stresses, or turbulence dissipation. The inner shelf south of Martha’s Vineyard, Massachusetts, is the site of sorted grain-size features which exhibit sharp alongshore variations in grain size and ripple geometry over gentle bathymetric relief; this area provides a suitable testing ground for roughness parameterizations. We first establish the skill of a nested regional model for currents, waves, stresses, and turbulent quantities using a uniform and constant roughness; we then gauge model skill with various parameterization of roughness, which account for the influence of the wave-boundary layer, grain size, saltation, and rippled bedforms. We find that commonly used representations of ripple-induced roughness, when combined with a wave–current interaction routine, do not significantly improve skill for circulation, and significantly decrease skill with respect to stresses and turbulence dissipation. Ripple orientation with respect to dominant currents and ripple shape may be responsible for complicating a straightforward estimate of the roughness contribution from ripples. In addition, sediment-induced stratification may be responsible for lower stresses than predicted by the wave–current interaction model.
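
    The common thread of these parameterizations is that each roughness component contributes to a hydrodynamic roughness length z0, which then sets the quadratic-drag coefficient through the law of the wall. A sketch of that last step (the z_r and z0 values are illustrative):

    ```python
    import numpy as np

    def drag_coefficient(z_r, z0, kappa=0.41):
        """Quadratic-drag coefficient implied by a roughness length z0 at
        reference height z_r above the bed (law of the wall):
            Cd = (kappa / ln(z_r / z0))**2
        Spatially variable z0 (grain, saltation, and ripple contributions,
        combined per the chosen parameterization) yields a variable Cd."""
        return (kappa / np.log(z_r / z0)) ** 2

    # Ripples can raise z0 by orders of magnitude over grain roughness alone
    print(drag_coefficient(z_r=1.0, z0=1e-5), drag_coefficient(z_r=1.0, z0=1e-2))
    ```

    The sensitivity of Cd to z0 through the logarithm is exactly why a mis-specified ripple roughness can degrade stress and dissipation skill even when circulation skill barely changes.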

  3. Injecting drug users in Scotland, 2006: Listing, number, demography, and opiate-related death-rates.

    PubMed

    King, Ruth; Bird, Sheila M; Overstall, Antony; Hay, Gordon; Hutchinson, Sharon J

    2013-06-01

    Using Bayesian capture-recapture analysis, we estimated the number of current injecting drug users (IDUs) in Scotland in 2006 from the cross-counts of 5670 IDUs listed on four data-sources: social enquiry reports (901 IDUs listed), hospital records (953), drug treatment agencies (3504), and recent Hepatitis C virus (HCV) diagnoses (827 listed as IDU-risk). Further, we accessed exact numbers of opiate-related drugs-related deaths (DRDs) in 2006 and 2007 to improve estimation of Scotland's DRD rates per 100 current IDUs. Using all four data-sources, and model-averaging of standard hierarchical log-linear models to allow for pairwise interactions between data-sources and/or demographic classifications, Scotland had an estimated 31700 IDUs in 2006 (95% credible interval: 24900-38700); but 25000 IDUs (95% CI: 20700-35000) by excluding recent HCV diagnoses whose IDU-risk can refer to past injecting. Only in the younger age-group (15-34 years) were Scotland's opiate-related DRD rates significantly lower for females than males. Older males' opiate-related DRD rate was 1.9 (1.24-2.40) per 100 current IDUs without or 1.3 (0.94-1.64) with inclusion of recent HCV diagnoses. If, indeed, Scotland had only 25000 current IDUs in 2006, with only 8200 of them aged 35+ years, the opiate-related DRD rate is higher among this older age group than has been appreciated hitherto. There is counter-balancing good news for the public health: the hitherto sharp increase in older current IDUs had stalled by 2006.
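
    The study fits Bayesian hierarchical log-linear models to four overlapping lists; the simplest two-list special case (Chapman's bias-corrected Lincoln-Petersen estimator) illustrates the capture-recapture idea. The overlap count below is hypothetical, chosen only to show the arithmetic.

    ```python
    def chapman_estimate(n1, n2, m):
        """Two-source capture-recapture estimate of total population size.

        n1, n2 : numbers listed on each of two data sources
        m      : number appearing on both sources
        """
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    # Hypothetical overlap between the treatment-agency and hospital lists
    print(chapman_estimate(n1=3504, n2=953, m=120))
    ```

    The log-linear machinery generalizes this to four lists and allows source-source and source-demography interactions, which is essential when, as here, the lists are clearly not independent.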

  4. Assessing Probabilistic Risk Assessment Approaches for Insect Biological Control Introductions.

    PubMed

    Kaufman, Leyla V; Wright, Mark G

    2017-07-07

    The introduction of biological control agents to new environments requires host specificity tests to estimate potential non-target impacts of a prospective agent. Currently, the approach is conservative, and is based on physiological host ranges determined under captive rearing conditions, without consideration for ecological factors that may influence realized host range. We use historical data and current field data from introduced parasitoids that attack an endemic Lepidoptera species in Hawaii to validate a probabilistic risk assessment (PRA) procedure for non-target impacts. We use data on known host range and habitat use in the place of origin of the parasitoids to determine whether contemporary levels of non-target parasitism could have been predicted using PRA. Our results show that reasonable predictions of potential non-target impacts may be made if comprehensive data are available from places of origin of biological control agents, but scant data produce poor predictions. Using apparent mortality data rather than marginal attack rate estimates in PRA resulted in over-estimates of predicted non-target impact. Incorporating ecological data into PRA models improved the predictive power of the risk assessments.

  5. Parameter estimation of anisotropic Manning's n coefficient for advanced circulation (ADCIRC) modeling of estuarine river currents (lower St. Johns River)

    NASA Astrophysics Data System (ADS)

    Demissie, Henok K.; Bacopoulos, Peter

    2017-05-01

    A rich dataset of time- and space-varying velocity measurements for a macrotidal estuary was used in the development of a vector-based formulation of bottom roughness in the Advanced Circulation (ADCIRC) model. The updates to the parallel code of ADCIRC to include a directionally based drag coefficient are briefly discussed in the paper, followed by an application of the data assimilation (nudging analysis) to the lower St. Johns River (northeastern Florida) for parameter estimation of an anisotropic Manning's n coefficient. The method produced converging estimates of Manning's n values for ebb (0.0290) and flood (0.0219) when initialized with a uniform and isotropic setting of 0.0200. Modeled currents, water levels and flows were improved at observation locations where data were assimilated as well as at monitoring locations where data were not assimilated, such that the method increases model skill locally and non-locally with regard to the data locations. The methodology is readily transferable to other circulation/estuary models, given a pre-developed quality mesh/grid and adequate data available for assimilation.
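
    A minimal sketch of how an anisotropic Manning's n can enter the bottom drag. The ebb/flood switch on a projected axis is a simplification of the paper's vector-based formulation, and the ebb-axis angle is assumed; the two n values are the converged estimates quoted above.

    ```python
    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def directional_drag(u, v, n_ebb, n_flood, depth, ebb_dir_deg=110.0):
        """Quadratic-drag coefficient from a direction-dependent Manning's n:
        choose n by whether the depth-averaged current (u, v) projects onto
        the ebb or flood direction, then Cd = g * n**2 / H**(1/3)."""
        ebb = np.deg2rad(ebb_dir_deg)
        along = u * np.cos(ebb) + v * np.sin(ebb)  # projection on the ebb axis
        n = n_ebb if along >= 0 else n_flood
        return G * n**2 / depth ** (1.0 / 3.0)

    print(directional_drag(0.6, -0.2, n_ebb=0.0290, n_flood=0.0219, depth=8.0))
    ```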

  6. Assessing Probabilistic Risk Assessment Approaches for Insect Biological Control Introductions

    PubMed Central

    Kaufman, Leyla V.; Wright, Mark G.

    2017-01-01

    The introduction of biological control agents to new environments requires host specificity tests to estimate potential non-target impacts of a prospective agent. Currently, the approach is conservative, and is based on physiological host ranges determined under captive rearing conditions, without consideration for ecological factors that may influence realized host range. We use historical data and current field data from introduced parasitoids that attack an endemic Lepidoptera species in Hawaii to validate a probabilistic risk assessment (PRA) procedure for non-target impacts. We use data on known host range and habitat use in the place of origin of the parasitoids to determine whether contemporary levels of non-target parasitism could have been predicted using PRA. Our results show that reasonable predictions of potential non-target impacts may be made if comprehensive data are available from places of origin of biological control agents, but scant data produce poor predictions. Using apparent mortality data rather than marginal attack rate estimates in PRA resulted in over-estimates of predicted non-target impact. Incorporating ecological data into PRA models improved the predictive power of the risk assessments. PMID:28686180

  7. Learning to select useful landmarks.

    PubMed

    Greiner, R; Isukapalli, R

    1996-01-01

    To navigate effectively, an autonomous agent must be able to quickly and accurately determine its current location. Given an initial estimate of its position (perhaps based on dead reckoning) and an image taken of a known environment, our agent first attempts to locate a set of landmarks (real-world objects at known locations), then uses their angular separation to obtain an improved estimate of its current position. Unfortunately, some landmarks may not be visible, or worse, may be confused with other landmarks, resulting both in time wasted searching for undetected landmarks and in further errors in the agent's estimate of its position. To address these problems, we propose a method that uses previous experiences to learn a selection function that, given the set of landmarks that might be visible, returns the subset that can be used to reliably provide an accurate registration of the agent's position. We use statistical techniques to prove that the learned selection function is, with high probability, effectively at a local optimum in the space of such functions. This paper also presents empirical evidence, using real-world data, demonstrating the effectiveness of our approach.

  8. YBCO microbolometer operating below Tc - A modelization based on critical current-temperature dependence

    NASA Astrophysics Data System (ADS)

    Robbes, D.; Langlois, P.; Dolabdjian, C.; Bloyet, D.; Hamet, J. F.; Murray, H.

    1993-03-01

    Using careful measurements of the I-V curve of a YBCO thin-film microbridge under light irradiation at 780 nm and temperatures close to 77 K, it is shown that the critical current versus temperature dependence is a good thermometer for estimating bolometric effects in the film. A novel dynamic voltage bias is introduced which directly gives the device current responsivity and greatly reduces the risk of thermal runaway. Detectivity is very low, but it is predicted that a noise equivalent temperature of less than 10^-7 K/√Hz would be achievable over a wide temperature range (10-80 K), which is an improvement over thermometry at the resistive transition.

  9. Plasma characteristics in the discharge region of a 20 A emission current hollow cathode

    NASA Astrophysics Data System (ADS)

    Mingming, SUN; Tianping, ZHANG; Xiaodong, WEN; Weilong, GUO; Jiayao, SONG

    2018-02-01

    Numerical calculation and fluid simulation methods were used to obtain the plasma characteristics in the discharge region of the LIPS-300 ion thruster’s 20 A emission current hollow cathode and to verify the structural design of the emitter. The results of the two methods indicated that the highest plasma density and electron temperature, which improved significantly in the orifice region, were located in the discharge region of the hollow cathode. The plasma density was of order 10^21 m^-3 in the emitter and orifice regions, as obtained by numerical calculation, but decreased exponentially in the plume region with distance from the orifice exit. Meanwhile, compared to the emitter region, the electron temperature and current improved by about 36% in the orifice region. The hollow cathode performance test results were in good agreement with the numerical calculation results, which proved that the structural design of the emitter and the orifice meets the requirements of a 20 A emission current. The numerical calculation method can be used to estimate plasma characteristics in the preliminary design stage of hollow cathodes.

  10. Assessing the Importance of Prior Biospheric Fluxes on Inverse Model Estimates of CO2

    NASA Astrophysics Data System (ADS)

    Philip, S.; Johnson, M. S.; Potter, C. S.; Genovese, V. B.

    2017-12-01

    Atmospheric mixing ratios of carbon dioxide (CO2) are largely controlled by anthropogenic emissions and biospheric sources/sinks. The processes controlling terrestrial biosphere-atmosphere carbon exchange are currently not fully understood, resulting in models having significant differences in the quantification of biospheric CO2 fluxes. Currently, atmospheric chemical transport models (CTM) and global climate models (GCM) use multiple different biospheric CO2 flux models, resulting in large differences in simulating the global carbon cycle. The Orbiting Carbon Observatory 2 (OCO-2) satellite mission was designed to allow for improved understanding of the processes involved in the exchange of carbon between terrestrial ecosystems and the atmosphere, therefore allowing for more accurate assessment of the seasonal/inter-annual variability of CO2. OCO-2 provides much-needed CO2 observations in data-limited regions, allowing for the evaluation of model simulations of greenhouse gases (GHG) and facilitating global/regional estimates of "top-down" CO2 fluxes. We conduct a 4-D Variational (4D-Var) data assimilation with the GEOS-Chem (Goddard Earth Observing System-Chemistry) CTM using 1) OCO-2 land nadir and land glint retrievals and 2) global in situ surface flask observations to constrain biospheric CO2 fluxes. We apply different state-of-the-science year-specific CO2 flux models (e.g., NASA-CASA (NASA-Carnegie Ames Stanford Approach), CASA-GFED (Global Fire Emissions Database), Simple Biosphere Model version 4 (SiB-4), and LPJ (Lund-Potsdam-Jena)) to assess the impact of "a priori" flux predictions on "a posteriori" estimates. We will present the "top-down" CO2 flux estimates for the year 2015 using OCO-2 and in situ observations, and a complete indirect evaluation of the a priori and a posteriori flux estimates using independent in situ observations. We will also present our assessment of the variability of "top-down" CO2 flux estimates when using different biospheric CO2 flux models. This work will improve our understanding of the global carbon cycle, specifically, how OCO-2 observations can be used to constrain biospheric CO2 flux model estimates.
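
    For reference, the quantity a 4D-Var assimilation minimizes can be sketched as follows; the background term is exactly where the choice of "a priori" biospheric flux model enters and can pull the posterior around in poorly observed regions. Matrix names follow the usual data-assimilation convention and are not specific to this study's setup.

    ```python
    import numpy as np

    def fourdvar_cost(x, xb, B_inv, hx, y, R_inv):
        """J(x) = 1/2 (x-xb)' B^-1 (x-xb) + 1/2 (H(x)-y)' R^-1 (H(x)-y)

        x  : flux scaling factors being optimized
        xb : prior fluxes/scalings from the chosen biosphere model
        hx : model-simulated CO2/XCO2 at the observation times and locations
        y  : OCO-2 retrievals or flask observations
        """
        dx = x - xb
        dy = hx - y
        return 0.5 * float(dx @ B_inv @ dx) + 0.5 * float(dy @ R_inv @ dy)

    # Toy example: three flux scalings, two observations (values illustrative)
    x, xb = np.array([1.1, 0.9, 1.0]), np.ones(3)
    hx, y = np.array([402.1, 399.8]), np.array([401.7, 400.2])
    print(fourdvar_cost(x, xb, np.eye(3) * 25.0, hx, y, np.eye(2) * 4.0))
    ```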

  11. Systems engineering and integration: Cost estimation and benefits analysis

    NASA Technical Reports Server (NTRS)

    Dean, ED; Fridge, Ernie; Hamaker, Joe

    1990-01-01

    Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analyses process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
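
    A minimal sketch of the weight-driven parametric style described here: fit a cost-estimating relationship of the form cost = a * W^b by least squares in log space. The data points are invented purely for illustration.

    ```python
    import numpy as np

    def fit_cer(weights_kg, costs_musd):
        """Fit cost = a * weight**b by ordinary least squares on log-log data,
        the classic form of a weight-based cost-estimating relationship."""
        b, log_a = np.polyfit(np.log(weights_kg), np.log(costs_musd), 1)
        return np.exp(log_a), b

    a, b = fit_cer([50, 120, 300, 800], [14, 26, 52, 110])
    print(f"cost ~= {a:.1f} * W^{b:.2f}  (M$, W in kg)")
    ```

    The passage's point is that such weight-only CERs are blind to the very innovations (new materials, CAD/CAM, self-checkout) the trade studies need to price, and that any estimate from them should carry an explicit confidence statement.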

  12. A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Foreman, Veronica; Le Moigne, Jacqueline; de Weck, Oliver

    2016-01-01

    Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the shortcomings of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one-at-a-time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems level considerations that have not yet been examined. This paper constitutes a survey of the current state-of-the-art in cost estimating techniques with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority weaknesses within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities must exist; this paper offers insights critical to the future development and implementation of DSM cost estimating tools.

  13. A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver

    2016-01-01

    Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the limitations of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one-at-a-time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems level considerations that have not yet been examined. This paper constitutes a survey of the current state-of-the-art in cost estimating techniques with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority shortcomings within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities must exist; this paper offers insights critical to the future development and implementation of DSM cost estimating tools.

  14. Improved gap size estimation for scaffolding algorithms.

    PubMed

    Sahlin, Kristoffer; Street, Nathaniel; Lundeberg, Joakim; Arvestad, Lars

    2012-09-01

    One of the important steps of genome assembly is scaffolding, in which contigs are linked using information from read-pairs. Scaffolding provides estimates about the order, relative orientation and distance between contigs. We have found that contig distance estimates are generally strongly biased and based on false assumptions. Since erroneous distance estimates can mislead subsequent analysis, it is important to provide unbiased estimation of contig distance. In this article, we show that state-of-the-art programs for scaffolding are using an incorrect model of gap size estimation. We discuss why current maximum likelihood estimators are biased and describe the different cases of bias we are facing. Furthermore, we provide a model for the distribution of reads that span a gap and derive the maximum likelihood equation for the gap length. We motivate why this estimate is sound and show empirically that it outperforms gap estimators in popular scaffolding programs. Our results have consequences for scaffolding software, structural variation detection, and library insert-size estimation as commonly performed by read aligners. A reference implementation is provided at https://github.com/SciLifeLab/gapest. Supplementary data are available at Bioinformatics online.
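
    The source of the bias can be seen in a toy version of the estimator: read pairs are only observed spanning a gap when their insert size exceeds the gap, so a naive "mean insert minus mean span" estimate is systematically low, while a likelihood that conditions on the truncation is not. The sketch below assumes a plain normal insert-size library and ignores anchoring lengths and contig edges, which the published model handles.

```python
# Toy maximum-likelihood gap estimator. Pairs spanning a gap of size g are
# only observed when insert > g, so the naive estimate is biased; an ML
# estimate that conditions on the truncation corrects this.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
mu, sigma, g_true = 500.0, 50.0, 480.0

inserts = rng.normal(mu, sigma, 20_000)
spans = inserts[inserts > g_true] - g_true   # observed contig-to-contig spans

naive = mu - spans.mean()                    # biased low: ignores truncation

def neg_loglik(g):
    z = (spans + g - mu) / sigma             # implied insert sizes
    # density of insert given insert > g (the -log(sigma) constant is dropped)
    return -(norm.logpdf(z).sum() - len(spans) * norm.logsf((g - mu) / sigma))

g_ml = minimize_scalar(neg_loglik, bounds=(1.0, 1000.0), method="bounded").x
print(f"naive: {naive:.0f}  ML: {g_ml:.0f}  true: {g_true:.0f}")
```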

  15. dPIRPLE: a joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images

    NASA Astrophysics Data System (ADS)

    Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.

    2014-09-01

    Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.

  16. Profile modification computations for LHCD experiments on PBX-M using the TSC/LSC model

    NASA Astrophysics Data System (ADS)

    Kaita, R.; Ignat, D. W.; Jardin, S. C.; Okabayashi, M.; Sun, Y. C.

    1996-02-01

    The TSC-LSC computational model of the dynamics of lower hybrid current drive has been exercised extensively in comparison with data from a Princeton Beta Experiment-Modification (PBX-M) discharge where the measured q(0) attained values slightly above unity. Several significant, but plausible, assumptions had to be introduced to keep the computation from behaving pathologically over time, producing singular profiles of plasma current density and q. Addition of a heuristic current diffusion estimate, or more exactly, a smoothing of the rf-driven current with a diffusion-like equation, greatly improved the behavior of the computation, and brought theory and measurement into reasonable agreement. The model was then extended to longer pulse lengths and higher powers to investigate performance to be expected in future PBX-M current profile modification experiments.
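
    The "smoothing of the rf-driven current with a diffusion-like equation" mentioned above can be sketched in a few lines: an explicit finite-difference relaxation of a sharply peaked current-density profile. The grid, diffusivity, and profile here are illustrative values, not PBX-M parameters.

```python
# Heuristic current-diffusion sketch: relax a spiky rf-driven current
# density profile j(r) with a few explicit diffusion steps, dj/dt = D d2j/dr2.
import numpy as np

nr, dr, D, dt, nsteps = 101, 0.01, 1e-3, 0.01, 200   # hypothetical values
r = np.linspace(0.0, 1.0, nr)
j = np.exp(-((r - 0.3) / 0.02) ** 2)                 # sharply peaked j_rf(r)

for _ in range(nsteps):                              # D*dt/dr^2 = 0.1, stable
    lap = np.zeros_like(j)
    lap[1:-1] = (j[2:] - 2 * j[1:-1] + j[:-2]) / dr**2
    j += D * dt * lap
    j[0], j[-1] = j[1], j[-2]                        # zero-flux boundaries

print("peak current density before/after smoothing: 1.0,", round(float(j.max()), 3))
```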

  17. Assessment of Current Global and Regional Mean Sea Level Estimates Based on the TOPEX/Poseidon Jason-1 and 2 Climate Data Record

    NASA Technical Reports Server (NTRS)

    Beckley, B. D.; Lemoine, F. G.; Zelensky, N. P.; Yang, X.; Holmes, S.; Ray, R. D.; Mitchum, G. T.; Desai, S.; Brown, S.; Haines, B.

    2011-01-01

    Recent developments in Precise Orbit Determination (POD), due in particular to revisions to the terrestrial reference frame realization and to time variable gravity (TVG), continue to provide improvements to the accuracy and stability of the POD, directly affecting mean sea level (MSL) estimates. Long-term credible MSL estimates require the development and continued maintenance of a stable reference frame, along with vigilant monitoring of the performance of the independent tracking systems used to calculate the orbits for altimeter spacecraft. The stringent MSL accuracy requirements of a few tenths of a mm/yr are particularly essential for mass budget closure analysis over the relatively short time period of Jason-1 and -2, GRACE, and Argo coincident measurements. In an effort to adhere to cross-mission consistency, we have generated a full time series of experimental orbits (GSFC std1110) for TOPEX/Poseidon (TP), Jason-1, and OSTM based on an improved terrestrial reference frame (TRF) realization (ITRF2008), a revised static gravity field (GGM03s), and a time variable gravity field (Eigen6s). In this presentation we assess the impact of the revised precision orbits on inter-mission bias estimates, and the resultant global and regional MSL trends. Tide gauge verification results are shown to assess the current stability of the Jason-2 sea surface height time series, which suggests a possible discontinuity initiated in early 2010. Although the Jason-2 time series is relatively short (approximately 3 years), a thorough review of the entire suite of geophysical and environmental range corrections is warranted and is underway to maintain the fidelity of the record.

  18. Indirect Estimation of Radioactivity in Containerized Cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarman, Kenneth D.; Scherrer, Chad; Smith, Eric L.

    Detecting illicit nuclear and radiological material in containerized cargo challenges the state of the art in detection systems. Current systems are being evaluated and new systems envisioned to address the need for high probability of detection together with the extremely low nuisance and false alarm rates necessary both to thwart potential threats and to maintain the flow of commerce, given the enormous volume of commodities imported in shipping containers. Maintaining flow of commerce also means that primary inspection must be rapid, requiring relatively indirect measurements of cargo from outside the containers. With increasing information content in such indirect measurements, it is natural to ask how the information might be combined to improve detection. Toward this end, we present an approach to estimating isotopic activity of naturally occurring radioactive material in cargo grouped by commodity type, combining container manifest data with radiography and gamma spectroscopy aligned to location along the container. The heart of this approach is our statistical model of gamma counts within peak regions of interest, which captures the effects of background suppression, counting noise, convolution of neighboring cargo contributions, and down-scattered photons to provide physically constrained estimates of counts due to decay of specific radioisotopes in cargo alone. Coupled to that model, we use a mechanistic model of self-attenuated radiation flux to estimate the isotopic activity within cargo, segmented by location within each container, that produces those counts. We demonstrate our approach by applying it to a set of measurements taken at the Port of Seattle in 2006. This approach to synthesizing disparate available data streams and extraction of cargo characteristics holds the potential to improve primary inspection using current detection capabilities and to enable simulation-based evaluation of new candidate detection systems.
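
    The "physically constrained" estimation step can be caricatured as non-negative unmixing: net counts in a few peak regions of interest are attributed to isotopes through a response template matrix, with activities constrained to be non-negative. The template matrix and activities below are synthetic, and simple Poisson noise stands in for the paper's fuller count model.

```python
# Sketch of constrained count attribution: given net counts in peak regions
# of interest (ROIs) and per-isotope ROI response templates, estimate
# non-negative isotope activities. Templates are made-up stand-ins.
import numpy as np
from scipy.optimize import nnls

# rows = ROIs; columns = K-40, Ra-226 chain, Th-232 chain (hypothetical)
templates = np.array([[0.9, 0.05, 0.05],
                      [0.1, 0.7,  0.2 ],
                      [0.0, 0.15, 0.85]])
activity_true = np.array([30.0, 5.0, 8.0])          # Bq (synthetic)

rng = np.random.default_rng(3)
counts = rng.poisson(templates @ activity_true)     # Poisson counting noise

activity_hat, _ = nnls(templates, counts.astype(float))
print("estimated activities (Bq):", np.round(activity_hat, 1))
```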

  19. Valuing the commons: An international study on the recreational benefits of the Baltic Sea.

    PubMed

    Czajkowski, Mikołaj; Ahtiainen, Heini; Artell, Janne; Budziński, Wiktor; Hasler, Berit; Hasselström, Linus; Meyerhoff, Jürgen; Nõmmann, Tea; Semeniene, Daiva; Söderqvist, Tore; Tuhkanen, Heidi; Lankia, Tuija; Vanags, Alf; Zandersen, Marianne; Żylicz, Tomasz; Hanley, Nick

    2015-06-01

    The Baltic Sea provides benefits to all of the nine nations along its coastline, with some 85 million people living within the catchment area. Achieving improvements in water quality requires international cooperation. The likelihood of effective cooperation is known to depend on the distribution across countries of the benefits and costs of actions needed to improve water quality. In this paper, we estimate the benefits associated with recreational use of the Baltic Sea in current environmental conditions using a travel cost approach, based on data from a large, standardized survey of households in each of the 9 Baltic Sea states. Both the probability of engaging in recreation (participation) and the number of visits people make are modeled. A large variation in the number of trips and the extent of participation is found, along with large differences in current annual economic benefits from Baltic Sea recreation. The total annual recreation benefits are close to 15 billion EUR. Under a water quality improvement scenario, the proportional increases in benefits range from 7 to 18% of the current annual benefits across countries. Depending on how the costs of actions are distributed, this could imply difficulties in achieving more international cooperation to achieve such improvements. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Identifying Optimal Temporal Scale for the Correlation of AOD and Ground Measurements of PM2.5 to Improve the Model Performance in a Real-time Air Quality Estimation System

    NASA Technical Reports Server (NTRS)

    Li, Hui; Faruque, Fazlay; Williams, Worth; Al-Hamdan, Mohammad; Luvall, Jeffrey C.; Crosson, William; Rickman, Douglas; Limaye, Ashutosh

    2009-01-01

    Aerosol optical depth (AOD), an indirect estimate of particulate matter using satellite observations, has shown great promise in improving estimates of PM2.5 air quality surfaces. Currently, few studies have been conducted to explore the optimal way to apply AOD data to improve the model accuracy of PM2.5 surface estimation in a real-time air quality system. We believe that two major aspects may be worthy of consideration in that area: 1) the approach to integrate satellite measurements with ground measurements in the pollution estimation, and 2) identification of an optimal temporal scale to calculate the correlation of AOD and ground measurements. This paper focuses on the second aspect: identifying the optimal temporal scale for correlating AOD with PM2.5. The following five temporal scales were chosen to evaluate their impact on model performance: 1) within the last 3 days, 2) within the last 10 days, 3) within the last 30 days, 4) within the last 90 days, and 5) the time period with the highest correlation in a year. The model performance is evaluated for its accuracy, bias, and errors based on the following selected statistics: the Mean Bias, the Normalized Mean Bias, the Root Mean Square Error, the Normalized Mean Error, and the Index of Agreement. This research shows that the model with the temporal scale of within the last 30 days displays the best model performance in this study area using 2004 and 2005 data sets.
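
    For reference, the five evaluation statistics named above can be computed as follows; the Index of Agreement is taken in its common Willmott form, which is assumed here to match the paper's usage.

```python
# The five model-performance statistics named above, in one place. Inputs
# are paired arrays of predicted and observed PM2.5 concentrations.
import numpy as np

def performance(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    d = pred - obs
    mb = d.mean()                                   # Mean Bias
    nmb = d.sum() / obs.sum()                       # Normalized Mean Bias
    rmse = np.sqrt((d ** 2).mean())                 # Root Mean Square Error
    nme = np.abs(d).sum() / obs.sum()               # Normalized Mean Error
    om = obs.mean()                                 # Index of Agreement (Willmott)
    ioa = 1 - (d ** 2).sum() / ((np.abs(pred - om) + np.abs(obs - om)) ** 2).sum()
    return {"MB": mb, "NMB": nmb, "RMSE": rmse, "NME": nme, "IOA": ioa}

print(performance([12.0, 8.5, 15.2], [10.0, 9.0, 14.0]))
```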

  1. Accurate Realization of GPS Vertical Global Reference Frame

    NASA Technical Reports Server (NTRS)

    Elosegui, Pedro

    2004-01-01

    The few-millimeter-per-year accuracy of radial global velocity estimates with the Global Positioning System (GPS) is at least an order of magnitude poorer than the accuracy of horizontal global motions. An improvement in the accuracy of radial global velocities would have a very positive impact on a number of geophysical studies of current general interest such as global sea-level and climate change, coastal hazards, glacial isostatic adjustment, atmospheric and oceanic loading, glaciology and ice mass variability, tectonic deformation and volcanic inflation, and geoid variability. The goal of this project is to improve our current understanding of GPS error sources associated with estimates of radial velocities at global scales. GPS error sources relevant to this project can be classified in two broad categories: (1) those related to the analysis of the GPS phase observable, and (2) those related to the combination of the positions and velocities of a set of globally distributed stations as determined from the analysis of GPS data. Important aspects in the first category include the effect on vertical rate estimates due to standard analysis choices, such as orbit modeling, network geometry, ambiguity resolution, as well as errors in models (or simply the lack of models) for clocks, multipath, phase-center variations, atmosphere, and solid-Earth tides. The second category includes the possible methods of combining and defining terrestrial reference frames for determining vertical velocities on a global scale. The latter has been the subject of our research activities during this reporting period.

  2. Pediatric Sepsis

    PubMed Central

    Mathias, Brittany; Mira, Juan; Larson, Shawn D.

    2016-01-01

    Purpose of Review Sepsis is the leading cause of pediatric death worldwide. In the United States alone, there are 72,000 children hospitalized for sepsis annually, with a reported mortality rate of 25% and an economic cost estimated to be $4.8 billion. However, it is only recently that the definition and management of pediatric sepsis have been recognized as distinct from adult sepsis. Recent Findings The definition of pediatric sepsis is currently in a state of evolution, and there is a large disconnect between the clinical and research definitions of sepsis, which impacts the application of research findings into clinical practice. Despite this, it is the speed of diagnosis and the timely implementation of current treatment guidelines that have been shown to improve outcomes. However, adherence to treatment guidelines is currently low, and it is only through the implementation of protocols that improved care and outcomes have been demonstrated. Summary Current management of pediatric sepsis is largely based on adaptations from adult sepsis treatment; however, distinct physiology demands more prospective pediatric trials to tailor management to the pediatric population. Adherence to current and emerging practice guidelines will require that protocolized care pathways become commonplace. PMID:26983000

  3. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820
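
    One building block named above, the infinite-horizon optimal feedback control model, reduces in the linear-quadratic case to solving a discrete algebraic Riccati equation for a steady-state feedback gain. The sketch below iterates that equation for a generic two-state cursor model; the dynamics and cost weights are illustrative, not the paper's fitted BMI model, and the point-process decoding layer is not reproduced.

```python
# Minimal infinite-horizon OFC ingredient: iterate the discrete algebraic
# Riccati equation to a fixed point and form the feedback gain. The toy
# state is [position error, velocity] of a cursor.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.95]])   # position/velocity dynamics (toy)
B = np.array([[0.0], [0.1]])              # control enters through velocity
Q = np.diag([1.0, 0.1])                   # penalize distance to target
R = np.array([[0.01]])                    # penalize control effort

P = Q.copy()
for _ in range(500):                      # value iteration on the DARE
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

print("steady-state feedback gain K:", np.round(K, 3))
# An OFC intention model assumes the user issues u = -K (x - x_target).
```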

  4. Improving the representation of Arctic photosynthesis in Earth System Models

    NASA Astrophysics Data System (ADS)

    Rogers, A.; Serbin, S.; Sloan, V. L.; Norby, R. J.; Wullschleger, S. D.

    2014-12-01

    The primary goal of Earth System Models (ESMs) is to improve understanding and projection of future global change. To do this, models must accurately represent the terrestrial carbon cycle. Although Arctic carbon fluxes are small relative to global carbon fluxes, uncertainty is large. Photosynthetic CO2 uptake is well described by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis, and most ESMs use a derivation of the FvCB model to calculate gross primary productivity. Two key parameters required by the FvCB model are an estimate of the maximum rate of carboxylation by the enzyme Rubisco (Vc,max) and the maximum rate of electron transport (Jmax). In ESMs the parameter Vc,max is typically fixed for a given plant functional type (PFT). Only four ESMs currently have an explicit Arctic PFT, and the data used to derive Vc,max in these models rely on small data sets and unjustified assumptions. We examined the derivation of Vc,max and Jmax in current Arctic PFTs and estimated Vc,max and Jmax for a range of Arctic PFTs growing on the Barrow Environmental Observatory, Barrow, AK. We found that the values of Vc,max currently used to represent Arctic plants in ESMs are 70% lower than the values we measured, and contemporary temperature response functions for Vc,max also appear to underestimate Vc,max at low temperature. ESMs typically use a single multiplier (JVratio) to convert Vc,max to Jmax; however, we found that the JVratio of Arctic plants is higher than current estimates, suggesting that Arctic PFTs will be more responsive to rising carbon dioxide than currently projected. In addition, we are exploring remotely sensed methods to scale up key biochemical (e.g. leaf N, leaf mass area) and physiological (e.g. Vc,max and Jmax) properties that drive model representation of photosynthesis in the Arctic. Our data suggest that the Arctic tundra has a much greater capacity for CO2 uptake, particularly at low temperature, and will be more CO2 responsive than is currently represented in ESMs. As we build robust relationships between physiology and spectral signatures we hope to provide spatially and temporally resolved trait maps of key model parameters that can be ingested by new model frameworks, or used to validate emergent model properties.
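
    For concreteness, the FvCB calculation referred to above takes the minimum of the Rubisco-limited and electron-transport-limited rates. The sketch below uses common 25 °C kinetic constants and the JVratio multiplier as described; the parameter values are textbook defaults, not the Barrow measurements.

```python
# Minimal FvCB sketch: gross assimilation as the minimum of the Rubisco-
# limited (Wc) and electron-transport-limited (Wj) rates. Kinetic constants
# are typical 25 C textbook values, not the Barrow measurements.
def fvcb(ci, vcmax, jmax, gamma_star=42.75, kc=404.9, ko=278.4, o=210.0):
    """ci, gamma_star, kc in umol/mol; ko, o in mmol/mol;
    vcmax, jmax in umol m-2 s-1; returns gross assimilation."""
    wc = vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko))
    wj = (jmax / 4.0) * (ci - gamma_star) / (ci + 2.0 * gamma_star)
    return min(wc, wj)

vcmax = 60.0
jvratio = 2.0                  # Jmax = JVratio * Vcmax, as discussed above
print("A_gross:", round(fvcb(ci=280.0, vcmax=vcmax, jmax=jvratio * vcmax), 2))
```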

  5. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    PubMed

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhancing system reliability and availability. Moreover, knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach with air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
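
    A single-dimensional MUSIC pseudospectrum, the core ingredient that the paper extends, can be sketched as follows: eigendecompose a sample autocorrelation matrix, keep the noise subspace, and scan steering vectors. The synthetic "stator current" here is a 50 Hz fundamental plus one weak fault-like sideband; the multi-dimensional and amplitude-estimation steps of the paper are not reproduced.

```python
# Compact MUSIC sketch on a synthetic stator-current-like signal.
import numpy as np

fs, n = 1000.0, 2000
t = np.arange(n) / fs
rng = np.random.default_rng(4)
x = np.cos(2 * np.pi * 50 * t) + 0.1 * np.cos(2 * np.pi * 87 * t)
x = x + 0.1 * rng.normal(size=n)

m, p = 40, 4                        # embedding order; 2 exponentials per real sinusoid
X = np.lib.stride_tricks.sliding_window_view(x, m)
Rxx = X.T @ X / X.shape[0]          # sample autocorrelation matrix (m x m)
_, v = np.linalg.eigh(Rxx)          # eigenvalues in ascending order
En = v[:, : m - p]                  # noise subspace

freqs = np.linspace(0.0, fs / 2, 2000)
steer = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(m)))
pseudo = 1.0 / (np.linalg.norm(steer.conj() @ En, axis=1) ** 2)

# report the two strongest local maxima of the pseudospectrum
loc = np.where((pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:]))[0] + 1
top = loc[np.argsort(pseudo[loc])[-2:]]
print("MUSIC peaks (Hz):", np.sort(np.round(freqs[top], 1)))
```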

  6. A Readout Integrated Circuit (ROIC) employing self-adaptive background current compensation technique for Infrared Focal Plane Array (IRFPA)

    NASA Astrophysics Data System (ADS)

    Zhou, Tong; Zhao, Jian; He, Yong; Jiang, Bo; Su, Yan

    2018-05-01

    A novel self-adaptive background current compensation circuit applied to infrared focal plane arrays is proposed in this paper, which can compensate the background current generated under different conditions. The designed double-threshold detection strategy estimates and eliminates the background currents, which significantly reduces the hardware overhead and improves the uniformity among different pixels. In addition, the circuit is compatible with various categories of infrared thermo-sensitive materials. The testing results of a 4 × 4 experimental chip showed that the proposed circuit achieves high precision, wide applicability and a high degree of self-adaptive operation. Tape-out of the 320 × 240 readout circuit, as well as the bonding, encapsulation and imaging verification of the uncooled infrared focal plane array, have also been completed.

  7. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.

  8. Transient interaction model of electromagnetic field generated by lightning current pulses and human body

    NASA Astrophysics Data System (ADS)

    Iváncsy, T.; Kiss, I.; Szücs, L.; Tamus, Z. Á.

    2015-10-01

    The lightning current generates a time-varying magnetic field near the down-conductor, and the down-conductors are mounted on the walls of buildings where residential places might be situated. It is well known that rapidly changing magnetic fields can generate dangerous eddy currents in the human body. Higher duration and gradient of the magnetic field can cause potentially life-threatening cardiac stimulation. The coupling mechanism between the electromagnetic field and the human body is based on well-known physical phenomena (e.g., Faraday's law of induction). However, the calculation of the induced current is very complicated because the shape of the organs is complex and the determination of the material properties of living tissues is difficult as well. Our previous study revealed that the cardiac stimulation is independent of the rise time of the lightning current and only the peak of the current counts. In this study, the authors introduce an improved model of the interaction of the electromagnetic field of the lightning current near a down-conductor with the human body. Our previous models were based on quasi-stationary field calculations; the new, improved model is a transient model. With it, the magnetic field around the down-conductor and in the human body can be determined more precisely, and therefore the dangerous currents in the body can be better estimated.
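
    The coupling mechanism admits a back-of-envelope sketch: a double-exponential stroke current in a long straight down-conductor gives B(t) = μ0 I(t) / (2πd) at distance d, and Faraday's law gives an induced electric field of order (r/2)·dB/dt around a tissue loop of radius r. All parameter values below are illustrative; realistic dosimetry requires the anatomical transient model described above.

```python
# Back-of-envelope induction estimate near a lightning down-conductor.
import numpy as np

mu0 = 4e-7 * np.pi
i_peak, alpha, beta = 30e3, 1.4e4, 6e6     # 30 kA stroke; decay/rise rates (1/s)
d, r_loop = 2.0, 0.1                       # 2 m from conductor; 0.1 m tissue loop

t = np.linspace(0.0, 50e-6, 5001)
i = i_peak * (np.exp(-alpha * t) - np.exp(-beta * t))   # double-exponential pulse
b = mu0 * i / (2 * np.pi * d)              # field of a long straight conductor
dbdt = np.gradient(b, t)
e_ind = 0.5 * r_loop * np.abs(dbdt)        # induced E along the loop, V/m

print(f"peak dB/dt = {np.abs(dbdt).max():.0f} T/s, peak E = {e_ind.max():.1f} V/m")
```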

  9. RETROFIT COSTS FOR SO2 AND NOX CONTROL OPTIONS AT 200 COAL-FIRED PLANTS, VOLUME IV - SITE SPECIFIC STUDIES FOR MO, MS, NC, NH, NJ, NY, OH

    EPA Science Inventory

    The report gives results of a study, the objective of which was to significantly improve engineering cost estimates currently being used to evaluate the economic effects of applying SO2 and NOx controls at 200 large SO2-emitting coal-fired utility plants. To accomplish the object...

  10. RETROFIT COSTS FOR SO2 AND NOX CONTROL OPTIONS AT 200 COAL-FIRED PLANTS, VOLUME V - SITE SPECIFIC STUDIES FOR PA, SC, TN, VA, WI, WV

    EPA Science Inventory

    The report gives results of a study, the objective of which was to significantly improve engineering cost estimates currently being used to evaluate the economic effects of applying SO2 and NOx controls at 200 large SO2-emitting coal-fired utility plants. To accomplish the object...

  11. RETROFIT COSTS FOR SO2 AND NOX CONTROL OPTIONS AT 200 COAL-FIRED PLANTS, VOLUME II - SITE SPECIFIC STUDIES FOR AL, DE. FL, GA, IL

    EPA Science Inventory

    The report gives results of a study, the objective of which was to significantly improve engineering cost estimates currently being used to evaluate the economic effects of applying SO2 and NOx controls at 200 large SO2-emitting coal-fired utility plants. To accomplish the object...

  12. RETROFIT COSTS FOR SO2 AND NOX CONTROL OPTIONS AT 200 COAL-FIRED PLANTS, VOLUME III - SITE SPECIFIC STUDIES FOR IN, KY, MA, MD, MI, MN

    EPA Science Inventory

    The report gives results of a study, the objective of which was to significantly improve engineering cost estimates currently being used to evaluate the economic effects of applying SO2 and NOx controls at 200 large SO2-emitting coal-fired utility plants. To accomplish the object...

  13. When Measurement Benefits the Measured

    DTIC Science & Technology

    2014-04-23

    manage how you estimate your work and how you manage the quality of your work. Knowledge workers manage themselves with data. Software engineers...development, coaching, and training. His current research and development interests include data quality assessment and improvement, project...was an engineer and manager at Boeing in Seattle. He has a Masters Degree in Systems Engineering and is a senior member of IEEE. Mark is a certified

  14. Inventory of greenhouse gas emissions from on-road vehicles in Midwestern USA States and integrated approach to achieving environmental sustainability in transportation : USDOT Region V Regional University Transportation Center final report : technical su

    DOT National Transportation Integrated Search

    2016-12-29

    Two project objectives - one technical and one educational - were laid out in this project. The technical objective was to assess the current inventory of greenhouse gases (GHG) in the six Midwestern states of the nation and to estimate improvements as ...

  15. Crowdsourcing urban air temperature measurements using smartphones

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2013-10-01

    Crowdsourced data from cell phone battery temperature sensors could be used to contribute to improved real-time, high-resolution air temperature estimates in urban areas, a new study shows. Temperature observations in cities are in some cases currently limited to a few weather stations, but there are millions of smartphone users in many cities. The batteries in cell phones have temperature sensors to avoid damage to the phone.

  16. Estimating the NIH Efficient Frontier

    PubMed Central

    2012-01-01

    Background The National Institutes of Health (NIH) is among the world’s largest investors in biomedical research, with a mandate to: “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions–one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent, repeatable, and expressly designed to reduce the burden of disease. By approaching funding decisions in a more analytical fashion, it may be possible to improve their ultimate outcomes while reducing unintended consequences. PMID:22567087
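
    The "efficient frontier" computation itself is standard mean-variance optimization: for each target expected return, minimize portfolio variance subject to a budget constraint. The sketch below uses synthetic returns and covariances for seven groups, not the YLL-derived data of the paper.

```python
# Mean-variance frontier sketch: minimum-risk allocation for each target
# expected return, over 7 hypothetical institute groups.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
k = 7
mu = rng.uniform(0.02, 0.10, k)            # expected "returns" (synthetic)
A = rng.normal(size=(k, k))
cov = A @ A.T / k + 0.01 * np.eye(k)       # positive-definite covariance

def frontier_point(target):
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: w @ mu - target})
    res = minimize(lambda w: w @ cov @ w, np.full(k, 1 / k),
                   bounds=[(0, 1)] * k, constraints=cons)
    return np.sqrt(res.fun)                # portfolio risk (std dev)

for tgt in np.linspace(mu.min(), mu.max(), 5):
    print(f"target return {tgt:.3f} -> minimum risk {frontier_point(tgt):.3f}")
```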

  17. Estimating turbidity current conditions from channel morphology: A Froude number approach

    NASA Astrophysics Data System (ADS)

    Sequeiros, Octavio E.

    2012-04-01

    There is a growing need across different disciplines to develop better predictive tools for flow conditions of density and turbidity currents. Apart from resorting to complex numerical modeling or expensive field measurements, little is known about how to estimate gravity flow parameters from scarce available data and how they relate to each other. This study presents a new method to estimate normal flow conditions of gravity flows from channel morphology based on an extensive data set of laboratory and field measurements. The compilation consists of 78 published works containing 1092 combined measurements of velocity and concentration of gravity flows dating as far back as the early 1950s. Because the available data do not span all ranges of the critical parameters, such as bottom slope, a validated Reynolds-averaged Navier-Stokes (RANS) κ-ε numerical model is used to cover the gaps. It is shown that gravity flows fall within a range of Froude numbers spanning 1 order of magnitude centered on unity, as opposed to rivers and open-channel flows which extend to a much wider range. It is also observed that the transition from subcritical to supercritical flow regime occurs around a slope of 1%, with a spread caused by parameters other than the bed slope, like friction and suspended sediment settling velocity. The method is based on a set of equations relating Froude number to bed slope, combined friction, suspended material, and other flow parameters. The applications range from quick estimations of gravity flow conditions to improved numerical modeling and back calculation of missing parameters. A real case scenario of turbidity current estimation from a submarine canyon off the Nigerian coast is provided as an example.
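
    A reduced version of the normal-flow reasoning can be put in a few lines: balancing downslope gravity against bed friction (neglecting water entrainment and sediment settling, which the full method includes) gives U = sqrt(R g C h S / Cf) and a densimetric Froude number Fr_d = U / sqrt(R g C h) = sqrt(S / Cf). The parameter values below are illustrative.

```python
# Simplified normal-flow estimate for a dilute turbidity current.
import numpy as np

g, R = 9.81, 1.65          # gravity; submerged specific gravity of quartz
C, h = 0.002, 10.0         # volumetric sediment concentration; thickness (m)
S, cf = 0.01, 0.004        # bed slope (1%); combined friction coefficient

u = np.sqrt(R * g * C * h * S / cf)        # gravity-friction balance
fr_d = u / np.sqrt(R * g * C * h)          # densimetric Froude number
print(f"U ~ {u:.2f} m/s, densimetric Froude number ~ {fr_d:.2f}")
```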

  18. Environmental and societal consequences of a possible CO/sub 2/-induced climate change. Volume II, Part 14. Research needed to determine the present carbon balance of northern ecosystems and the potential effect of carbon-dioxide-induced climate change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, P.C.

    1982-10-01

    Given the potential significance of northern ecosystems to the global carbon budget, it is critical to estimate the current carbon balance of these ecosystems as precisely as possible, to improve estimates of the future carbon balance if world climates change, and to assess the range of certainty associated with these estimates. As a first step toward quantifying some of the potential changes, a workshop with tundra and taiga ecologists and soil scientists was held in San Diego in March 1980. The first part of this report summarizes the conclusions of this workshop with regard to the estimate of the current areal extent and carbon content of the circumpolar arctic and the taiga, current rates of carbon accumulation in the peat in the arctic and the taiga, and predicted future carbon accumulation rates based on the present understanding of controlling processes and on the understanding of past climates and vegetation. This report presents a finer resolution of areal extents, standing crops, and production rates than was possible previously because of recent syntheses of data from the International Biological Program and current studies in the northern ecosystems, some of which have not yet been published. This recent information changes most of the earlier estimates of carbon content and affects predictions of the effect of climate change. The second part of this report outlines research needed to fill major gaps in the understanding of the role of northern ecosystems in global climate change.

  19. Malaria prevalence metrics in low- and middle-income countries: an assessment of precision in nationally-representative surveys.

    PubMed

    Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M

    2017-11-21

    One pillar to monitoring progress towards the Sustainable Development Goals is the investment in high quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of many population-level health and development indicators remains unquantified in nationally-representative household surveys. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible interval 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost-effective sampling.
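
    The mechanics behind the headline result (larger samples needed at lower prevalence for the same relative precision, inflated further by within-cluster correlation) can be sketched with the standard design-effect calculation. The inputs below (ICC, cluster size, precision target) are illustrative, not the paper's fitted values.

```python
# Cluster-survey sample size via the design effect implied by the ICC.
import math

def cluster_sample_size(p, rel_precision, icc, cluster_size, z=1.96):
    """Children needed to estimate prevalence p within +/- rel_precision*p."""
    e = rel_precision * p
    n_srs = z**2 * p * (1 - p) / e**2            # simple random sampling size
    deff = 1 + (cluster_size - 1) * icc          # design effect
    return math.ceil(n_srs * deff)

# Lower prevalence -> larger sample for the same relative precision:
for prev in (0.20, 0.05, 0.01):
    print(prev, cluster_sample_size(prev, 0.2, icc=0.05, cluster_size=25))
```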

  20. A Review of Issues Related to Data Acquisition and Analysis in EEG/MEG Studies.

    PubMed

    Puce, Aina; Hämäläinen, Matti S

    2017-05-31

    Electroencephalography (EEG) and magnetoencephalography (MEG) are non-invasive electrophysiological methods, which record electric potentials and magnetic fields due to electric currents in synchronously-active neurons. With MEG being more sensitive to neural activity from tangential currents and EEG being able to detect both radial and tangential sources, the two methods are complementary. Over the years, neurophysiological studies have changed considerably: high-density recordings are becoming de rigueur; there is interest in both spontaneous and evoked activity; and sophisticated artifact detection and removal methods are available. Improved head models for source estimation have also increased the precision of the current estimates, particularly for EEG and combined EEG/MEG. Because of their complementarity, more investigators are beginning to perform simultaneous EEG/MEG studies to gain more complete information about neural activity. Given the increase in methodological complexity in EEG/MEG, it is important to gather data that are of high quality and that are as artifact free as possible. Here, we discuss some issues in data acquisition and analysis of EEG and MEG data. Practical considerations for different types of EEG and MEG studies are also discussed.

  1. Inertial sensor-based methods in walking speed estimation: a systematic review.

    PubMed

    Yang, Shuozhi; Li, Qingguo

    2012-01-01

    Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted a lot of attention for the past two decades, and the trend is continuing due to the improvement of performance and decrease in cost of the miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was conducted in the following electronic search engines/databases: PubMed, ISI Web of Knowledge, SportDiscus and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor-based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm.

  2. Inertial Sensor-Based Methods in Walking Speed Estimation: A Systematic Review

    PubMed Central

    Yang, Shuozhi; Li, Qingguo

    2012-01-01

    Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted a lot of attention for the past two decades, and the trend is continuing due to the improvement of performance and decrease in cost of the miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was conducted in the following electronic search engines/databases: PubMed, ISI Web of Knowledge, SportDiscus and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor-based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm. PMID:22778632

  3. Correction for photobleaching in dynamic fluorescence microscopy: application in the assessment of pharmacokinetic parameters in ultrasound-mediated drug delivery

    NASA Astrophysics Data System (ADS)

    Derieppe, M.; Bos, C.; de Greef, M.; Moonen, C.; de Senneville, B. Denis

    2016-01-01

    We have previously demonstrated the feasibility of monitoring ultrasound-mediated uptake of a hydrophilic model drug in real time with dynamic confocal fluorescence microscopy. In this study, we evaluate and correct the impact of photobleaching to improve the accuracy of pharmacokinetic parameter estimates. To model photobleaching of the fluorescent model drug SYTOX Green, a photobleaching process was added to the current two-compartment model describing cell uptake. After collection of the uptake profile, a second acquisition was performed once SYTOX Green had equilibrated, to evaluate the photobleaching rate experimentally. Photobleaching rates up to 5.0 × 10-3 s-1 were measured when applying power densities up to 0.2 W cm-2. By applying the three-compartment model, a model drug uptake rate of 6.0 × 10-3 s-1 was measured, independent of the applied laser power. The impact of photobleaching on uptake rate estimates measured by dynamic fluorescence microscopy was evaluated. Subsequent compensation improved the accuracy of pharmacokinetic parameter estimates in the cell population subjected to sonopermeabilization.
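
    A toy version of the correction illustrates the idea: fluorescence builds up at an uptake rate k_up but is simultaneously destroyed by photobleaching at rate k_b, so fitting a model that includes both terms recovers k_up from a bleaching-distorted curve. The sketch fits both rates jointly from one curve, whereas the study measured the bleaching rate in a separate equilibrium acquisition; the rate magnitudes mimic those quoted above.

```python
# Toy uptake-plus-photobleaching model: dF/dt = k_up*(F_inf - F) - k_b*F,
# which has the closed-form solution used below (F(0) = 0).
import numpy as np
from scipy.optimize import curve_fit

def signal(t, k_up, k_b, f_inf=1.0):
    k = k_up + k_b
    return (k_up * f_inf / k) * (1.0 - np.exp(-k * t))

t = np.linspace(0, 600, 121)                       # 10 min acquisition (s)
rng = np.random.default_rng(6)
y = signal(t, 6.0e-3, 5.0e-3) + rng.normal(scale=0.01, size=t.size)

(k_up_hat, k_b_hat), _ = curve_fit(signal, t, y, p0=(1e-3, 1e-3))
print(f"k_up = {k_up_hat:.2e} s^-1, k_b = {k_b_hat:.2e} s^-1")
```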

  4. Towards disparity joint upsampling for robust stereoscopic endoscopic scene reconstruction in robotic prostatectomy

    NASA Astrophysics Data System (ADS)

    Luo, Xiongbiao; McLeod, A. Jonathan; Jayarathne, Uditha L.; Pautler, Stephen E.; Schlacta, Christopher M.; Peters, Terry M.

    2016-03-01

    Three-dimensional (3-D) scene reconstruction from stereoscopic binocular laparoscopic videos is an effective way to expand the limited surgical field and augment the structural visualization of the organ being operated on in minimally invasive surgery. However, currently available reconstruction approaches are limited by image noise, occlusions, and textureless and blurred structures. In particular, an endoscope inside the body has only a limited light source, resulting in illumination non-uniformities in the visualized field. These limitations unavoidably deteriorate the stereo image quality and hence lead to low-resolution and inaccurate disparity maps, resulting in blurred edge structures in 3-D scene reconstruction. This paper proposes an improved stereo correspondence framework that integrates cost-volume filtering with joint upsampling for robust disparity estimation. Joint bilateral upsampling, joint geodesic upsampling, and tree filtering upsampling were compared to enhance the disparity accuracy. The experimental results demonstrate that joint upsampling provides an effective way to boost the disparity estimation and hence to improve surgical endoscopic scene 3-D reconstruction. Moreover, bilateral upsampling generally outperforms the other two upsampling methods in disparity estimation.
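
    Of the three upsamplers compared, joint bilateral upsampling is the simplest to sketch: each high-resolution disparity value is a weighted average of nearby low-resolution disparities, with weights combining spatial proximity and intensity similarity in the high-resolution guidance image. The implementation below is a deliberately slow, loop-based toy on synthetic data.

```python
# Joint bilateral upsampling toy: low-res disparity + high-res guidance image.
import numpy as np

def joint_bilateral_upsample(disp_lo, guide_hi, scale, sigma_s=1.0, sigma_r=0.1):
    h, w = guide_hi.shape
    out = np.zeros((h, w))
    rad = 2                                     # low-res neighborhood radius
    for y in range(h):
        for x in range(w):
            yl, xl = y / scale, x / scale       # position in low-res grid
            num = den = 0.0
            for j in range(int(yl) - rad, int(yl) + rad + 1):
                for i in range(int(xl) - rad, int(xl) + rad + 1):
                    if 0 <= j < disp_lo.shape[0] and 0 <= i < disp_lo.shape[1]:
                        ws = np.exp(-((j - yl) ** 2 + (i - xl) ** 2) / (2 * sigma_s ** 2))
                        gj, gi = min(j * scale, h - 1), min(i * scale, w - 1)
                        wr = np.exp(-(guide_hi[y, x] - guide_hi[gj, gi]) ** 2 / (2 * sigma_r ** 2))
                        num += ws * wr * disp_lo[j, i]
                        den += ws * wr
            out[y, x] = num / max(den, 1e-12)
    return out

# Tiny demo: 4x upsample an 8x8 disparity using a 32x32 guidance image.
rng = np.random.default_rng(7)
guide = np.tile(np.linspace(0, 1, 32), (32, 1))       # synthetic guidance
disp = guide[::4, ::4] * 64 + rng.normal(scale=0.5, size=(8, 8))
print(joint_bilateral_upsample(disp, guide, 4).shape)
```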

  5. Towards Personal Exposures: How Technology Is Changing Air Pollution and Health Research.

    PubMed

    Larkin, A; Hystad, P

    2017-12-01

    We present a review of emerging technologies and how these can transform personal air pollution exposure assessment and subsequent health research. Estimating personal air pollution exposures is currently split broadly into methods for modeling exposures for large populations versus measuring exposures for small populations. Air pollution sensors, smartphones, and air pollution models capitalizing on big/new data sources offer tremendous opportunity for unifying these approaches and improving long-term personal exposure prediction at scales needed for population-based research. A multi-disciplinary approach is needed to combine these technologies to not only estimate personal exposures for epidemiological research but also determine drivers of these exposures and new prevention opportunities. While available technologies can revolutionize air pollution exposure research, ethical, privacy, logistical, and data science challenges must be met before widespread implementations occur. Available technologies and related advances in data science can improve long-term personal air pollution exposure estimates at scales needed for population-based research. This will advance our ability to evaluate the impacts of air pollution on human health and develop effective prevention strategies.

  6. Integrating High-Resolution Datasets to Target Mitigation Efforts for Improving Air Quality and Public Health in Urban Neighborhoods

    PubMed Central

    Shandas, Vivek; Voelkel, Jackson; Rao, Meenakshi; George, Linda

    2016-01-01

    Reducing exposure to degraded air quality is essential for building healthy cities. Although air quality and population vary at fine spatial scales, current regulatory and public health frameworks assess human exposures using county- or city-scales. We build on a spatial analysis technique, dasymetric mapping, for allocating urban populations that, together with emerging fine-scale measurements of air pollution, addresses three objectives: (1) evaluate the role of spatial scale in estimating exposure; (2) identify urban communities that are disproportionately burdened by poor air quality; and (3) estimate reduction in mobile sources of pollutants due to local tree-planting efforts using nitrogen dioxide. Our results show a maximum value of 197% difference between cadastrally-informed dasymetric system (CIDS) and standard estimations of population exposure to degraded air quality for small spatial extent analyses, and a lack of substantial difference for large spatial extent analyses. These results provide the foundation for improving policies for managing air quality, and targeting mitigation efforts to address challenges of environmental justice. PMID:27527205

  7. A Bayesian evidence synthesis approach to estimate disease prevalence in hard-to-reach populations: hepatitis C in New York City.

    PubMed

    Tan, Sarah; Makela, Susanna; Heller, Daliah; Konty, Kevin; Balter, Sharon; Zheng, Tian; Stark, James H

    2018-06-01

    Existing methods to estimate the prevalence of chronic hepatitis C (HCV) in New York City (NYC) are limited in scope and fail to assess hard-to-reach subpopulations with highest risk such as injecting drug users (IDUs). To address these limitations, we employ a Bayesian multi-parameter evidence synthesis model to systematically combine multiple sources of data, account for bias in certain data sources, and provide unbiased HCV prevalence estimates with associated uncertainty. Our approach improves on previous estimates by explicitly accounting for injecting drug use and including data from high-risk subpopulations such as the incarcerated, and is more inclusive, utilizing ten NYC data sources. In addition, we derive two new equations to allow age at first injecting drug use data for former and current IDUs to be incorporated into the Bayesian evidence synthesis, a first for this type of model. Our estimated overall HCV prevalence as of 2012 among NYC adults aged 20-59 years is 2.78% (95% CI 2.61-2.94%), which represents between 124,900 and 140,000 chronic HCV cases. These estimates suggest that HCV prevalence in NYC is higher than previously indicated from household surveys (2.2%) and the surveillance system (2.37%), and that HCV transmission is increasing among young injecting adults in NYC. An ancillary benefit from our results is an estimate of current IDUs aged 20-59 in NYC: 0.58% or 27,600 individuals. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Estimating the Velocity and Transport of the East Australian Current using Argo, XBT, and Altimetry

    NASA Astrophysics Data System (ADS)

    Zilberman, N. V.; Roemmich, D. H.; Gille, S. T.

    2016-02-01

    Western Boundary Currents (WBCs) are the strongest ocean currents in the subtropics, and constitute the main pathway through which warm water-masses transit from low to mid-latitudes in the subtropical gyres of the Atlantic, Pacific, and Indian Oceans. Heat advection by WBCs has a significant impact on heat storage in subtropical mode waters formation regions and at high latitudes. The possibility that the magnitude of WBCs might change under greenhouse gas forcing has raised significant concerns. Improving our knowledge of WBC circulation is essential to accurately monitor the oceanic heat budget. Because of the narrowness and strong mesoscale variability of WBCs, estimation of WBC velocity and transport places heavy demands on any potential sampling scheme. One strategy for studying WBCs is to combine complementary data sources. High-resolution expendable bathythermograph (HRX) profiles to 800-m have been collected along transects crossing the East Australian Current (EAC) system at 3-month nominal sampling intervals since 1991. EAC transects, with spatial sampling as fine as 10-15 km, are obtained off Brisbane (27°S) and Sydney (34°S), and crossing the related East Auckland Current north of Auckland. Here, HRX profiles collected since 2004 off Brisbane are merged with Argo float profiles and 1000 m trajectory-based velocities to expand HRX shear estimates to 2000-m and to estimate absolute geostrophic velocity and transport. A method for combining altimetric data with HRX and Argo profiles to mitigate temporal aliasing by the HRX transects and to reduce sampling errors in the HRX/Argo datasets is described. The HRX/Argo/altimetry-based estimate of the time-mean poleward alongshore transport of the EAC off Brisbane is 18.3 Sv, with a width of about 180 km, and of which 3.7 Sv recirculates equatorward on a similar spatial scale farther offshore. Geostrophic transport anomalies in the EAC at 27°S show variability of ± 1.3 Sv at interannual time scale related to ENSO. The present calculation is a case study that will be extended to other subtropical WBCs.

  9. Life expectancy living with HIV: recent estimates and future implications.

    PubMed

    Nakagawa, Fumiyo; May, Margaret; Phillips, Andrew

    2013-02-01

    The life expectancy of people living with HIV has dramatically increased since effective antiretroviral therapy became available, and it continues to improve. Here, we review the latest literature on estimates of life expectancy and consider the implications for future research. With timely diagnosis, access to a variety of current drugs and good lifelong adherence, people with recently acquired infections can expect a life expectancy nearly the same as that of HIV-negative individuals. Modelling studies suggest that life expectancy could improve further with increased uptake of HIV testing, better antiretroviral regimens and treatment strategies, and the adoption of healthier lifestyles by those living with HIV. In particular, earlier diagnosis is one of the most important factors associated with better life expectancy. A consequence of improved survival is the increasing number of people with HIV aged over 50 years, and further research into the impact of ageing on HIV-positive people will therefore become crucial. The development of age-specific HIV treatment and management guidelines is now called for. Analyses of cohort studies and mathematical modelling studies have been used to estimate the life expectancy of those with HIV, providing insights of importance to individuals and to healthcare planning.

  10. Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach.

    PubMed

    Pandey, S; Chadha, V K; Laxminarayan, R; Arinaminpathy, N

    2017-04-01

    There is an urgent need for improved estimation of the burden of tuberculosis (TB). Our objective was to develop a new quantitative method based on mathematical modelling and to demonstrate its application to TB in India. We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates of annual infections per smear-positive TB case against previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. Model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8-156.3). Results show differences between urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently in use.
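    The abstract links the annual risk of tuberculous infection (ARTI) and smear-positive prevalence to incidence. The sketch below shows a minimal equilibrium version of that bookkeeping, with illustrative parameter values rather than the study's calibrated model.

```python
# A minimal sketch, assuming a steady-state epidemic and invented parameters.
def infections_per_case_year(arti, prev_smear_pos):
    """Annual infections generated per prevalent smear-positive case.

    arti: annual risk of tuberculous infection (fraction per year)
    prev_smear_pos: point prevalence of smear-positive TB (fraction)
    """
    return arti / prev_smear_pos

def annual_incidence(prev_smear_pos, mean_duration_years):
    """At equilibrium, incidence ~= prevalence / mean disease duration."""
    return prev_smear_pos / mean_duration_years

arti = 0.015                  # 1.5% per year, illustrative
prev = 100 / 100_000          # smear-positive prevalence, illustrative
print(f"infections per case-year: {infections_per_case_year(arti, prev):.1f}")
print(f"incidence per 100k/yr:    {annual_incidence(prev, 1.1) * 100_000:.1f}")
```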

  11. The Novel Nonlinear Adaptive Doppler Shift Estimation Technique and the Coherent Doppler Lidar System Validation Lidar

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.

    2006-01-01

    The signal processing aspect of a 2-μm wavelength coherent Doppler lidar system under development at NASA Langley Research Center in Virginia is investigated in this paper. The lidar system is named VALIDAR (validation lidar), and its signal processing program estimates and displays various wind parameters in real time as data acquisition occurs. The goal is to improve the quality of the current estimates, such as power, Doppler shift, wind speed, and wind direction, especially in the low signal-to-noise-ratio (SNR) regime. A novel Nonlinear Adaptive Doppler Shift Estimation Technique (NADSET) is developed for this purpose, and its performance is analyzed using wind data acquired over a long period of time by VALIDAR. The quality of Doppler shift and power estimates from conventional Fourier-transform-based spectrum estimation methods deteriorates rapidly as SNR decreases. NADSET compensates for this deterioration by adaptively utilizing the statistics of Doppler shift estimates in the strong-SNR range and identifying sporadic range bins where good Doppler shift estimates are found. The validity of NADSET is established by comparing the trend of wind parameters with and without NADSET applied to long-period lidar return data.
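    NADSET itself is not specified in this record. The sketch below shows only the conventional Fourier-transform-based baseline it improves upon: reading the Doppler shift off the periodogram peak of a synthetic noisy heterodyne return. The sample rate, frequency, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100e6                        # sample rate (Hz), illustrative
n = 1024
t = np.arange(n) / fs
f_true = 12.5e6                   # true Doppler-shifted beat frequency (Hz)

# Complex heterodyne return buried in complex white noise.
signal = np.exp(2j * np.pi * f_true * t)
noisy = signal + 2.0 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Periodogram peak = conventional Doppler-shift estimate.
spec = np.abs(np.fft.fft(noisy)) ** 2
freqs = np.fft.fftfreq(n, d=1 / fs)
f_hat = freqs[np.argmax(spec)]
print(f"estimated shift: {f_hat / 1e6:.2f} MHz (true {f_true / 1e6:.2f} MHz)")
```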

  12. Performance Study of Earth Networks Total Lightning Network using Rocket-Triggered Lightning Data in 2014

    NASA Astrophysics Data System (ADS)

    Heckman, S.

    2015-12-01

    Modern lightning locating systems (LLS) provide real-time monitoring and early warning of lightning activity. In addition, LLS provide valuable data for statistical analysis in lightning research. It is important to know the performance of such LLS. In the present study, the performance of the Earth Networks Total Lightning Network (ENTLN) is studied using rocket-triggered lightning data acquired at the International Center for Lightning Research and Testing (ICLRT), Camp Blanding, Florida. Eighteen flashes triggered at ICLRT in 2014 were analyzed, comprising 78 negative cloud-to-ground return strokes. The geometric mean, median, minimum, and maximum for the peak currents of the 78 return strokes are 13.4 kA, 13.6 kA, 3.7 kA, and 38.4 kA, respectively. The peak currents represent typical subsequent return strokes in natural cloud-to-ground lightning. Earth Networks has developed a new data processor to improve the performance of their network. In this study, results are presented for the ENTLN data using the old processor (originally reported in 2014) and the ENTLN data simulated using the new processor. The flash detection efficiency, stroke detection efficiency, percentage of misclassification, median location error, median peak current estimation error, and median absolute peak current estimation error for the originally reported data from the old processor are 100%, 94%, 49%, 271 m, 5%, and 13%, respectively, and those for the simulated data using the new processor are 100%, 99%, 9%, 280 m, 11%, and 15%, respectively. The use of the new processor resulted in higher stroke detection efficiency and a lower percentage of misclassification. It is worth noting that the slight differences in median location error, median peak current estimation error, and median absolute peak current estimation error for the two processors are due to the fact that the new processor detected more return strokes than the old processor.

  13. Projections of the current and future disease burden of hepatitis C virus infection in Malaysia.

    PubMed

    McDonald, Scott A; Dahlui, Maznah; Mohamed, Rosmawati; Naning, Herlianna; Shabaruddin, Fatiha Hana; Kamarulzaman, Adeeba

    2015-01-01

    The prevalence of hepatitis C virus (HCV) infection in Malaysia has been estimated at 2.5% of the adult population. Our objective, satisfying one of the directives of the WHO Framework for Global Action on Viral Hepatitis, was to forecast the HCV disease burden in Malaysia using modelling methods. An age-structured multi-state Markov model was developed to simulate the natural history of HCV infection. We tested three historical incidence scenarios that would give rise to the estimated prevalence in 2009, and calculated the incidence of cirrhosis, end-stage liver disease, and death, and disability-adjusted life-years (DALYs) under each scenario, to the year 2039. In the baseline scenario, current antiviral treatment levels were extended from 2014 to the end of the simulation period. To estimate the disease burden averted under current sustained virological response rates and treatment levels, the baseline scenario was compared to a counterfactual scenario in which no past or future treatment is assumed. In the baseline scenario, the projected disease burden for the year 2039 is 94,900 DALYs/year (95% credible interval (CrI): 77,100 to 124,500), with 2,002 (95% CrI: 1,340 to 3,040) and 540 (95% CrI: 251 to 1,030) individuals predicted to develop decompensated cirrhosis (DC) and hepatocellular carcinoma (HCC), respectively, in that year. Although current treatment practice is estimated to avert a cumulative total of 2,200 deaths from DC or HCC, a cumulative total of 63,900 HCV-related deaths is projected by 2039. The HCV-related disease burden is already high and is forecast to rise steeply over the coming decades under current levels of antiviral treatment. Increased governmental resources to improve HCV screening and treatment rates and to reduce transmission are essential to address the high projected HCV disease burden in Malaysia.
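    A minimal sketch of a discrete-time multi-state Markov cohort model of the kind described above; the states and annual transition probabilities below are illustrative placeholders, not the calibrated Malaysian parameters, and age structure and treatment effects are omitted.

```python
import numpy as np

states = ["chronic", "cirrhosis", "decomp", "hcc", "dead"]
# Row = from-state, column = to-state; each row sums to 1. Illustrative values.
P = np.array([
    [0.960, 0.030, 0.000, 0.002, 0.008],   # chronic HCV
    [0.000, 0.920, 0.040, 0.020, 0.020],   # compensated cirrhosis
    [0.000, 0.000, 0.750, 0.050, 0.200],   # decompensated cirrhosis
    [0.000, 0.000, 0.000, 0.570, 0.430],   # hepatocellular carcinoma
    [0.000, 0.000, 0.000, 0.000, 1.000],   # dead (absorbing)
])

cohort = np.array([100_000.0, 0, 0, 0, 0])  # chronically infected at baseline
cum_decomp = 0.0
for year in range(2015, 2040):
    cum_decomp += cohort[1] * P[1, 2]       # incident decompensation this year
    cohort = cohort @ P                     # advance the cohort one year

print(f"cumulative incident decompensation: {cum_decomp:,.0f}")
print({s: round(c) for s, c in zip(states, cohort)})
```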

  14. Projections of the Current and Future Disease Burden of Hepatitis C Virus Infection in Malaysia

    PubMed Central

    McDonald, Scott A.; Dahlui, Maznah; Mohamed, Rosmawati; Naning, Herlianna; Shabaruddin, Fatiha Hana; Kamarulzaman, Adeeba

    2015-01-01

    Background The prevalence of hepatitis C virus (HCV) infection in Malaysia has been estimated at 2.5% of the adult population. Our objective, satisfying one of the directives of the WHO Framework for Global Action on Viral Hepatitis, was to forecast the HCV disease burden in Malaysia using modelling methods. Methods An age-structured multi-state Markov model was developed to simulate the natural history of HCV infection. We tested three historical incidence scenarios that would give rise to the estimated prevalence in 2009, and calculated the incidence of cirrhosis, end-stage liver disease, and death, and disability-adjusted life-years (DALYs) under each scenario, to the year 2039. In the baseline scenario, current antiviral treatment levels were extended from 2014 to the end of the simulation period. To estimate the disease burden averted under current sustained virological response rates and treatment levels, the baseline scenario was compared to a counterfactual scenario in which no past or future treatment is assumed. Results In the baseline scenario, the projected disease burden for the year 2039 is 94,900 DALYs/year (95% credible interval (CrI): 77,100 to 124,500), with 2,002 (95% CrI: 1340 to 3040) and 540 (95% CrI: 251 to 1,030) individuals predicted to develop decompensated cirrhosis and hepatocellular carcinoma, respectively, in that year. Although current treatment practice is estimated to avert a cumulative total of 2,200 deaths from DC or HCC, a cumulative total of 63,900 HCV-related deaths is projected by 2039. Conclusions The HCV-related disease burden is already high and is forecast to rise steeply over the coming decades under current levels of antiviral treatment. Increased governmental resources to improve HCV screening and treatment rates and to reduce transmission are essential to address the high projected HCV disease burden in Malaysia. PMID:26042425

  15. A framework for determining improved placement of current energy converters subject to environmental constraints

    DOE PAGES

    Nelson, Kurt; James, Scott C.; Roberts, Jesse D.; ...

    2017-06-05

    A modelling framework identifies deployment locations for current-energy-capture devices that maximise power output while minimising potential environmental impacts. The framework, based on the Environmental Fluid Dynamics Code, can incorporate site-specific environmental constraints. Over a 29-day period, energy outputs were estimated for three array layouts: (1) the preliminary configuration (baseline), (2) an updated configuration that accounted for environmental constraints, and (3) an improved configuration subject to no environmental constraints. Of these layouts, the array placement that did not consider environmental constraints extracted the most energy from the flow (4.38 MW-hr/day), 19% higher than the output from the baseline configuration (3.69 MW-hr/day). The array placement that considered environmental constraints removed 4.27 MW-hr/day of energy (16% more than baseline). In conclusion, this analysis framework accounts for bathymetry and flow-pattern variations that typical experimental studies cannot, demonstrating that it is a valuable tool for identifying improved array layouts for field deployments.

  16. Array Effects in Large Wind Farms. Cooperative Research and Development Final Report, CRADA Number CRD-09-343

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriarty, Patrick

    2016-02-23

    The effects of wind turbine wakes within operating wind farms have a substantial impact on the overall energy production of the farm. The current generation of models drastically underpredicts the impact of these wakes, leading to non-conservative estimates of energy capture and financial losses for wind farm operators and developers. To improve these models, detailed research on operating wind farms is necessary. Rebecca Barthelmie of Indiana University is a world leader in wind farm wake effects and would like to partner with NREL to help improve wind farm modeling by gathering additional wind farm data, developing better models, and increasing collaboration with European researchers working in the same area. This is currently an active area of research at NREL, and the capabilities of both parties should mesh nicely.

  17. The value of improved (ERS) information based on domestic distribution effects of U.S. agriculture crops

    NASA Technical Reports Server (NTRS)

    Bradford, D. F.; Kelejian, H. H.; Brusch, R.; Gross, J.; Fishman, H.; Feenberg, D.

    1974-01-01

    The value of improving information for forecasting future crop harvests was investigated. Emphasis was placed upon establishing practical evaluation procedures firmly based in economic theory. The analysis was applied to the case of U.S. domestic wheat consumption. Estimates for a cost-of-storage function and a demand function for wheat were calculated. A model of market determination of wheat inventories was developed to represent inventory adjustment. The carry-over horizon is computed by the solution of a nonlinear programming problem, and related variables such as spot and futures prices at each stage are determined. The model is adaptable to other markets. Results are shown to depend critically on the accuracy of current and proposed measurement techniques. The quantitative results are presented parametrically, in terms of various possible values of current and future accuracies.

  18. NASA Earth Science Research Results for Improved Regional Crop Yield Prediction

    NASA Astrophysics Data System (ADS)

    Mali, P.; O'Hara, C. G.; Shrestha, B.; Sinclair, T. R.; G de Goncalves, L. G.; Salado Navarro, L. R.

    2007-12-01

    National agencies such as the USDA Foreign Agricultural Service (FAS) Production Estimation and Crop Assessment Division (PECAD) work specifically to analyze and generate timely crop yield estimates that help define national as well as global food policies. USDA/FAS/PECAD utilizes a Decision Support System (DSS) called CADRE (Crop Condition and Data Retrieval Evaluation), mainly through an automated database management system that integrates various meteorological datasets, crop and soil models, and remote sensing data, providing a significant contribution to national and international crop production estimates. The "Sinclair" soybean growth model, a semi-mechanistic crop growth model, has been used inside the CADRE DSS as one of the crop models; this project uses it for its potential to be employed effectively in a geo-processing environment with remote-sensing-based inputs. The main objective of the proposed work is to verify, validate and benchmark current and future NASA earth science research results for the benefit of the operational decision-making process of the PECAD/CADRE DSS. For this purpose, the NASA South American Land Data Assimilation System (SALDAS) meteorological dataset is tested for its applicability as a surrogate for the Sinclair model's meteorological input requirements. Similarly, NASA MODIS products are tested for their applicability in improving crop yield prediction through more precise planting date estimation and better monitoring of plant vigor and growth. The project also analyzes the simulated Visible/Infrared Imager/Radiometer Suite (VIIRS, a future NASA sensor) vegetation product for its applicability in crop growth prediction, to accelerate the transition of VIIRS research results to operational use in the USDA/FAS/PECAD DSS. The research results will help provide improved decision-making capacity to the USDA/FAS/PECAD DSS through improved vegetation growth monitoring from high-spatial- and temporal-resolution remote sensing datasets; improved time-series meteorological inputs required for crop growth models; and regional prediction capability through geo-processing-based yield modeling.

  19. Global biomass production potentials exceed expected future demand without the need for cropland expansion

    PubMed Central

    Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro

    2015-01-01

    Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice, though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute a 39% increase in estimated global production potentials to increasing cropping intensities and a 30% increase to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification. PMID:26558436

  20. Global biomass production potentials exceed expected future demand without the need for cropland expansion.

    PubMed

    Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro

    2015-11-12

    Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice, though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute a 39% increase in estimated global production potentials to increasing cropping intensities and a 30% increase to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification.

  1. Does more education lead to better health habits? Evidence from the school reforms in Australia.

    PubMed

    Li, Jinhu; Powdthavee, Nattavudh

    2015-02-01

    The current study provides new empirical evidence on the causal effect of education on health-related behaviors by exploiting historical changes in the compulsory schooling laws in Australia. Since World War II, Australian states increased the minimum school leaving age from 14 to 15 in different years. Using differences in the laws regarding minimum school leaving age across different cohorts and across different states as a source of exogenous variation in education, we show that more education improves people's diets and their tendency to exercise regularly and drink moderately, but not necessarily their tendency to avoid smoking or to engage in more preventive health checks. The improvements in health behaviors are also reflected in the estimated positive effect of education on some health outcomes. Our results are robust to alternative measures of education and different estimation methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Enhanced switching stability in Ta2O5 resistive RAM by fluorine doping

    NASA Astrophysics Data System (ADS)

    Sedghi, N.; Li, H.; Brunell, I. F.; Dawson, K.; Guo, Y.; Potter, R. J.; Gibbon, J. T.; Dhanak, V. R.; Zhang, W. D.; Zhang, J. F.; Hall, S.; Robertson, J.; Chalker, P. R.

    2017-08-01

    The effect of fluorine doping on the switching stability of Ta2O5 resistive random access memory devices is investigated. The results show that the dopant serves to increase the memory window and improve the stability of the resistive states, owing to the neutralization of oxygen vacancies. The ability to alter the current in the low resistance state via the set current compliance, coupled with the large memory window, makes multilevel cell switching more favorable. The devices have set and reset voltages of <1 V, with improved stability due to the fluorine doping. Density functional modeling shows that the incorporation of fluorine dopant atoms at the two-fold O vacancy site in the oxide network removes the defect state in the mid bandgap, lowering the overall density of defects capable of forming conductive filaments. This reduces the probability of forming alternative conducting paths and hence improves the current stability in the low resistance states. The doped devices exhibit more stable resistive states in both dc and pulsed set and reset cycles. The retention failure time is estimated to be a minimum of 2 years for F-doped devices, as measured by temperature-accelerated and stress-voltage-accelerated retention failure methods.

  3. Approaches and Data Quality for Global Precipitation Estimation

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.; Bolvin, D. T.; Nelkin, E. J.

    2015-12-01

    The space and time scales on which precipitation varies are small compared to the satellite coverage that we have, so it is necessary to merge "all" of the available satellite estimates. Differing retrieval capabilities among the various satellites require inter-calibration of the satellite estimates, while "morphing", i.e., Lagrangian time interpolation, is used to lengthen the period over which time interpolation is valid. Additionally, estimates from geostationary-Earth-orbit infrared data are plentiful, but of sufficiently lower quality than low-Earth-orbit passive microwave estimates that they are only used when needed. Finally, monthly surface precipitation gauge data can be used to reduce bias and improve patterns of occurrence for monthly satellite data, and short-interval satellite estimates can be improved with a simple scaling such that they sum to the monthly satellite-gauge combination. The presentation will briefly consider some of the design decisions for practical computation of the Global Precipitation Measurement (GPM) mission product Integrated Multi-satellitE Retrievals for GPM (IMERG), then examine design choices that maximize value for end users. For example, data fields are provided in the output file that give insight into the basis for the estimated precipitation, including error, the sensor providing the estimate, precipitation phase (solid/liquid), and intermediate precipitation estimates. Another important initiative is successive computation for the same data date/time at longer latencies as additional data are received, which for IMERG is currently done at 6 hours, 16 hours, and 3 months after observation time. Importantly, users require long records for each latency, which runs counter to the data archiving practices at most archive sites. As well, the assignment of Digital Object Identifiers (DOIs) for near-real-time data sets (at 6 and 16 hours for IMERG) is not a settled issue.
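    The monthly-scaling step mentioned above is simple enough to show directly. In this sketch, synthetic short-interval estimates (daily, for brevity) are multiplied by a single ratio so that they sum to a hypothetical gauge-adjusted monthly total; all values are invented.

```python
import numpy as np

rng = np.random.default_rng(10)
daily_sat = rng.gamma(0.7, 4.0, 30)      # mm/day, synthetic satellite estimates
monthly_sat_gauge = 95.0                 # mm, gauge-adjusted monthly total

# One multiplicative factor preserves the sub-monthly pattern while forcing
# the short-interval estimates to sum to the satellite-gauge combination.
ratio = monthly_sat_gauge / daily_sat.sum()
daily_scaled = daily_sat * ratio
print(f"scale {ratio:.2f}; new monthly sum {daily_scaled.sum():.1f} mm")
```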

  4. A quick earthquake disaster loss assessment method supported by dasymetric data for emergency response in China

    NASA Astrophysics Data System (ADS)

    Xu, Jinghai; An, Jiwen; Nie, Gaozong

    2016-04-01

    Improving earthquake disaster loss estimation speed and accuracy is one of the key factors in effective earthquake response and rescue. The presentation of exposure data by applying a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake loss related to different seismic intensities and store them in a 30'' × 30'' grid format, which has several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage and calculating human losses. Then, in the co-earthquake phase, there are two stages of estimating loss: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; then, using the seismic intensity field to extract statistics of losses from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated by four actual earthquakes that occurred in China. The method not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, related pre-calculated earthquake loss estimation data in China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
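    A minimal sketch of the two-phase idea described above, with an invented grid size, random placeholders for the pre-computed per-cell losses, and a toy elliptical isoseismal field standing in for the real exposure and intensity calculations.

```python
import numpy as np

rng = np.random.default_rng(2)
ny, nx = 120, 120                        # illustrative grid size

# Pre-earthquake phase: loss per cell, pre-computed for intensity levels
# VI..IX (indices 0..3); random placeholders stand in for exposure-based
# damage-probability calculations.
precomputed = {i: rng.gamma(2.0, 50.0, size=(ny, nx)) * (i + 1)
               for i in range(4)}

# Co-earthquake phase: a theoretical isoseismal map assigns an intensity
# index to each cell (elliptical decay from the epicentre, illustrative).
yy, xx = np.mgrid[0:ny, 0:nx]
dist = np.hypot(yy - 60.0, (xx - 60.0) / 1.5)
intensity = np.clip(3 - dist // 15, -1, 3).astype(int)    # -1: below level VI

# Summing the matching pre-computed cells gives the event loss estimate.
total_loss = sum(precomputed[i][intensity == i].sum() for i in range(4))
print(f"estimated total loss (arbitrary units): {total_loss:,.0f}")
```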

  5. Posthospitalization home health care use and changes in functional status in a Medicare population.

    PubMed

    Hadley, J; Rabin, D; Epstein, A; Stein, S; Rimes, C

    2000-05-01

    The objective of this work was to estimate the effect of Medicare beneficiaries' use of home health care (HHC) for 6 months after hospital discharge on the change in functional status over a 1-year period beginning before hospitalization. Data came from the Medicare Current Beneficiary Survey, a nationally representative sample of Medicare beneficiaries, in-person interview data, and Medicare claims for 1991 through 1994 for 2,127 nondisabled, community-dwelling, elderly Medicare beneficiaries who were hospitalized within 6 months of their annual in-person interviews. Econometric estimation with the instrumental variable method was used to correct for observational data bias, i.e., the nonrandom allocation of discharged beneficiaries to the use of posthospitalization HHC. The analysis fits a first-stage model of HHC use, from which an instrumental variable estimate of the effect on change in functional status is constructed. The instrumental variable estimates suggest that HHC users experienced greater improvements in functional status than nonusers, as measured by the change in a continuous scale based on the number and mix of activities of daily living and instrumental activities of daily living before and after hospitalization. The estimated improvement in functional status could be as large as 13% for a 10% increase in HHC use. In contrast, estimation with the observational data on HHC use implies that HHC users had poorer health outcomes. Adjusting for potential observational data bias is critical to obtaining estimates of the relationship between the use of posthospitalization HHC and the change in health before and after hospitalization. After adjustment, the results suggest that efforts to constrain Medicare's spending for HHC, as required by the Balanced Budget Act of 1997, may lead to poorer health outcomes for some beneficiaries.
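    A minimal sketch of why the instrumental variable correction matters, using synthetic data in which sicker patients both use more HHC and recover less: naive OLS then suggests HHC harms, while two-stage least squares recovers the true positive effect. All data-generating numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
u = rng.standard_normal(n)                     # unobserved frailty
z = rng.standard_normal(n)                     # instrument: shifts HHC use only
hhc = (0.8 * z + 0.9 * u + rng.standard_normal(n) > 0).astype(float)
dy = 0.5 * hhc - 1.0 * u + rng.standard_normal(n)   # true HHC effect: +0.5

# Naive OLS: biased because frailty u drives both HHC use and outcomes.
X = np.column_stack([np.ones(n), hhc])
b_ols = np.linalg.lstsq(X, dy, rcond=None)[0][1]

# 2SLS: stage 1 predicts HHC from the instrument; stage 2 uses the prediction.
Z = np.column_stack([np.ones(n), z])
hhc_hat = Z @ np.linalg.lstsq(Z, hhc, rcond=None)[0]
X2 = np.column_stack([np.ones(n), hhc_hat])
b_iv = np.linalg.lstsq(X2, dy, rcond=None)[0][1]

print(f"naive OLS {b_ols:+.2f}, 2SLS {b_iv:+.2f}, truth +0.50")
```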

  6. Can we improve top-down GHG inverse methods through informed prior and better representations of atmospheric transport? Insights from the Atmospheric Carbon and Transport (ACT) - America Aircraft Mission

    NASA Astrophysics Data System (ADS)

    Feng, S.; Lauvaux, T.; Keller, K.; Davis, K. J.

    2016-12-01

    Current estimates of biogenic carbon fluxes over North America based on top-down atmospheric inversions are subject to considerable uncertainty. This uncertainty stems in large part from uncertain prior flux estimates with their associated error covariances, and from approximations in the atmospheric transport models that link observed carbon dioxide mixing ratios with surface fluxes. Specifically, approximations in the representation of vertical mixing associated with atmospheric turbulence or convective transport, together with largely under-determined prior fluxes and their error structures, significantly hamper our capacity to reliably estimate regional carbon fluxes. The Atmospheric Carbon and Transport - America (ACT-America) mission aims at reducing the uncertainties in inverse fluxes at the regional scale by deploying airborne and ground-based platforms to characterize atmospheric GHG mixing ratios and the concurrent atmospheric dynamics. Two aircraft measure the 3-dimensional distribution of greenhouse gases at synoptic scales, focusing on the atmospheric boundary layer and the free troposphere during both fair and stormy weather conditions. Here we analyze two main questions: (i) What level of information can we expect from the currently planned observations? (ii) How might ACT-America reduce the hindcast and predictive uncertainty of carbon estimates over North America?
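    A minimal sketch of the Bayesian update at the core of such top-down inversions: prior fluxes, a transport operator, and mixing-ratio observations combine into posterior fluxes. The tiny synthetic operator and covariances below stand in for a real transport model and error structures.

```python
import numpy as np

rng = np.random.default_rng(8)
n_flux, n_obs = 4, 6
x_true = np.array([1.0, -0.5, 0.3, 0.8])            # regional fluxes (arb. units)
H = rng.uniform(0, 1, (n_obs, n_flux))              # transport/footprint operator
y = H @ x_true + 0.05 * rng.standard_normal(n_obs)  # observed mixing-ratio signal

x_prior = np.zeros(n_flux)                          # prior flux estimate
B = 0.5 * np.eye(n_flux)                            # prior error covariance
R = 0.05 ** 2 * np.eye(n_obs)                       # observation error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)        # Kalman-type gain
x_post = x_prior + K @ (y - H @ x_prior)
print("posterior:", np.round(x_post, 2), "truth:", x_true)
```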

  7. Cost, Energy, and Environmental Impact of Automated Electric Taxi Fleets in Manhattan.

    PubMed

    Bauer, Gordon S; Greenblatt, Jeffery B; Gerke, Brian F

    2018-04-17

    Shared automated electric vehicles (SAEVs) hold great promise for improving transportation access in urban centers while drastically reducing transportation-related energy consumption and air pollution. Using taxi-trip data from New York City, we develop an agent-based model to predict the battery range and charging infrastructure requirements of a fleet of SAEVs operating on Manhattan Island. We also develop a model to estimate the cost and environmental impact of providing service and perform extensive sensitivity analysis to test the robustness of our predictions. We estimate that costs will be lowest with a battery range of 50-90 mi, with either 66 chargers per square mile rated at 11 kW or 44 chargers per square mile rated at 22 kW. We estimate that the cost of service provided by such an SAEV fleet will be $0.29-$0.61 per revenue mile, an order of magnitude lower than the cost of service of present-day Manhattan taxis and $0.05-$0.08/mi lower than that of an automated fleet composed of any currently available hybrid or internal combustion engine vehicle (ICEV). We estimate that such an SAEV fleet drawing power from the current NYC power grid would reduce GHG emissions by 73% and energy consumption by 58% compared to an automated fleet of ICEVs.

  8. Using Multitemporal Remote Sensing Imagery and Inundation Measures to Improve Land Change Estimates in Coastal Wetlands

    USGS Publications Warehouse

    Allen, Y.C.; Couvillion, B.R.; Barras, J.A.

    2012-01-01

    Remote sensing imagery can be an invaluable resource to quantify land change in coastal wetlands. Obtaining an accurate measure of land change can, however, be complicated by differences in fluvial and tidal inundation experienced when the imagery is captured. This study classified Landsat imagery from two wetland areas in coastal Louisiana from 1983 to 2010 into categories of land and water. Tide height, river level, and date were used as independent variables in a multiple regression model to predict land area in the Wax Lake Delta (WLD) and compare those estimates with an adjacent marsh area lacking direct fluvial inputs. Coefficients of determination from regressions using both measures of water level along with date as predictor variables of land extent in the WLD were higher than those obtained using the current methodology, which only uses date to predict land change. Land change trend estimates were also improved when the data were divided by time period. Water-level-corrected land gain in the WLD from 1983 to 2010 was 1 km²/year, while rates in the adjacent marsh remained roughly constant. This approach of isolating environmental variability due to changing water levels improves estimates of actual land change in a dynamic system, so that other processes that may control delta development, such as hurricanes, floods, and sediment delivery, may be further investigated. © 2011 Coastal and Estuarine Research Federation (outside the USA).
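    A minimal sketch of the regression idea with synthetic observations: classified land area is regressed on date plus tide and river covariates, so the trend coefficient is estimated net of water-level effects. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
years = rng.uniform(1983, 2010, n)
tide = rng.normal(0.0, 0.3, n)            # tide height anomaly (m)
river = rng.normal(0.0, 1.0, n)           # river stage anomaly (m)

# Land grows ~1 km^2/yr, but apparent area shrinks when water is high.
land = (50 + 1.0 * (years - 1983) - 8.0 * tide - 3.0 * river
        + rng.normal(0, 2, n))            # classified land area (km^2)

# Ordinary least squares with an intercept, trend, and water-level terms.
X = np.column_stack([np.ones(n), years - 1983, tide, river])
beta, *_ = np.linalg.lstsq(X, land, rcond=None)
print(f"trend {beta[1]:.2f} km^2/yr, tide coef {beta[2]:.1f}, "
      f"river coef {beta[3]:.1f}")
```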

  9. Tuberculosis in a South African prison – a transmission modelling analysis

    PubMed Central

    Johnstone-Robertson, Simon; Lawn, Stephen D; Welte, Alex; Bekker, Linda-Gail; Wood, Robin

    2015-01-01

    Background Prisons are recognised internationally as institutions with very high tuberculosis (TB) burdens, where transmission is predominantly determined by contact between infectious and susceptible prisoners. A recent South African court case described the conditions under which prisoners awaiting trial were kept. With the use of these data, a mathematical model was developed to explore the interactions between incarceration conditions and TB control measures. Methods Cell dimensions, cell occupancy, lock-up time, TB incidence and treatment delays were derived from court evidence and judicial reports. Using the Wells-Riley equation and probability analyses of contact between prisoners, we estimated the current TB transmission probability within prison cells, and estimated transmission probabilities under improved levels of case finding in combination with implementation of national and international minimum standards for incarceration. Results Levels of overcrowding (230%) in communal cells and poor TB case finding result in annual TB transmission risks of 90%. Implementing current national or international cell occupancy recommendations would reduce TB transmission probabilities by 30% and 50%, respectively. Improved passive case finding, a modest ventilation increase or decreased lock-up time would have minimal impact on transmission if introduced individually. However, active case finding together with implementation of minimum national and international standards of incarceration could reduce transmission by 50% and 94%, respectively. Conclusions Current conditions of detention for awaiting-trial prisoners are highly conducive to the spread of drug-sensitive and drug-resistant TB. Combinations of simple, well-established scientific control measures should be implemented urgently. PMID:22272961
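    The Wells-Riley equation named in the Methods is compact enough to show directly. The sketch below evaluates it with illustrative cell parameters, not the court-record values used in the study.

```python
import math

def wells_riley(infectors, quanta_per_hr, breath_m3_per_hr, hours, vent_m3_per_hr):
    """Infection probability for a susceptible sharing the air space."""
    dose = infectors * quanta_per_hr * breath_m3_per_hr * hours / vent_m3_per_hr
    return 1.0 - math.exp(-dose)

# One infectious occupant, 1.25 quanta/h, 0.5 m^3/h breathing rate, a 14-h
# nightly lock-up, and 60 m^3/h of outdoor-air ventilation (all illustrative).
p_night = wells_riley(1, 1.25, 0.5, 14, 60)
p_month = 1.0 - (1.0 - p_night) ** 30            # repeated nightly exposures
print(f"per-night risk {p_night:.1%}, 30-night risk {p_month:.1%}")
```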

  10. Comparison Between One-Point Calibration and Two-Point Calibration Approaches in a Continuous Glucose Monitoring Algorithm

    PubMed Central

    Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl

    2014-01-01

    Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) over the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach were found to have higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
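    A minimal sketch of the mechanics being compared, on a toy linear sensor model: two-point calibration solves for slope and intercept from two reference pairs, while one-point calibration fixes the slope at a prior value and fits only an offset (a corrective intercept). All values are illustrative, not the study's algorithm.

```python
import numpy as np

true_slope, true_offset = 0.85, 4.0            # nA per mg/dL, background nA

def sensor_current(glucose, noise=0.0):
    return true_slope * glucose + true_offset + noise

# Reference fingersticks used for calibration (mg/dL), with measurement noise.
g1, g2 = 90.0, 180.0
i1, i2 = sensor_current(g1, 1.5), sensor_current(g2, -1.0)

# Two-point: solve for both slope and intercept from the two pairs.
slope2 = (i2 - i1) / (g2 - g1)
off2 = i1 - slope2 * g1

# One-point: fix the slope at a population prior and fit only the offset,
# so a single fingerstick suffices.
slope1 = 0.85
off1 = i1 - slope1 * g1

for g in (55.0, 120.0):                        # hypoglycemic and euglycemic truth
    i = sensor_current(g)
    est2 = (i - off2) / slope2
    est1 = (i - off1) / slope1
    print(f"true {g:5.0f}  two-point {est2:6.1f}  one-point {est1:6.1f}")
```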

  11. Robust w-Estimators for Cryo-EM Class Means.

    PubMed

    Huang, Chenxi; Tagare, Hemant D

    2016-02-01

    A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the class mean, improves the signal-to-noise ratio in single-particle reconstruction. The averaging step is often compromised by outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods are done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a w-estimator of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and the influence function, are investigated. An extension of the estimator to images with different contrast transfer functions is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers.
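    The paper's w-estimator is not reproduced in this record; the sketch below shows the same general idea, a thresholdless robust average computed by iteratively reweighted means with a smooth redescending (Tukey biweight) weight function, on synthetic images with planted outliers.

```python
import numpy as np

rng = np.random.default_rng(5)
n_img, h, w = 100, 16, 16
truth = rng.normal(0, 1, (h, w))
images = truth + 0.5 * rng.standard_normal((n_img, h, w))
images[:10] = rng.normal(5, 1, (10, h, w))       # 10 outliers (e.g. contaminants)

mean = images.mean(axis=0)                       # naive class mean to start
for _ in range(10):                              # iteratively reweighted mean
    resid = np.linalg.norm(images - mean, axis=(1, 2))
    c = 4.0 * np.median(resid)                   # smooth, data-driven scale
    wts = np.where(resid < c, (1 - (resid / c) ** 2) ** 2, 0.0)  # Tukey biweight
    mean = np.tensordot(wts, images, axes=1) / wts.sum()

print(f"naive error  {np.linalg.norm(images.mean(axis=0) - truth):.2f}")
print(f"robust error {np.linalg.norm(mean - truth):.2f}")
```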

  12. Machine Learning Based Diagnosis of Lithium Batteries

    NASA Astrophysics Data System (ADS)

    Ibe-Ekeocha, Chinemerem Christopher

    The depletion of the world's petroleum reserves, coupled with the negative effects of carbon monoxide and other harmful petrochemical by-products on the environment, is the driving force behind the movement towards renewable and sustainable energy sources. Furthermore, the growing transportation sector consumes a significant portion of the total energy used in the United States. A complete electrification of this sector would require a significant development in electric vehicles (EVs) and hybrid electric vehicles (HEVs), thus translating to a reduction in the carbon footprint. As the market for EVs and HEVs grows, their battery management systems (BMS) need to be improved accordingly. The BMS is not only responsible for optimally charging and discharging the battery, but also for monitoring the battery's state of charge (SOC) and state of health (SOH). SOC, similar to an energy gauge, is a representation of a battery's remaining charge level as a percentage of its total possible charge at full capacity. Similarly, SOH is a measure of the deterioration of a battery and is thus a representation of the battery's age. Neither SOC nor SOH is directly measurable, so it is important that these quantities are estimated accurately. An inaccurate estimation could not only be inconvenient for EV consumers, but also potentially detrimental to the battery's performance and life. Such estimation could be implemented either online, while the battery is in use, or offline, when the battery is at rest. This thesis presents intelligent online SOC and SOH estimation methods using machine learning tools such as artificial neural networks (ANNs). ANNs are a powerful generalization tool if programmed and trained effectively. Unlike other estimation strategies, the techniques used require no battery modeling or knowledge of battery internal parameters; rather, they use the battery's voltage, charge/discharge current, and ambient temperature measurements to accurately estimate the battery's SOC and SOH. The developed algorithms are evaluated experimentally using two different batteries, namely lithium iron phosphate (LiFePO4) and lithium titanate (LTO), both subjected to constant and dynamic current profiles. Results highlight the robustness of these algorithms to the battery's nonlinear dynamic nature, hysteresis, aging, dynamic current profiles, and parametric uncertainties. Consequently, these methods are suitable and effective if incorporated into the BMS of EVs, HEVs, and other battery-powered devices.
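    A minimal sketch of the mapping the thesis describes: a small neural network regresses SOC from voltage, current, and temperature. A toy battery model is assumed only to generate training data, and scikit-learn stands in for whatever framework the thesis actually used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
n = 4000
soc = rng.uniform(0.05, 0.95, n)
current = rng.uniform(-2.0, 2.0, n)                   # A, positive = discharge
temp = rng.uniform(10.0, 40.0, n)                     # deg C

# Toy open-circuit-voltage curve plus IR drop and a mild thermal effect
# (an invented surrogate for real measured charge/discharge data).
volt = (3.2 + 0.9 * soc - 0.08 * current + 0.002 * (temp - 25)
        + 0.01 * rng.standard_normal(n))

X = np.column_stack([volt, current, temp])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:3000], soc[:3000])                       # train on 3/4 of the data

rmse = np.sqrt(np.mean((model.predict(X[3000:]) - soc[3000:]) ** 2))
print(f"held-out SOC RMSE: {rmse:.3f}")
```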

  13. Use of Smartphones to Estimate Carbohydrates in Foods for Diabetes Management.

    PubMed

    Huang, Jurong; Ding, Hang; McBride, Simon; Ireland, David; Karunanithi, Mohan

    2015-01-01

    Over 380 million adults worldwide are currently living with diabetes, and the number has been projected to reach 590 million by 2035. Uncontrolled diabetes often leads to complications, disability, and early death. In the management of diabetes, dietary intervention to control carbohydrate intake is essential to help keep daily blood glucose levels within a recommended range. The intervention traditionally relies on self-report to estimate carbohydrate intake through a paper-based diary. This traditional approach is known to be inaccurate, inconvenient, and resource intensive. Additionally, patients often require a long period of learning or training to achieve a certain level of accuracy and reliability. To address these issues, we propose a design for a smartphone application that automatically estimates carbohydrate intake from food images. The application uses image processing techniques to classify food type, estimate food volume, and accordingly calculate the amount of carbohydrates. To examine the proof of concept, a small fruit database was created to train a classification algorithm implemented in the application. Consequently, a set of fruit photos (n=6) from a real smartphone was used to evaluate the accuracy of the carbohydrate estimation. This study demonstrates the potential of smartphones to improve dietary intervention, although further studies are needed to improve the accuracy and extend the capability of the smartphone application to analyse a broader range of foods.
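    A minimal sketch of the final arithmetic step, assuming the classifier and volume estimator have already produced a food label and a volume; the density and carbohydrate-fraction table is an illustrative placeholder, not the application's database.

```python
# label: (density g/cm^3, carbohydrate fraction by mass) -- illustrative values
FOOD_TABLE = {
    "apple": (0.80, 0.14),
    "banana": (0.94, 0.23),
    "orange": (0.90, 0.12),
}

def carbs_grams(label: str, volume_cm3: float) -> float:
    """Carbohydrate mass from classified food type and estimated volume."""
    density, carb_frac = FOOD_TABLE[label]
    return volume_cm3 * density * carb_frac

# e.g. a segmented apple whose volume was estimated at 180 cm^3:
print(f"{carbs_grams('apple', 180.0):.0f} g carbohydrate")
```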

  14. Future Directions for the National Health Accounts

    PubMed Central

    Huskamp, Haiden A.; Newhouse, Joseph P.

    1999-01-01

    Over the past 15 years, the Health Care Financing Administration (HCFA) has engaged in ongoing efforts to improve the methodology and data collection processes used to develop the national health accounts (NHA) estimates of national health expenditures (NHE). In March 1998, HCFA initiated a third conference to explore possible improvements or useful extensions to the current NHA projects. This article summarizes the issues discussed at the conference, provides an overview of three commissioned papers on future directions for the NHA that were presented, and summarizes suggestions made by participants regarding future directions for the accounts. PMID:11481786

  15. The role of remote wind forcing in the subinertial current variability in the central and northern parts of the South Brazil Bight

    NASA Astrophysics Data System (ADS)

    Dottori, Marcelo; Castro, Belmiro Mendes

    2018-06-01

    Data analysis of continental shelf currents and coastal sea level, together with the application of a semi-analytical model, is used to estimate the importance of remote wind forcing on the subinertial variability of the current in the central and northern areas of the South Brazil Bight (SBB). Results from both the data analysis and the semi-analytical model are robust in showing subinertial variability that propagates along-shelf leaving the coast to the left, in accordance with theoretical studies of Continental Shelf Waves (CSW). The subinertial variability observed in along-shelf currents and sea level oscillations presents different propagation speeds for the narrow northern part of the SBB (approximately 6-7 m/s) and the wide central SBB region (approximately 11 m/s), both estimates being in agreement with the modeled CSW propagation speed. On the inner and middle shelf, observed along-shelf subinertial currents show higher correlation coefficients with winds located southward and earlier in time than with the local wind at the current meter mooring position at the time of measurement. The inclusion of the remote (southwestward) wind forcing improves the prediction of the subinertial currents compared to currents forced only by the local wind: the modeled along-shelf currents show correlation coefficients with observed along-shelf currents up to 20% higher on the inner and middle shelf when the remote wind is included. For most of the outer shelf, on the other hand, this is not observed, since the correlation between the currents and the synoptic winds is usually not statistically significant.
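    A minimal sketch of how a propagation speed of this kind can be read off two alongshore records: the lag of peak cross-correlation between synthetic sea-level series at stations a known distance apart gives distance over lag. The forcing, noise levels, 400-km separation, and 7 m/s speed are all invented.

```python
import numpy as np

rng = np.random.default_rng(6)
dt = 3600.0                          # hourly sea-level samples (s)
n = 24 * 60                          # 60 days of data
sep_m = 400e3                        # alongshore station separation (m)
speed_true = 7.0                     # illustrative CSW phase speed (m/s)
lag_true = int(round(sep_m / speed_true / dt))     # ~16 samples

# Weather-band forcing: white noise smoothed with a ~2-day window.
kernel = np.hanning(49)
kernel /= kernel.sum()
f_long = np.convolve(rng.standard_normal(n + 200), kernel, mode="same")
south = f_long[100:100 + n] + 0.05 * rng.standard_normal(n)
north = f_long[100 - lag_true:100 - lag_true + n] + 0.05 * rng.standard_normal(n)

# Peak of the normalized cross-correlation gives the travel time.
a = (south - south.mean()) / south.std()
b = (north - north.mean()) / north.std()
lags = np.arange(-(n - 1), n)
lag_hat = lags[np.argmax(np.correlate(b, a, mode="full"))]
print(f"estimated speed: {sep_m / (lag_hat * dt):.1f} m/s (true {speed_true})")
```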

  16. The role of remote wind forcing in the subinertial current variability in the central and northern parts of the South Brazil Bight

    NASA Astrophysics Data System (ADS)

    Dottori, Marcelo; Castro, Belmiro Mendes

    2018-05-01

    Data analysis of continental shelf currents and coastal sea level, together with the application of a semi-analytical model, is used to estimate the importance of remote wind forcing on the subinertial variability of the current in the central and northern areas of the South Brazil Bight (SBB). Results from both the data analysis and the semi-analytical model are robust in showing subinertial variability that propagates along-shelf leaving the coast to the left, in accordance with theoretical studies of Continental Shelf Waves (CSW). The subinertial variability observed in along-shelf currents and sea level oscillations presents different propagation speeds for the narrow northern part of the SBB (approximately 6-7 m/s) and the wide central SBB region (approximately 11 m/s), both estimates being in agreement with the modeled CSW propagation speed. On the inner and middle shelf, observed along-shelf subinertial currents show higher correlation coefficients with winds located southward and earlier in time than with the local wind at the current meter mooring position at the time of measurement. The inclusion of the remote (southwestward) wind forcing improves the prediction of the subinertial currents compared to currents forced only by the local wind: the modeled along-shelf currents show correlation coefficients with observed along-shelf currents up to 20% higher on the inner and middle shelf when the remote wind is included. For most of the outer shelf, on the other hand, this is not observed, since the correlation between the currents and the synoptic winds is usually not statistically significant.

  17. Geomagnetically Induced Currents (GIC) calculation, impact assessment on the transmission system and validation using 3-D earth conductivity tensors and GIC measurements.

    NASA Astrophysics Data System (ADS)

    Sharma, R.; McCalley, J. D.

    2016-12-01

    Geomagnetic disturbances (GMD) cause the flow of geomagnetically induced currents (GIC) in the power transmission system, which may cause large-scale power outages and power system equipment damage. In order to plan defenses against GMD, it is necessary to accurately estimate the flow of GICs in the power transmission system. The current calculation as per NERC standards uses 1-D earth conductivity models that do not reflect the coupling between the geoelectric and geomagnetic field components in the same direction. For accurate estimation of GICs, it is important to have spatially granular 3-D earth conductivity tensors, an accurate DC network model of the transmission system, and precisely estimated or measured input in the form of geomagnetic or geoelectric field data. Using these models and data, pre-event, post-event and online planning and assessment can be performed by calculating GIC, analyzing voltage stability margins, identifying protection system vulnerabilities and estimating heating in transmission equipment. In order to perform these tasks, an established GIC calculation and analysis procedure is needed that uses improved geophysical and DC network models obtained by model parameter tuning. The issue is addressed by performing the following tasks: (1) Geomagnetic field data and improved 3-D earth conductivity tensors are used to plot the geoelectric field map of a given area. The obtained geoelectric field map then serves as an input to the PSS/E platform, where the GIC flows are calculated through DC circuit analysis. (2) The computed GIC is evaluated against GIC measurements in order to fine-tune the geophysical and DC network model parameters for any mismatch between calculated and measured GIC. (3) The GIC calculation procedure is then adapted for a one-in-100-year storm, in order to assess the impact of a worst-case GMD on the power system. (4) Using the transformer models, the voltage stability margin is analyzed for various real and synthetic geomagnetic or geoelectric field inputs, by calculating the reactive power absorbed by the transformers during an event. All four steps will help electric utilities and planners make use of better and more accurate estimation techniques for GIC calculation and impact assessment for future GMD events.
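    A minimal sketch of the DC-network step in task (1): induced EMFs along transmission lines become Norton current injections, and nodal (neutral) voltages follow from solving Gv = j. The 3-substation network, resistances, and EMFs below are illustrative, not a real system model.

```python
import numpy as np

# Line data: (from_bus, to_bus, resistance_ohm, induced_emf_volts).
# Each EMF is the line integral of the geoelectric field along the route.
lines = [(0, 1, 3.0, 120.0), (1, 2, 4.0, -60.0), (0, 2, 5.0, 40.0)]
r_ground = [0.5, 0.7, 0.6]                 # substation grounding resistances (ohm)

n = 3
G = np.diag([1.0 / r for r in r_ground])   # ground paths on the diagonal
j = np.zeros(n)
for a, b, r, emf in lines:
    g = 1.0 / r
    G[a, a] += g; G[b, b] += g             # standard nodal-conductance stamps
    G[a, b] -= g; G[b, a] -= g
    j[a] -= emf * g                        # Norton equivalent of the series EMF
    j[b] += emf * g

v = np.linalg.solve(G, j)                  # neutral-point voltages
gic_ground = v / np.array(r_ground)        # GIC into each substation ground
print("ground GIC (A):", np.round(gic_ground, 1))
```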

  18. Prediction and discovery of new geothermal resources in the Great Basin: Multiple evidence of a large undiscovered resource base

    USGS Publications Warehouse

    Coolbaugh, M.F.; Raines, G.L.; Zehner, R.E.; Shevenell, L.; Williams, C.F.

    2006-01-01

    Geothermal potential maps by themselves cannot directly be used to estimate undiscovered resources. To address the undiscovered resource base in the Great Basin, a new and relatively quantitative methodology is presented. The methodology involves three steps, the first being the construction of a data-driven probabilistic model of the location of known geothermal systems using weights of evidence. The second step is the construction of a degree-of-exploration model. This degree-of-exploration model uses expert judgment in a fuzzy logic context to estimate how well each spot in the state has been explored, using as constraints digital maps of the depth to the water table, presence of the carbonate aquifer, and the location, depth, and type of drill-holes. Finally, the exploration model and the data-driven occurrence model are combined together quantitatively using area-weighted modifications to the weights-of-evidence equations. Using this methodology in the state of Nevada, the number of undiscovered geothermal systems with reservoir temperatures ≥100 °C is estimated at 157, which is 3.2 times greater than the 69 known systems. Currently, nine of the 69 known systems are producing electricity. If it is conservatively assumed that an additional nine for a total of 18 of the known systems will eventually produce electricity, then the model predicts 59 known and undiscovered geothermal systems are capable of producing electricity under current economic conditions in the state, a figure that is more than six times higher than the current number. Many additional geothermal systems could potentially become economic under improved economic conditions or with improved methods of reservoir stimulation (Enhanced Geothermal Systems). This large predicted geothermal resource base appears corroborated by recent grass-roots geothermal discoveries in the state of Nevada. At least two and possibly three newly recognized geothermal systems with estimated reservoir temperatures ≥150 °C have been identified on the Pyramid Lake Paiute Reservation in west-central Nevada. Evidence of three blind geothermal systems has recently been uncovered near the borate-bearing playas at Rhodes, Teels, and Columbus Marshes in southwestern Nevada. Recent gold exploration drilling has resulted in at least four new geothermal discoveries, including the McGinness Hills geothermal system with an estimated reservoir temperature of roughly 200 °C. All of this evidence suggests that the potential for expansion of geothermal power production in Nevada is significant.
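    A minimal sketch of the weights-of-evidence update for a single binary evidence layer (the first step of the methodology, without the degree-of-exploration modification); the areas and counts are illustrative, not the Nevada model's values.

```python
import math

area_total = 10_000.0     # unit cells in the study region (illustrative)
n_sys = 69                # training sites (known geothermal systems)
area_b = 1_500.0          # cells where the evidence layer is present
n_on = 40                 # training sites falling on the layer (illustrative)

p_b_d = n_on / n_sys                                  # P(B | deposit)
p_b_nd = (area_b - n_on) / (area_total - n_sys)       # P(B | no deposit)
w_plus = math.log(p_b_d / p_b_nd)                     # weight where B present
w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))        # weight where B absent

prior_logit = math.log(n_sys / (area_total - n_sys))  # prior log-odds per cell
post_on = 1 / (1 + math.exp(-(prior_logit + w_plus)))
post_off = 1 / (1 + math.exp(-(prior_logit + w_minus)))
print(f"W+ {w_plus:+.2f}  W- {w_minus:+.2f}")
print(f"posterior per-cell probability: on-layer {post_on:.4f}, "
      f"off-layer {post_off:.4f}")
```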

  19. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    PubMed

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. In addition, to assess the accuracy of each method in estimating patient organ doses, Monte Carlo simulations were performed by creating voxelized models of each patient, identifying key organs and incorporating tube current values into the simulations to estimate dose to the lungs and breasts (females only) for chest scans and the liver, kidney, and spleen for abdomen/pelvis scans. Organ doses from simulations using the actual tube current values were compared to those using each of the estimated tube current values (actual-topo and sim-topo). When compared to the actual tube current values, the average error for tube current values estimated from the actual topogram (actual-topo) and simulated topogram (sim-topo) was 3.9% and 5.8% respectively. For Monte Carlo simulations of chest CT exams using the actual tube current values and estimated tube current values (based on the actual-topo and sim-topo methods), the average differences for lung and breast doses ranged from 3.4% to 6.6%. For abdomen/pelvis exams, the average differences for liver, kidney, and spleen doses ranged from 4.2% to 5.3%. Strong agreement between organ doses estimated using actual and estimated tube current values provides validation of both methods for estimating tube current values based on data provided in the topogram or simulated from image data. © 2017 American Association of Physicists in Medicine.
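    A minimal sketch of the kind of AEC behaviour being estimated: tube current follows patient attenuation, here summarized by water-equivalent diameter as in AAPM Report 220, scaled to an image-quality reference and clipped to system limits. The attenuation coefficient, strength exponent, and limits are illustrative assumptions, not the manufacturer's algorithm.

```python
import numpy as np

def estimate_ma(water_eq_diam_cm, ref_ma=200.0, ref_diam_cm=30.0,
                strength=0.7, ma_min=50.0, ma_max=650.0):
    """Tube current per view from water-equivalent diameter (illustrative)."""
    # Attenuation grows roughly exponentially with diameter
    # (mu_water ~ 0.19/cm); a strength exponent < 1 deliberately
    # under-compensates, as AEC systems typically do.
    rel_atten = np.exp(0.19 * (np.asarray(water_eq_diam_cm) - ref_diam_cm))
    return np.clip(ref_ma * rel_atten ** strength, ma_min, ma_max)

# Water-equivalent diameters along z for a hypothetical chest scan:
diam = np.array([24.0, 27.0, 30.0, 33.0, 36.0])
print(np.round(estimate_ma(diam), 0))
```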

  20. mBEEF-vdW: Robust fitting of error estimation density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes

    Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.

  1. mBEEF-vdW: Robust fitting of error estimation density functionals

    DOE PAGES

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; ...

    2016-06-15

    Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
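
    A minimal sketch of the plain 0.632 bootstrap that the authors generalize; their hierarchical sampling and geometric-mean aggregation are not reproduced, and the least-squares model and data here are placeholders:

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 5))
      y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)

      def fit_predict(Xtr, ytr, Xte):
          """Ordinary least squares as a placeholder model."""
          w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
          return Xte @ w

      def err632(X, y, n_boot=200):
          """Plain 0.632 bootstrap: 0.368 * training error + 0.632 * out-of-bag error."""
          n = len(y)
          err_train = np.mean((y - fit_predict(X, y, X))**2)
          oob_errs = []
          for _ in range(n_boot):
              idx = rng.integers(0, n, n)              # bootstrap resample
              oob = np.setdiff1d(np.arange(n), idx)    # points left out of the resample
              if oob.size:
                  pred = fit_predict(X[idx], y[idx], X[oob])
                  oob_errs.append(np.mean((y[oob] - pred)**2))
          return 0.368 * err_train + 0.632 * np.mean(oob_errs)

      print(err632(X, y))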

  2. Incidence of induced abortion in Malawi, 2015.

    PubMed

    Polis, Chelsea B; Mhango, Chisale; Philbin, Jesse; Chimwaza, Wanangwa; Chipeta, Effie; Msusa, Ausbert

    2017-01-01

    In Malawi, abortion is legal only if performed to save a woman's life; other attempts to procure an abortion are punishable by 7-14 years imprisonment. Most induced abortions in Malawi are performed under unsafe conditions, contributing to Malawi's high maternal mortality ratio. Malawians are currently debating whether to provide additional exceptions under which an abortion may be legally obtained. An estimated 67,300 induced abortions occurred in Malawi in 2009 (equivalent to 23 abortions per 1,000 women aged 15-44), but changes since 2009, including dramatic increases in contraceptive prevalence, may have impacted abortion rates. We conducted a nationally representative survey of health facilities to estimate the number of cases of post-abortion care, as well as a survey of knowledgeable informants to estimate the probability of needing and obtaining post-abortion care following induced abortion. These data were combined with national population and fertility data to determine current estimates of induced abortion and unintended pregnancy in Malawi using the Abortion Incidence Complications Methodology. We estimate that approximately 141,044 (95% CI: 121,161-160,928) induced abortions occurred in Malawi in 2015, translating to a national rate of 38 abortions per 1,000 women aged 15-49 (95% CI: 32 to 43), which varied by geographical zone (range: 28-61). We estimate that 53% of pregnancies in Malawi are unintended, and that 30% of unintended pregnancies end in abortion. Given the challenges of estimating induced abortion, and the assumptions required for calculation, results should be viewed as approximate estimates, rather than exact measures. The estimated abortion rate in 2015 is higher than in 2009 (potentially due to methodological differences), but similar to recent estimates from nearby countries including Tanzania (36), Uganda (39), and regional estimates in Eastern and Southern Africa (34-35). Over half of pregnancies in Malawi are unintended. Our findings should inform ongoing efforts to reduce maternal morbidity and mortality and to improve public health in Malawi.

  3. Incidence of induced abortion in Malawi, 2015

    PubMed Central

    Mhango, Chisale; Philbin, Jesse; Chimwaza, Wanangwa; Chipeta, Effie; Msusa, Ausbert

    2017-01-01

    Background In Malawi, abortion is legal only if performed to save a woman’s life; other attempts to procure an abortion are punishable by 7–14 years imprisonment. Most induced abortions in Malawi are performed under unsafe conditions, contributing to Malawi’s high maternal mortality ratio. Malawians are currently debating whether to provide additional exceptions under which an abortion may be legally obtained. An estimated 67,300 induced abortions occurred in Malawi in 2009 (equivalent to 23 abortions per 1,000 women aged 15–44), but changes since 2009, including dramatic increases in contraceptive prevalence, may have impacted abortion rates. Methods We conducted a nationally representative survey of health facilities to estimate the number of cases of post-abortion care, as well as a survey of knowledgeable informants to estimate the probability of needing and obtaining post-abortion care following induced abortion. These data were combined with national population and fertility data to determine current estimates of induced abortion and unintended pregnancy in Malawi using the Abortion Incidence Complications Methodology. Results We estimate that approximately 141,044 (95% CI: 121,161–160,928) induced abortions occurred in Malawi in 2015, translating to a national rate of 38 abortions per 1,000 women aged 15–49 (95% CI: 32 to 43), which varied by geographical zone (range: 28–61). We estimate that 53% of pregnancies in Malawi are unintended, and that 30% of unintended pregnancies end in abortion. Given the challenges of estimating induced abortion, and the assumptions required for calculation, results should be viewed as approximate estimates, rather than exact measures. Conclusions The estimated abortion rate in 2015 is higher than in 2009 (potentially due to methodological differences), but similar to recent estimates from nearby countries including Tanzania (36), Uganda (39), and regional estimates in Eastern and Southern Africa (34–35). Over half of pregnancies in Malawi are unintended. Our findings should inform ongoing efforts to reduce maternal morbidity and mortality and to improve public health in Malawi. PMID:28369114
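
    For orientation, the headline rate follows from simple arithmetic once the abortion count and the female population aged 15-49 are fixed; the population figure below is back-calculated from the reported count and rate, and is an assumption rather than a quoted statistic:

      abortions = 141_044          # estimated induced abortions, 2015
      women_15_49 = 3.7e6          # assumed female population aged 15-49
      rate = abortions / women_15_49 * 1000
      print(f"{rate:.0f} abortions per 1,000 women")   # ~38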

  4. Improved Critical Eigenfunction Restriction Estimates on Riemannian Surfaces with Nonpositive Curvature

    NASA Astrophysics Data System (ADS)

    Xi, Yakun; Zhang, Cheng

    2017-03-01

    We show that one can obtain improved L^4 geodesic restriction estimates for eigenfunctions on compact Riemannian surfaces with nonpositive curvature. We achieve this by adapting Sogge's strategy in (Improved critical eigenfunction estimates on manifolds of nonpositive curvature, Preprint). We first combine the improved L^2 restriction estimate of Blair and Sogge (Concerning Toponogov's Theorem and logarithmic improvement of estimates of eigenfunctions, Preprint) and the classical improved L^∞ estimate of Bérard to obtain an improved weak-type L^4 restriction estimate. We then upgrade this weak estimate to a strong one by using the improved Lorentz space estimate of Bak and Seeger (Math Res Lett 18(4):767-781, 2011). This estimate improves the L^4 restriction estimate of Burq et al. (Duke Math J 138:445-486, 2007) and Hu (Forum Math 6:1021-1052, 2009) by a power of (log log λ)^{-1}. Moreover, in the case of compact hyperbolic surfaces, we obtain further improvements in terms of (log λ)^{-1} by applying the ideas from (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014) and (Blair and Sogge, Concerning Toponogov's Theorem and logarithmic improvement of estimates of eigenfunctions, Preprint). We are able to compute various constants that appeared in (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014) explicitly, by proving detailed oscillatory integral estimates and lifting calculations to the universal cover H^2.

  5. Mothers' Preferences and Willingness to Pay for Human Papillomavirus Vaccination for Their Daughters: A Discrete Choice Experiment in Hong Kong.

    PubMed

    Wong, Carlos K H; Man, Kenneth K C; Ip, Patrick; Kwan, Mike; McGhee, Sarah M

    2018-05-01

    To determine the preference of mothers in Hong Kong and their willingness to pay (WTP) for human papillomavirus (HPV) vaccination for their daughters. A discrete choice experiment survey with a two-alternative study design was developed. Data were collected from pediatric specialist outpatient clinics from 482 mothers with daughters aged between 8 and 17 years. Preferences for the four attributes of HPV vaccines (protection against cervical cancer, protection duration, side effects, and out-of-pocket costs) were evaluated. The marginal and overall WTP were estimated using multinomial logistic regression. A subgroup analysis was conducted to explore the impact of socioeconomic factors on mothers' WTP. Side effects, protection against cervical cancer, protection duration, and out-of-pocket cost determined the decision to receive or not receive the vaccine. All attributes had a statistically significant effect on the preference for and the WTP for the vaccine. Maximum WTP for ideal vaccines (i.e., 100% protection, lifetime protection duration, and 0% side effects) was HK$8976 (US $1129). The estimated WTP for vaccines currently available was HK$1620 (US $208), lower than the current market price. Among those who had a monthly household income of more than HK$100,000 (US $12,821), the WTP for vaccines currently offered was higher than the market price. This study provides new data on how features of the HPV vaccine are viewed and valued by mothers by determining their perception of ideal or improved and current vaccine technologies. These findings could contribute to future policies on the improvement of the HPV vaccine and be useful for the immunization service in Hong Kong. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
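
    For readers unfamiliar with discrete choice experiments: marginal WTP for an attribute is conventionally the negative ratio of its utility coefficient to the cost coefficient from the multinomial logit model. A sketch with invented coefficients (not the study's estimates):

      # Assumed multinomial logit coefficients (illustrative only)
      beta = {"protection": 0.034,   # per percentage point of protection
              "duration":   0.045,   # per year of protection duration
              "side_fx":   -0.060}   # per percentage point of side effects
      beta_cost = -0.0004            # per HK$ of out-of-pocket cost

      # Marginal WTP: HK$ a respondent would pay per unit of each attribute
      for attr, b in beta.items():
          print(attr, round(-b / beta_cost, 1), "HK$ per unit")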

  6. Estimating micro area behavioural risk factor prevalence from large population-based surveys: a full Bayesian approach.

    PubMed

    Seliske, L; Norwood, T A; McLaughlin, J R; Wang, S; Palleschi, C; Holowaty, E

    2016-06-07

    An important public health goal is to decrease the prevalence of key behavioural risk factors, such as tobacco use and obesity. Survey information is often available at the regional level, but heterogeneity within large geographic regions cannot be assessed. Advanced spatial analysis techniques are demonstrated to produce sensible micro area estimates of behavioural risk factors that enable identification of areas with high prevalence. A spatial Bayesian hierarchical model was used to estimate the micro area prevalence of current smoking and excess bodyweight for the Erie-St. Clair region in southwestern Ontario. Estimates were mapped for male and female respondents of five cycles of the Canadian Community Health Survey (CCHS). The micro areas were 2006 Census Dissemination Areas, with an average population of 400-700 people. Two individual-level models were specified: one controlled for survey cycle and age group (model 1), and one controlled for survey cycle, age group and micro area median household income (model 2). Post-stratification was used to derive micro area behavioural risk factor estimates weighted to the population structure. SaTScan analyses were conducted on the granular, postal-code level CCHS data to corroborate findings of elevated prevalence. Current smoking was elevated in two urban areas for both sexes (Sarnia and Windsor), and an additional small community (Chatham) for males only. Areas of excess bodyweight were prevalent in an urban core (Windsor) among males, but not females. Precision of the posterior post-stratified current smoking estimates was improved in model 2, as indicated by narrower credible intervals and a lower coefficient of variation. For excess bodyweight, both models had similar precision. Aggregation of the micro area estimates to CCHS design-based estimates validated the findings. This is among the first studies to apply a full Bayesian model to complex sample survey data to identify micro areas with variation in risk factor prevalence, accounting for spatial correlation and other covariates. Application of micro area analysis techniques helps define areas for public health planning, and may be informative to surveillance and research modeling of relevant chronic disease outcomes.
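
    Post-stratification, the weighting step described above, reduces to a population-weighted average of cell-level model estimates; a minimal sketch with assumed cells and counts:

      # Model-predicted smoking prevalence by age group (from the Bayesian model)
      p_cell = {"20-39": 0.24, "40-59": 0.19, "60+": 0.12}
      # Census population of one dissemination area in the same cells (assumed)
      n_cell = {"20-39": 210, "40-59": 180, "60+": 110}

      total = sum(n_cell.values())
      prevalence = sum(p_cell[g] * n_cell[g] for g in p_cell) / total
      print(f"post-stratified prevalence: {prevalence:.3f}")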

  7. Data assimilation and bathymetric inversion in a two-dimensional horizontal surf zone model

    NASA Astrophysics Data System (ADS)

    Wilson, G. W.; Özkan-Haller, H. T.; Holman, R. A.

    2010-12-01

    A methodology is described for assimilating observations in a steady state two-dimensional horizontal (2-DH) model of nearshore hydrodynamics (waves and currents), using an ensemble-based statistical estimator. In this application, we treat bathymetry as a model parameter, which is subject to a specified prior uncertainty. The statistical estimator uses state augmentation to produce posterior (inverse, updated) estimates of bathymetry, wave height, and currents, as well as their posterior uncertainties. A case study is presented, using data from a 2-D array of in situ sensors on a natural beach (Duck, NC). The prior bathymetry is obtained by interpolation from recent bathymetric surveys; however, the resulting prior circulation is not in agreement with measurements. After assimilating data (significant wave height and alongshore current), the accuracy of modeled fields is improved, and this is quantified by comparing with observations (both assimilated and unassimilated). Hence, for the present data, 2-DH bathymetric uncertainty is an important source of error in the model and can be quantified and corrected using data assimilation. Here the bathymetric uncertainty is ascribed to inadequate temporal sampling; bathymetric surveys were conducted on a daily basis, but bathymetric change occurred on hourly timescales during storms, such that hydrodynamic model skill was significantly degraded. Further tests are performed to analyze the model sensitivities used in the assimilation and to determine the influence of different observation types and sampling schemes.
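
    A compact sketch of the state-augmentation idea: the parameter (bathymetry) is stacked with the predicted observations so that the ensemble cross-covariance lets wave-height data update depth. The forward model below is a stand-in, not the surf-zone model used in the paper:

      import numpy as np
      rng = np.random.default_rng(1)

      Ne, nh, nb = 100, 8, 8                  # ensemble size, obs dim, parameter dim
      bathy = 2.0 + 0.5 * rng.normal(size=(Ne, nb))   # prior bathymetry ensemble

      def forward(b):
          """Placeholder hydrodynamic model: wave height decreases with depth."""
          return 1.0 / (1.0 + b)

      H = forward(bathy)                      # predicted observations (wave height)
      z = np.concatenate([H, bathy], axis=1)  # augmented state [predictions; parameter]

      obs = forward(2.5 * np.ones(nb)) + 0.01 * rng.normal(size=nh)  # synthetic data
      R = 0.01**2 * np.eye(nh)

      Za = z - z.mean(axis=0)                 # ensemble anomalies
      C = Za.T @ Za / (Ne - 1)                # augmented covariance
      K = C[:, :nh] @ np.linalg.inv(C[:nh, :nh] + R)   # Kalman gain
      z_post = z + (obs + 0.01 * rng.normal(size=(Ne, nh)) - z[:, :nh]) @ K.T
      print("posterior mean depth:", z_post[:, nh:].mean(axis=0).round(2))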

  8. Pregnancy intentions-a complex construct and call for new measures.

    PubMed

    Mumford, Sunni L; Sapra, Katherine J; King, Rosalind B; Louis, Jean Fredo; Buck Louis, Germaine M

    2016-11-01

    To estimate the prevalence of unintended pregnancies under relaxed assumptions regarding birth control use compared with a traditional constructed measure. Cross-sectional survey. Not applicable. Nationally representative sample of U.S. women aged 15-44 years. None. Prevalence of intended and unintended pregnancies as estimated by [1] a traditional constructed measure from the National Survey of Family Growth (NSFG), and [2] a constructed measure relaxing assumptions regarding birth control use, reasons for nonuse, and pregnancy timing. The prevalence of unintended pregnancies was 6% higher using the traditional constructed measure as compared with the approach with relaxed assumptions (NSFG: 44%, 95% confidence interval [CI] 41, 46; new construct 38%, 95% CI, 36, 41). Using the NSFG approach, only 92% of women who stopped birth control to become pregnant and 0 women who were not using contraceptives at the time of the pregnancy and reported that they did not mind getting pregnant were classified as having intended pregnancies, compared with 100% using the new construct. Current measures of pregnancy intention may overestimate rates of unintended pregnancy, with over 340,000 pregnancies in the United States misclassified as unintended using the current approach, corresponding to an estimated savings of $678 million in public health-care expenditures. Current constructs make assumptions that may not reflect contemporary reproductive practices, so improved measures are needed. Published by Elsevier Inc.

  9. United States benefits of improved worldwide wheat crop information from a LANDSAT system

    NASA Technical Reports Server (NTRS)

    Heiss, K. P.; Sand, F.; Seidel, A.; Warner, D.; Sheflin, N.; Bhattacharyya, R.; Andrews, J.

    1975-01-01

    The value of worldwide information improvements on wheat crops, promised by LANDSAT, is measured in the context of world wheat markets. These benefits are based on current LANDSAT technical goals and assume that information is made available to all (United States and other countries) at the same time. A detailed empirical sample demonstration of the effect of improved information is given; the history of wheat commodity prices for 1971-72 is reconstructed and the price changes from improved vs. historical information are compared. The improved crop forecasting assumed for a LANDSAT system includes wheat crop estimates of 90 percent accuracy for each major wheat-producing region. Accurate, objective worldwide wheat crop information using space systems may have a very stabilizing influence on world commodity markets, in part making possible the establishment of long-term, stable trade relationships.

  10. Integrating landslide and liquefaction hazard and loss estimates with existing USGS real-time earthquake information products

    USGS Publications Warehouse

    Allstadt, Kate E.; Thompson, Eric M.; Hearne, Mike; Nowicki Jessee, M. Anna; Zhu, J.; Wald, David J.; Tanyas, Hakan

    2017-01-01

    The U.S. Geological Survey (USGS) has made significant progress toward the rapid estimation of shaking and shaking-related losses through their Did You Feel It? (DYFI), ShakeMap, ShakeCast, and PAGER products. However, quantitative estimates of the extent and severity of secondary hazards (e.g., landsliding, liquefaction) are not currently included in scenarios and real-time post-earthquake products despite their significant contributions to hazard and losses for many events worldwide. We are currently running parallel global statistical models for landslides and liquefaction developed with our collaborators in testing mode, but much work remains in order to operationalize these systems. We are expanding our efforts in this area by not only improving the existing statistical models, but also by (1) exploring more sophisticated, physics-based models where feasible; (2) incorporating uncertainties; and (3) identifying and undertaking research and product development to provide useful landslide and liquefaction estimates and their uncertainties. Although our existing models use standard predictor variables that are accessible globally or regionally, including peak ground motions, topographic slope, and distance to water bodies, we continue to explore readily available proxies for rock and soil strength as well as other susceptibility terms. This work is based on the foundation of an expanding, openly available, case-history database we are compiling along with historical ShakeMaps for each event. The expected outcome of our efforts is a robust set of real-time secondary hazards products that meet the needs of a wide variety of earthquake information users. We describe the available datasets and models, developments currently underway, and anticipated products.
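
    Models of this family are often logistic regressions on globally available predictors; the following sketch uses invented coefficients, not the USGS operational values:

      import math

      def landslide_probability(pga_g, slope_deg, dist_to_river_km,
                                b=(-6.0, 4.0, 0.08, -0.2)):
          """Toy logistic model: P(landslide) from shaking, slope, and
          distance to water. Coefficients are illustrative assumptions."""
          b0, b1, b2, b3 = b
          eta = b0 + b1 * pga_g + b2 * slope_deg + b3 * dist_to_river_km
          return 1.0 / (1.0 + math.exp(-eta))

      print(landslide_probability(pga_g=0.4, slope_deg=30, dist_to_river_km=1.0))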

  11. Women who take n-3 long-chain polyunsaturated fatty acid supplements during pregnancy and lactation meet the recommended intake.

    PubMed

    Jia, Xiaoming; Pakseresht, Mohammadreza; Wattar, Nour; Wildgrube, Jamie; Sontag, Stephanie; Andrews, Murphy; Subhan, Fatheema Begum; McCargar, Linda; Field, Catherine J

    2015-05-01

    The aim of the current study was to estimate total intake and dietary sources of eicosapentaenoic acid (EPA), docosapentaenoic acid (DPA), and docosahexaenoic acid (DHA) and compare DHA intakes with the recommended intakes in a cohort of pregnant and lactating women. Twenty-four-hour dietary recalls and supplement intake questionnaires were collected from 600 women in the Alberta Pregnancy Outcomes and Nutrition (APrON) cohort at each trimester of pregnancy and 3 months postpartum. Dietary intake was estimated in 2 ways: by using a commercial software program and by using a database created for APrON. Only 27% of women during pregnancy and 25% at 3 months postpartum met the current European Union (EU) consensus recommendation for DHA. Seafood, fish, and seaweed products contributed 79% of overall n-3 long-chain polyunsaturated fatty acid intake from foods, with the majority from salmon. The estimated intake of DHA and EPA was similar between databases, but the estimated DPA intake was 20%-30% higher using the comprehensive database built for this study. Women who took a supplement containing DHA were 10.6 and 11.1 times more likely to meet the current EU consensus recommendation during pregnancy (95% confidence interval (CI): 6.952-16.07; P<0.001) and postpartum (95% CI: 6.803-18.14; P<0.001), respectively. Our results suggest that the majority of women in the cohort were not meeting the EU recommendation for DHA during pregnancy and lactation, but taking a supplement significantly improved the likelihood that they would meet recommendations.

  12. Improved pressure contour analysis for estimating cardiac stroke volume using pulse wave velocity measurement.

    PubMed

    Kamoi, Shun; Pretty, Christopher; Balmer, Joel; Davidson, Shaun; Pironet, Antoine; Desaive, Thomas; Shaw, Geoffrey M; Chase, J Geoffrey

    2017-04-24

    Pressure contour analysis is commonly used to estimate cardiac performance for patients suffering from cardiovascular dysfunction in the intensive care unit. However, the existing techniques for continuous estimation of stroke volume (SV) from pressure measurement can be unreliable during hemodynamic instability, which is inevitable for patients requiring significant treatment. For this reason, pressure contour methods must be improved to capture changes in vascular properties and thus provide accurate conversion from pressure to flow. This paper presents a novel pressure contour method utilizing pulse wave velocity (PWV) measurement to capture vascular properties. A three-element Windkessel model combined with the reservoir-wave concept is used to decompose the pressure contour into components related to storage and flow. The model parameters are identified beat-to-beat from the water-hammer equation using measured PWV, the wave component of the pressure, and an estimate of subject-specific aortic dimension. SV is then calculated by converting pressure to flow using the identified model parameters. The accuracy of this novel method is investigated using data from porcine experiments (N = 4 Pietrain pigs, 20-24.5 kg), where hemodynamic properties were significantly altered using dobutamine, fluid administration, and mechanical ventilation. In the experiment, left ventricular volume was measured using an admittance catheter, and aortic pressure waveforms were measured at two locations, the aortic arch and abdominal aorta. Bland-Altman analysis comparing gold-standard SV measured by the admittance catheter and estimated SV from the novel method showed average limits of agreement of ±26% across significant hemodynamic alterations. This result shows the method is capable of estimating clinically acceptable absolute SV values according to Critchley and Critchley. The novel pressure contour method presented can accurately estimate and track SV even when hemodynamic properties are significantly altered. Integrating PWV measurements into pressure contour analysis improves identification of beat-to-beat changes in Windkessel model parameters and thus provides an accurate estimate of blood flow from the measured pressure contour. The method has great potential for overcoming weaknesses associated with current pressure contour methods for estimating SV.
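
    The core conversion rests on the water-hammer relation ΔP = ρ·c·ΔU: once the wave (excess) component of pressure is isolated, flow velocity follows from the measured PWV, and stroke volume is the integral of flow over the beat. A hedged sketch with synthetic inputs and an assumed aortic area:

      import numpy as np

      rho = 1060.0                 # blood density, kg/m^3
      c = 5.0                      # measured pulse wave velocity, m/s
      A = 4.0e-4                   # assumed aortic cross-sectional area, m^2

      fs = 500.0                   # sampling rate, Hz
      t = np.arange(0, 0.8, 1/fs)  # one beat
      # Synthetic wave-pressure component (Pa): systolic ejection bump
      p_wave = 4000.0 * np.sin(np.pi * t / 0.3) * (t < 0.3)

      u = p_wave / (rho * c)       # water-hammer: flow velocity from wave pressure
      q = u * A                    # volumetric flow, m^3/s
      sv_ml = np.trapz(q, t) * 1e6 # stroke volume in mL
      print(f"SV ~ {sv_ml:.1f} mL")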

  13. Histogram equalization with Bayesian estimation for noise robust speech recognition.

    PubMed

    Suh, Youngjoo; Kim, Hoirin

    2018-02-01

    The histogram equalization approach is an efficient feature normalization technique for noise-robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs the Bayesian estimation method in the test cumulative distribution function estimation. It was reported in a previous study conducted on the Aurora-4 task that the proposed approach provided substantial performance gains in speech recognition systems based on the acoustic modeling of the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with the deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
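
    Baseline histogram equalization of features amounts to quantile mapping through the reference (training) cumulative distribution function; the paper's Bayesian CDF estimation is not reproduced in this minimal sketch:

      import numpy as np

      def histogram_equalize(test_feat, ref_feat):
          """Map test features so their empirical CDF matches the reference:
          x -> F_ref^{-1}(F_test(x)), via ranks and interpolation."""
          test_feat = np.asarray(test_feat, dtype=float)
          ranks = np.argsort(np.argsort(test_feat))
          cdf = (ranks + 0.5) / len(test_feat)            # empirical test CDF
          ref_sorted = np.sort(ref_feat)
          q = (np.arange(len(ref_sorted)) + 0.5) / len(ref_sorted)
          return np.interp(cdf, q, ref_sorted)            # inverse reference CDF

      clean = np.random.default_rng(0).normal(0, 1, 1000)   # training statistics
      noisy = 0.5 * clean[:200] + 2.0                       # shifted/scaled test data
      print(histogram_equalize(noisy, clean)[:5])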

  14. GPS Estimates of Integrated Precipitable Water Aid Weather Forecasters

    NASA Technical Reports Server (NTRS)

    Moore, Angelyn W.; Gutman, Seth I.; Holub, Kirk; Bock, Yehuda; Danielson, David; Laber, Jayme; Small, Ivory

    2013-01-01

    Global Positioning System (GPS) meteorology provides enhanced-density, low-latency (30-min resolution), integrated precipitable water (IPW) estimates to NOAA NWS (National Oceanic and Atmospheric Administration National Weather Service) Weather Forecast Offices (WFOs) to provide improved model and satellite data verification capability and more accurate forecasts of extreme weather such as flooding. An early activity of this project was to increase the number of stations contributing to the NOAA Earth System Research Laboratory (ESRL) GPS meteorology observing network in Southern California by about 27 stations. Following this, the Los Angeles/Oxnard and San Diego WFOs began using the enhanced GPS-based IPW measurements provided by ESRL in the 2012 and 2013 monsoon seasons. Forecasters found GPS IPW to be an effective tool in evaluating model performance, and in monitoring monsoon development between weather model runs for improved flood forecasting. GPS stations are multi-purpose, and routine processing for position solutions also yields estimates of tropospheric zenith delays, which can be converted into mm-accuracy PWV (precipitable water vapor) using in situ pressure and temperature measurements, the basis for GPS meteorology. NOAA ESRL has implemented this concept with a nationwide distribution of more than 300 "GPSMet" stations providing IPW estimates at sub-hourly resolution currently used in operational weather models in the U.S.
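
    The delay-to-water-vapor conversion underlying GPS meteorology follows the standard Bevis-style formulation PWV = Π·ZWD, with Π a weak function of the weighted mean temperature Tm; the constants below are commonly cited values and should be checked against the operational processing:

      def zwd_to_pwv(zwd_mm, tm_kelvin):
          """Convert zenith wet delay (mm) to precipitable water vapor (mm)
          via PWV = PI * ZWD, using Bevis-style refractivity constants."""
          rho_w = 1000.0      # water density, kg/m^3
          rv = 461.5          # gas constant for water vapor, J/(kg K)
          k2p = 22.1          # K/hPa
          k3 = 3.739e5        # K^2/hPa
          denom = rho_w * rv * ((k3 / tm_kelvin + k2p) / 100.0)  # K/hPa -> K/Pa
          pi = 1.0e6 / denom
          return pi * zwd_mm

      print(zwd_to_pwv(zwd_mm=150.0, tm_kelvin=275.0))   # ~23 mm of PWV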

  15. Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation

    PubMed Central

    Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain. This is in contrast to most current models in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimal Kullback–Leibler (KL) divergence criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253

  16. Inertial sensor-based smoother for gait analysis.

    PubMed

    Suh, Young Soo

    2014-12-17

    An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimation than filter-based algorithms by using all of the sensor data instead of only the data available up to the current time. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain an initial foot motion estimate. In the second part, the error in the initial estimate is compensated using a smoother, where the problem is formulated as a quadratic optimization problem. An efficient solution of the quadratic optimization problem is given by exploiting its sparse structure. Through experiments, it is shown that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is moving off the floor: the z-axis position error squared sum (total time: 3.47 s) when the foot is in the air is 0.0807 m² (Kalman filter) and 0.0020 m² (the proposed smoother).

  17. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    PubMed

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of the two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of the measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.

  18. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm

    PubMed Central

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-01-01

    In order to reduce the computational complexity and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of the two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of the measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions. PMID:28925979
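
    The geometric intuition of the accelerometer correction step can be illustrated in a few lines: rotate the attitude estimate so the predicted gravity direction moves a fraction of the way toward the measured specific force. This is a simplified single-step sketch, not the paper's full TGIC/Kalman filter:

      import numpy as np

      def quat_mul(q, r):
          w1, x1, y1, z1 = q; w2, x2, y2, z2 = r
          return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                           w1*x2 + x1*w2 + y1*z2 - z1*y2,
                           w1*y2 - x1*z2 + y1*w2 + z1*x2,
                           w1*z2 + x1*y2 - y1*x2 + z1*w2])

      def gravity_correction(q, accel, gain=0.1):
          """One geometric correction step: rotate the attitude so the
          predicted gravity moves a fraction 'gain' toward the measured
          accelerometer direction."""
          a = accel / np.linalg.norm(accel)
          w, x, y, z = q
          # predicted gravity in the body frame: R(q)^T [0, 0, 1]
          g_pred = np.array([2*(x*z - w*y), 2*(y*z + w*x), w*w - x*x - y*y + z*z])
          axis = np.cross(a, g_pred)          # body-frame rotation axis
          s = np.linalg.norm(axis)
          if s < 1e-12:
              return q
          angle = gain * np.arcsin(np.clip(s, -1.0, 1.0))
          dq = np.concatenate([[np.cos(angle/2)], np.sin(angle/2) * axis / s])
          q_new = quat_mul(q, dq)
          return q_new / np.linalg.norm(q_new)

      q0 = np.array([1.0, 0.0, 0.0, 0.0])     # identity attitude
      print(gravity_correction(q0, np.array([0.5, 0.0, 9.7])))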

  19. Preliminary estimates of the economic implications of addiction in the United Arab Emirates.

    PubMed

    Doran, C M

    2017-01-23

    This study aimed to provide preliminary estimates of the economic implications of addiction in the United Arab Emirates (UAE). Local and international data sources were used to derive estimates of substance-related healthcare costs, lost productivity and criminal behaviour. From an estimated population of 8.26 million: ~1.47 million used tobacco (20.5% of adults); 380,085 used cannabis (> 5%); 14,077 used alcohol in a harmful manner (0.2%); and 1,408 used opiates (0.02%). The cost of addiction was estimated at US$ 5.47 billion in 2012, equivalent to 1.4% of gross domestic product. Productivity costs were the largest contributor at US$ 4.79 billion (88%) followed by criminal behaviour at US$ 0.65 billion (12%). There were no data to estimate the costs of treating tobacco-related diseases, community education and prevention efforts, or social disharmony. Current data collection efforts are limited in their capacity to fully inform an appropriate response to addiction in the UAE. Resources are required to improve indicators of drug use, monitor harm and evaluate treatment.

  20. Estimating reliable paediatric reference intervals in clinical chemistry and haematology.

    PubMed

    Ridefelt, Peter; Hellberg, Dan; Aldrimer, Mattias; Gustafsson, Jan

    2014-01-01

    Very few high-quality studies on paediatric reference intervals for general clinical chemistry and haematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The present review summarises current reference interval studies for common clinical chemistry and haematology analyses. ©2013 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  1. Improving the Parametric Method of Cost Estimating Relationships of Naval Ships

    DTIC Science & Technology

    2014-06-01

    tool since the total cost of the ship is broken down into smaller parts as defined by the WBS. The Navy currently uses the Expanded Ship Work Breakdown... Includes boilers, reactors, turbines, gears, shafting, propellers, steam piping, lube oil piping, and radiation; 300 Electric Plant: includes ship... spaces, ladders, storerooms, laundry, and workshops; 700 Armament: includes guns, missile launchers, ammunition handling and stowage, torpedo tubes, depth

  2. Benefits and costs of substance abuse treatment programs for state prison inmates: results from a lifetime simulation model.

    PubMed

    Zarkin, Gary A; Cowell, Alexander J; Hicks, Katherine A; Mills, Michael J; Belenko, Steven; Dunlap, Laura J; Houser, Kimberly A; Keyes, Vince

    2012-06-01

    Reflecting drug use patterns and criminal justice policies throughout the 1990s and 2000s, prisons hold a disproportionate number of society's drug abusers. Approximately 50% of state prisoners meet the criteria for a diagnosis of drug abuse or dependence, but only 10% receive medically based drug treatment. Because of the link between substance abuse and crime, treating substance abusing and dependent state prisoners while incarcerated has the potential to yield substantial economic benefits. In this paper, we simulate the lifetime costs and benefits of improving prison-based substance abuse treatment and post-release aftercare for a cohort of state prisoners. Our model captures the dynamics of substance abuse as a chronic disease; estimates the benefits of substance abuse treatment over individuals' lifetimes; and tracks the costs of crime and criminal justice costs related to policing, adjudication, and incarceration. We estimate net societal benefits and cost savings to the criminal justice system of the current treatment system and five policy scenarios. We find that four of the five policy scenarios provide positive net societal benefits and cost savings to the criminal justice system relative to the current treatment system. Our study demonstrates the societal gains to improving the drug treatment system for state prisoners. Copyright © 2011 John Wiley & Sons, Ltd.

  3. BENEFITS AND COSTS OF SUBSTANCE ABUSE TREATMENT PROGRAMS FOR STATE PRISON INMATES: RESULTS FROM A LIFETIME SIMULATION MODEL

    PubMed Central

    ZARKIN, GARY A.; COWELL, ALEXANDER J.; HICKS, KATHERINE A.; MILLS, MICHAEL J.; BELENKO, STEVEN; DUNLAP, LAURA J.; HOUSER, KIMBERLY A.; KEYES, VINCE

    2011-01-01

    SUMMARY Reflecting drug use patterns and criminal justice policies throughout the 1990s and 2000s, prisons hold a disproportionate number of society’s drug abusers. Approximately 50% of state prisoners meet the criteria for a diagnosis of drug abuse or dependence, but only 10% receive medically based drug treatment. Because of the link between substance abuse and crime, treating substance abusing and dependent state prisoners while incarcerated has the potential to yield substantial economic benefits. In this paper, we simulate the lifetime costs and benefits of improving prison-based substance abuse treatment and post-release aftercare for a cohort of state prisoners. Our model captures the dynamics of substance abuse as a chronic disease; estimates the benefits of substance abuse treatment over individuals’ lifetimes; and tracks the costs of crime and criminal justice costs related to policing, adjudication, and incarceration. We estimate net societal benefits and cost savings to the criminal justice system of the current treatment system and five policy scenarios. We find that four of the five policy scenarios provide positive net societal benefits and cost savings to the criminal justice system relative to the current treatment system. Our study demonstrates the societal gains to improving the drug treatment system for state prisoners. PMID:21506193

  4. Coupling of hydrogeological models with hydrogeophysical data to characterize seawater intrusion and shallow geothermal systems

    NASA Astrophysics Data System (ADS)

    Beaujean, J.; Kemna, A.; Engesgaard, P. K.; Hermans, T.; Vandenbohede, A.; Nguyen, F.

    2013-12-01

    While coastal aquifers stressed by climate change and excessive groundwater withdrawals require efficient characterization of seawater intrusion (SWI) dynamics, production of geothermal energy is increasingly being used to mitigate global warming. To study these issues, we need both robust measuring technologies and reliable predictions based on numerical models. SWI models are currently calibrated using borehole observations. Similarly, geothermal models depend mainly on the temperature field at a few locations. Electrical resistivity tomography (ERT) can be used to improve these models given its high sensitivity to TDS and temperature and its relatively high lateral resolution. Inherent geophysical limitations, such as resolution loss, can affect the overall quality of the ERT images and also prevent the correct recovery of the desired hydrochemical property. We present uncoupled and coupled hydrogeophysical inversions to calibrate SWI and thermohydrogeologic models using ERT. In the SWI models, we demonstrate with two synthetic benchmarks (homogeneous and heterogeneous coastal aquifers) the ability of cumulative sensitivity-filtered ERT images using surface-only data to recover the hydraulic conductivity. Filtering of ERT-derived data at depth, where resolution is poorer, together with model errors, makes the dispersivity more difficult to estimate. In the coupled approach, we showed that parameter estimation is significantly improved because regularization bias is replaced by forward modeling only. Our efforts are currently focused on applying the uncoupled/coupled approaches to a real-life case study using field data from the site of Almeria, SE Spain. In the thermohydrogeologic models, the most sensitive hydrologic parameters responsible for heat transport are estimated from surface ERT-derived temperatures and ERT resistance data. A real-life geothermal experiment conducted on the Campus De Sterre of Ghent University, Belgium, and a synthetic case are tested; both consist of thermal injection and storage of water in a shallow sandy aquifer. A physically based constraint, which accounts for the conductivity difference between the formation water and the injected tap water and builds on a hydrogeological model first calibrated on temperatures, is necessary to improve the parameter estimation. Results suggest that time-lapse ERT data may be limited but useful information for estimating groundwater flow and transport parameters for both the convection and conduction phases.

  5. Estimate of potential benefit for Europe of fitting Autonomous Emergency Braking (AEB) systems for pedestrian protection to passenger cars.

    PubMed

    Edwards, Mervyn; Nathanson, Andrew; Wisch, Marcus

    2014-01-01

    The objective of the current study was to estimate the benefit for Europe of fitting precrash braking systems to cars that detect pedestrians and autonomously brake the car to prevent or lower the speed of the impact with the pedestrian. The analysis was divided into 2 main parts: (1) Develop and apply methodology to estimate benefit for Great Britain and Germany; (2) scale Great Britain and German results to give an indicative estimate for Europe (EU27). The calculation methodology developed to estimate the benefit was based on 2 main steps: 1. Calculate the change in the impact speed distribution curve for pedestrian casualties hit by the fronts of cars assuming pedestrian autonomous emergency braking (AEB) system fitment. 2. From this, calculate the change in the number of fatally, seriously, and slightly injured casualties by using the relationship between risk of injury and the casualty impact speed distribution to sum the resulting risks for each individual casualty. The methodology was applied to Great Britain and German data for 3 types of pedestrian AEB systems representative of (1) currently available systems; (2) future systems with improved performance, which are expected to be available in the next 2-3 years; and (3) reference limit system, which has the best performance currently thought to be technically feasible. Nominal benefits estimated for Great Britain ranged from £119 million to £385 million annually and for Germany from €63 million to €216 million annually depending on the type of AEB system assumed fitted. Sensitivity calculations showed that the benefit estimated could vary from about half to twice the nominal estimate, depending on factors such as whether or not the system would function at night and the road friction assumed. Based on scaling of estimates made for Great Britain and Germany, the nominal benefit of implementing pedestrian AEB systems on all cars in Europe was estimated to range from about €1 billion per year for current generation AEB systems to about €3.5 billion for a reference limit system (i.e., best performance thought technically feasible at present). Dividing these values by the number of new passenger cars registered in Europe per year gives an indication that the cost of a system per car should be less than ∼€80 to ∼€280 for it to be cost effective. The potential benefit of fitting AEB systems to cars in Europe for pedestrian protection has been estimated and the results interpreted to indicate the upper limit of cost for a system to allow it to be cost effective.
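
    The two-step benefit logic can be made concrete: shift each casualty's impact speed down by the AEB speed reduction, re-evaluate an injury-risk curve, and monetize the summed change in risk. The risk curve, speed cut, and monetary value below are placeholders, not the study's fitted inputs:

      import math

      def fatality_risk(speed_kmh, c=(-6.9, 0.09)):
          """Illustrative logistic pedestrian fatality-risk curve (assumed)."""
          return 1.0 / (1.0 + math.exp(-(c[0] + c[1] * speed_kmh)))

      def annual_benefit(impact_speeds_kmh, speed_cut_kmh, value_per_fatality):
          saved = 0.0
          for v in impact_speeds_kmh:
              v_new = max(0.0, v - speed_cut_kmh)   # AEB lowers the impact speed
              saved += fatality_risk(v) - fatality_risk(v_new)
          return saved * value_per_fatality

      speeds = [25, 35, 45, 55, 65]                 # sample casualty impact speeds
      print(annual_benefit(speeds, speed_cut_kmh=10, value_per_fatality=1.8e6))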

  6. Using environmental tracers and transient hydraulic heads to estimate groundwater recharge and conductivity

    NASA Astrophysics Data System (ADS)

    Erdal, Daniel; Cirpka, Olaf A.

    2017-04-01

    Regional groundwater flow strongly depends on groundwater recharge and hydraulic conductivity. While conductivity is a spatially variable field, recharge can vary in both space and time. Neither of the two fields can be reliably observed on larger scales, and their estimation from other sparse data sets is an open topic. Further, common hydraulic-head observations may not suffice to constrain both fields simultaneously. In the current work we use the Ensemble Kalman filter to estimate spatially variable conductivity, spatiotemporally variable recharge and porosity for a synthetic phreatic aquifer. We use transient hydraulic-head data and one spatially distributed set of environmental tracer observations to constrain the estimation. As environmental tracers generally reside for a long time in an aquifer, they require long simulation times and carry a long memory that makes them highly unsuitable for use in a sequential framework. Therefore, in this work we use the environmental tracer information to precondition the initial ensemble of recharge and conductivities before starting the sequential filter. Thereby, we aim at improving the performance of the sequential filter by limiting the range of the recharge to values similar to the long-term annual recharge means and by creating an initial ensemble of conductivities that shows similar patterns and values to the true field. The sequential filter is then used to further improve the parameters and to estimate the short-term temporal behavior as well as the temporally evolving head field needed for short-term predictions within the aquifer. For a virtual reality covering a subsection of the river Neckar it is shown that the use of environmental tracers can improve the performance of the filter. Results using the EnKF with and without this preconditioned initial ensemble are evaluated and discussed.

  7. Review of Quantitative Ultrasound: Envelope Statistics and Backscatter Coefficient Imaging and Contributions to Diagnostic Ultrasound.

    PubMed

    Oelze, Michael L; Mamou, Jonathan

    2016-02-01

    Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation, and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years, QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient (BSC), estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter (ESD) and the effective acoustic concentration (EAC) of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and preclinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy.

  8. Review of quantitative ultrasound: envelope statistics and backscatter coefficient imaging and contributions to diagnostic ultrasound

    PubMed Central

    Oelze, Michael L.; Mamou, Jonathan

    2017-01-01

    Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging techniques can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient, estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter and the effective acoustic concentration of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and pre-clinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy. PMID:26761606

  9. Improving best-phase image quality in cardiac CT by motion correction with MAM optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohkohl, Christopher; Bruder, Herbert; Stierstorfer, Karl

    2013-03-15

    Purpose: Research in image reconstruction for cardiac CT aims at using motion correction algorithms to improve the image quality of the coronary arteries. The key to those algorithms is motion estimation, which is currently based on 3-D/3-D registration to align the structures of interest in images acquired in multiple heart phases. The need for an extended scan data range covering several heart phases is critical in terms of radiation dose to the patient and limits the clinical potential of the method. Furthermore, literature reports only slight quality improvements of the motion corrected images when compared to the most quiet phase (best-phase) that was actually used for motion estimation. In this paper a motion estimation algorithm is proposed which does not require an extended scan range but works with a short scan data interval, and which markedly improves the best-phase image quality. Methods: Motion estimation is based on the definition of motion artifact metrics (MAM) to quantify motion artifacts in a 3-D reconstructed image volume. The authors use two different MAMs, entropy and positivity. By adjusting the motion field parameters, the MAM of the resulting motion-compensated reconstruction is optimized using a gradient descent procedure. In this way motion artifacts are minimized. For a fast and practical implementation, only analytical methods are used for motion estimation and compensation. Both the MAM-optimization and a 3-D/3-D registration-based motion estimation algorithm were investigated by means of a computer-simulated vessel with a cardiac motion profile. Image quality was evaluated using normalized cross-correlation (NCC) with the ground truth template and root-mean-square deviation (RMSD). Four coronary CT angiography patient cases were reconstructed to evaluate the clinical performance of the proposed method. Results: For the MAM-approach, the best-phase image quality could be improved for all investigated heart phases, with a maximum improvement of the NCC value by 100% and of the RMSD value by 81%. The corresponding maximum improvements for the registration-based approach were 20% and 40%. In phases with very rapid motion the registration-based algorithm obtained better image quality, while the image quality of the MAM algorithm was superior in phases with less motion. The image quality improvement of the MAM optimization was visually confirmed for the different clinical cases. Conclusions: The proposed method allows a software-based best-phase image quality improvement in coronary CT angiography. A short scan data interval at the target heart phase is sufficient; no additional scan data in other cardiac phases are required. The algorithm is therefore directly applicable to any standard cardiac CT acquisition protocol.
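
    The MAM idea in miniature: score a reconstructed volume by the entropy of its gray-value histogram and minimize over the motion parameters. Here the "reconstruction" is a one-parameter placeholder, so only the metric and the optimization loop are meaningful:

      import numpy as np
      from scipy.optimize import minimize_scalar

      def entropy_mam(volume, bins=64):
          """Motion artifact metric: entropy of the image histogram
          (blur and streaks spread the histogram, raising entropy)."""
          hist, _ = np.histogram(volume, bins=bins, density=True)
          p = hist[hist > 0]
          p = p / p.sum()
          return -np.sum(p * np.log(p))

      def reconstruct(motion_param):
          """Placeholder: a sharp disk degraded in proportion to the
          mismatch between the trial and the true motion parameter."""
          x = np.linspace(-1, 1, 128)
          img = (np.add.outer(x**2, x**2) < 0.25).astype(float)
          sigma = abs(motion_param - 0.7)              # true motion at 0.7
          noise = np.random.default_rng(0).normal(0, 0.01, img.shape)
          return img + sigma * noise + sigma * np.roll(img, int(20 * sigma), 0)

      res = minimize_scalar(lambda m: entropy_mam(reconstruct(m)),
                            bounds=(0.0, 1.5), method="bounded")
      print("estimated motion parameter:", round(res.x, 2))   # ~0.7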

  10. Development of Real Time Implementation of 5/5 Rule based Fuzzy Logic Controller Shunt Active Power Filter for Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Puhan, Pratap Sekhar; Ray, Pravat Kumar; Panda, Gayadhar

    2016-12-01

    This paper presents the effectiveness of 5/5 fuzzy rule implementation in a Fuzzy Logic Controller, in conjunction with an indirect control technique, to enhance power quality in a single-phase system. An indirect current controller in conjunction with the Fuzzy Logic Controller is applied to the proposed shunt active power filter to estimate the peak reference current and capacitor voltage. Current-controller-based pulse width modulation (CCPWM) is used to generate the switching signals of the voltage source inverter. Various simulation results are presented to verify the good behaviour of the shunt active power filter (SAPF) with the proposed two-level hysteresis current controller (HCC). For verification of the shunt active power filter in real time, the proposed control algorithm has been implemented in a laboratory-developed setup on the dSPACE platform.

  11. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary of findings tables with a new format.

    PubMed

    Carrasco-Labra, Alonso; Brignardello-Petersen, Romina; Santesso, Nancy; Neumann, Ignacio; Mustafa, Reem A; Mbuagbaw, Lawrence; Etxeandia Ikobaltzeta, Itziar; De Stio, Catherine; McCullagh, Lauren J; Alonso-Coello, Pablo; Meerpohl, Joerg J; Vandvik, Per Olav; Brozek, Jan L; Akl, Elie A; Bossuyt, Patrick; Churchill, Rachel; Glenton, Claire; Rosenbaum, Sarah; Tugwell, Peter; Welch, Vivian; Garner, Paul; Guyatt, Gordon; Schünemann, Holger J

    2016-06-01

    The current format of summary of findings (SoF) tables for presenting effect estimates and associated quality of evidence improves understanding and assists users in finding key information in systematic reviews. Users of SoF tables have demanded alternative formats to express findings from systematic reviews. We conducted a randomized controlled trial among systematic review users to compare the relative merits of a new format with the current format of SoF tables regarding understanding, accessibility of information, satisfaction, and preference. Our primary goal was to show that the new format is not inferior to the current format. Of 390 potentially eligible subjects, 290 were randomized. Of seven items testing understanding, three showed similar results, two showed small differences favoring the new format, and two (understanding risk difference and quality of the evidence associated with a treatment effect) showed large differences favoring the new format [63% (95% confidence interval {CI}: 55, 71) and 62% (95% CI: 52, 71) more correct answers, respectively]. Respondents rated information in the alternative format as more accessible overall and preferred the new format over the current format. While providing at least similar levels of understanding for some items and increased understanding for others, users prefer the new format of SoF tables. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.

    2015-08-01

    This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. (Bulletin of the Seismological Society of America, 2013), derived using events with moment magnitude (M) ≥ 5.0 from 1998 to 2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for average shear-wave velocity in the uppermost 30 m (VS30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates based on PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using borehole recordings had the smallest standard deviation among the estimated magnitudes and produced more stable and robust magnitude estimates. This suggests that incorporating borehole strong ground-motion records immediately available after the occurrence of large earthquakes can provide robust and accurate magnitude estimation.
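
    The inversion behind such network magnitude estimates reduces to solving the GMPE for M at each station and averaging. The sketch below assumes a generic GMPE of the form log10(PGV) = a + b·M − c·log10(R + R0); the coefficients and observations are placeholders, not the values from Eshaghi et al.

    ```python
    import numpy as np

    # Assumed generic GMPE: log10(PGV) = A + B*M - C*log10(R + R0).
    # Illustrative placeholder coefficients (not the published ones).
    A, B, C, R0 = -4.0, 1.0, 1.5, 10.0

    def magnitude_from_pgv(pgv_cm_s, dist_km):
        """Invert the GMPE per station, then pool into a network estimate."""
        m = (np.log10(pgv_cm_s) - A + C * np.log10(dist_km + R0)) / B
        return m.mean(), m.std(ddof=1)

    pgv = np.array([12.0, 8.5, 20.1])   # observed PGV (cm/s), made up
    r = np.array([120.0, 150.0, 90.0])  # source distances (km), made up
    print(magnitude_from_pgv(pgv, r))   # mean magnitude and station scatter
    ```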

  13. An appraisal of Indonesia's immense peat carbon stock using national peatland maps: uncertainties and potential losses from conversion.

    PubMed

    Warren, Matthew; Hergoualc'h, Kristell; Kauffman, J Boone; Murdiyarso, Daniel; Kolka, Randall

    2017-12-01

    A large proportion of the world's tropical peatlands occur in Indonesia, where rapid conversion and associated losses of carbon, biodiversity and ecosystem services have brought peatland management to the forefront of Indonesia's climate mitigation efforts. We evaluated peat volume from two commonly referenced maps of peat distribution and depth published by Wetlands International (WI) and the Indonesian Ministry of Agriculture (MoA), and used regionally specific values of carbon density to calculate carbon stocks. Peatland extent and volume published in the MoA maps are lower than those in the WI maps, resulting in lower estimates of carbon storage. We estimate Indonesia's total peat carbon store to be between 13.6 GtC (the low MoA map estimate) and 40.5 GtC (the high WI map estimate), with a best estimate of 28.1 GtC: the midpoint of the medium carbon stock estimates derived from the WI (30.8 GtC) and MoA (25.3 GtC) maps. This estimate is about half of previous assessments, which used an assumed average value of peat thickness for all Indonesian peatlands, and revises the current global tropical peat carbon pool to 75 GtC. Yet these results do not diminish the significance of Indonesia's peatlands, which store an estimated 30% more carbon than the biomass of all Indonesian forests. The largest discrepancy between maps is for the Papua province, which accounts for 62-71% of the overall differences in peat area, volume and carbon storage. According to the MoA map, 80% of Indonesian peatlands are <300 cm thick and thus vulnerable to conversion outside of protected areas according to environmental regulations. The carbon contained in these shallower peatlands is conservatively estimated to be 10.6 GtC, equivalent to 42% of Indonesia's total peat carbon and about 12 years of global emissions from land use change at current rates. Considering the high uncertainties in peatland extent, volume and carbon storage revealed in this assessment of current maps, a systematic revision of Indonesia's peat maps to produce a single geospatial reference that is universally accepted would improve national peat carbon storage estimates and greatly benefit carbon cycle research, land use management and spatial planning.
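
    The headline numbers combine by simple bookkeeping, as the short sketch below shows (values taken from the abstract; the comments note an assumption about which total the 42% figure is referenced to).

    ```python
    # Best estimate = midpoint of the two medium map-based stocks (GtC).
    wi_medium, moa_medium = 30.8, 25.3
    best_estimate = 0.5 * (wi_medium + moa_medium)
    print(round(best_estimate, 1))            # 28.1 GtC

    # Shallow (<300 cm), conversion-vulnerable peat relative to each total:
    print(round(10.6 / moa_medium * 100))     # ~42%, matching the text (MoA total)
    print(round(10.6 / best_estimate * 100))  # ~38% against the midpoint estimate
    ```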

  14. How Much Will It Cost To Monitor Microbial Drinking Water Quality in Sub-Saharan Africa?

    PubMed Central

    2017-01-01

    Microbial water quality monitoring is crucial for managing water resources and protecting public health. However, institutional testing activities in sub-Saharan Africa are currently limited. Because the economics of water quality testing are poorly understood, the extent to which cost may be a barrier to monitoring in different settings is unclear. This study used cost data from 18 African monitoring institutions (piped water suppliers and health surveillance agencies in six countries) and estimates of water supply type coverage from 15 countries to assess the annual financial requirements for microbial water testing at both national and regional levels, using World Health Organization recommendations for sampling frequency. We found that a microbial water quality test costs 21.0 ± 11.3 USD, on average, including consumables, equipment, labor, and logistics, which is higher than previously calculated. Our annual cost estimates for microbial monitoring of piped supplies and improved point sources ranged between 8 000 USD for Equatorial Guinea and 1.9 million USD for Ethiopia, depending primarily on the population served but also on the distribution of piped water system sizes. A comparison with current national water and sanitation budgets showed that the cost of implementing prescribed testing levels represents a relatively modest proportion of existing budgets (<2%). At the regional level, we estimated that monitoring the microbial quality of all improved water sources in sub-Saharan Africa would cost 16.0 million USD per year, which is minimal in comparison to the projected annual capital costs of achieving Sustainable Development Goal 6.1 of safe water for all (14.8 billion USD). PMID:28459563

  15. How Much Will It Cost To Monitor Microbial Drinking Water Quality in Sub-Saharan Africa?

    PubMed

    Delaire, Caroline; Peletz, Rachel; Kumpel, Emily; Kisiangani, Joyce; Bain, Robert; Khush, Ranjiv

    2017-06-06

    Microbial water quality monitoring is crucial for managing water resources and protecting public health. However, institutional testing activities in sub-Saharan Africa are currently limited. Because the economics of water quality testing are poorly understood, the extent to which cost may be a barrier to monitoring in different settings is unclear. This study used cost data from 18 African monitoring institutions (piped water suppliers and health surveillance agencies in six countries) and estimates of water supply type coverage from 15 countries to assess the annual financial requirements for microbial water testing at both national and regional levels, using World Health Organization recommendations for sampling frequency. We found that a microbial water quality test costs 21.0 ± 11.3 USD, on average, including consumables, equipment, labor, and logistics, which is higher than previously calculated. Our annual cost estimates for microbial monitoring of piped supplies and improved point sources ranged between 8 000 USD for Equatorial Guinea and 1.9 million USD for Ethiopia, depending primarily on the population served but also on the distribution of piped water system sizes. A comparison with current national water and sanitation budgets showed that the cost of implementing prescribed testing levels represents a relatively modest proportion of existing budgets (<2%). At the regional level, we estimated that monitoring the microbial quality of all improved water sources in sub-Saharan Africa would cost 16.0 million USD per year, which is minimal in comparison to the projected annual capital costs of achieving Sustainable Development Goal 6.1 of safe water for all (14.8 billion USD).
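
    The study's budget arithmetic is essentially tests-per-year times unit cost. The sketch below uses the reported 21 USD unit cost but a simplified stand-in for the WHO sampling-frequency rule (one sample per 5,000 people served per month, with a floor of one per month); the real recommendation varies by supply type and size.

    ```python
    def annual_testing_cost(population_served, usd_per_test=21.0,
                            people_per_sample_month=5000):
        """Annual monitoring cost for one piped supply under the assumed rule."""
        tests_per_year = max(12, 12 * population_served // people_per_sample_month)
        return tests_per_year * usd_per_test

    print(annual_testing_cost(250_000))  # a mid-sized system: 600 tests, 12,600 USD
    ```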

  16. Algal cell disruption using microbubbles to localize ultrasonic energy

    PubMed Central

    Krehbiel, Joel D.; Schideman, Lance C.; King, Daniel A.; Freund, Jonathan B.

    2015-01-01

    Microbubbles were added to an algal solution with the goal of improving cell disruption efficiency and the net energy balance for algal biofuel production. Experimental results showed that disruption increases with increasing peak rarefaction ultrasound pressure over the range studied: 1.90 to 3.07 MPa. Additionally, ultrasound cell disruption increased by up to 58% when microbubbles were added, with peak disruption occurring in the range of 10^8 microbubbles/ml. The localization of energy in space and time provided by the bubbles improves efficiency: energy requirements for such a process were estimated to be one-fourth of the available heat of combustion of algal biomass and one-fifth of those of currently used cell disruption methods. This increase in energy efficiency could make microbubble-enhanced ultrasound viable for bioenergy applications, and it is expected to integrate well with current cell harvesting methods based upon dissolved air flotation. PMID:25311188

  17. Position and speed control of brushless DC motors using sensorless techniques and application trends.

    PubMed

    Gamazo-Real, José Carlos; Vázquez-Sánchez, Ernesto; Gómez-Gil, Jaime

    2010-01-01

    This paper provides a technical review of position and speed sensorless methods for controlling Brushless Direct Current (BLDC) motor drives, including background analysis using sensors, limitations and advances. The performance and reliability of BLDC motor drives have improved as conventional control and sensing techniques have been augmented by sensorless technology. In this paper, sensorless advances are reviewed and recent developments in this area are introduced with their inherent advantages and drawbacks, including an analysis of practical implementation issues and applications. The study includes a deep overview of state-of-the-art back-EMF sensing methods, which include Terminal Voltage Sensing, Third Harmonic Voltage Integration, Terminal Current Sensing, Back-EMF Integration and PWM strategies. Also, the most relevant techniques based on estimation and models are briefly analysed, such as the Sliding-mode Observer, Extended Kalman Filter, Model Reference Adaptive System, Adaptive observers (Full-order and Pseudoreduced-order) and Artificial Neural Networks.

  18. Ensemble formulation of surface fluxes and improvement in evapotranspiration and cloud parameterizations in a GCM. [General Circulation Model

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Smith, W. E.

    1984-01-01

    The influence of some modifications to the parameters of the current general circulation model (GCM) is investigated. The aim of the modifications was to eliminate strong occasional bursts of oscillations in planetary boundary layer (PBL) fluxes. Smoothly varying bulk aerodynamic friction and heat transport coefficients were found by ensemble averaging of the PBL fluxes in the current GCM. A comparison was performed of the simulations of the modified model and the unmodified model. The comparison showed that the surface fluxes and cloudiness in the modified model simulations were much more accurate. The planetary albedo in the model was also realistic. Weaknesses persisted in the model's positioning of the Inter-tropical Convergence Zone (ITCZ) and in the temperature estimates for polar regions. A second simulation of the model following reparametrization of the cloud data showed improved results, and these are described in detail.

  19. Coping with unreliable public water supplies: Averting expenditures by households in Kathmandu, Nepal

    NASA Astrophysics Data System (ADS)

    Pattanayak, Subhrendu K.; Yang, Jui-Chen; Whittington, Dale; Bal Kumar, K. C.

    2005-02-01

    This paper investigates two complementary pieces of data on households' demand for improved water services, coping costs and willingness to pay (WTP), from a survey of 1500 randomly sampled households in Kathmandu, Nepal. We evaluate how coping costs and WTP vary across types of water users and income. We find that households in Kathmandu Valley engage in five main types of coping behaviors: collecting, pumping, treating, storing, and purchasing. These activities impose coping costs on an average household of as much as 3 U.S. dollars per month or about 1% of current incomes, representing hidden but real costs of poor infrastructure service. We find that these coping costs are almost twice as much as the current monthly bills paid to the water utility but are significantly lower than estimates of WTP for improved services. We find that coping costs are statistically correlated with WTP and several household characteristics.

  20. Bottom friction optimization for a better barotropic tide modelling

    NASA Astrophysics Data System (ADS)

    Boutet, Martial; Lathuilière, Cyril; Son Hoang, Hong; Baraille, Rémy

    2015-04-01

    At a regional scale, barotropic tides are the dominant source of variability of currents and water heights. A precise representation of these processes is essential because of their great impact on human activities (submersion risks, marine renewable energies, ...). Identified sources of error for tide modelling at a regional scale are the following: bathymetry, boundary forcing and dissipation due to bottom friction. Nevertheless, bathymetric databases are now known with good accuracy, especially over shelves, and global tide model performance is better than ever. The most promising improvement is thus the bottom friction representation. The method used to estimate bottom friction is the simultaneous perturbation stochastic approximation (SPSA), which consists in an approximation of the gradient based on a fixed number of cost function measurements, regardless of the dimension of the vector to be estimated. Each cost function measurement is obtained by randomly perturbing every component of the parameter vector. An important feature of SPSA is its relative ease of implementation. In particular, the method does not require the development of tangent linear and adjoint versions of the circulation model. Experiments are carried out to estimate bottom friction with the HYbrid Coordinate Ocean Model (HYCOM) in barotropic mode (one isopycnal layer). The study area is the Northeastern Atlantic margin, which is characterized by strong currents and intense dissipation. Bottom friction is parameterized with a quadratic term, and the friction coefficient is computed from the water height and the bottom roughness. The latter parameter is the one to be estimated. Assimilated data are the available tide gauge observations. First, the bottom roughness is estimated taking into account bottom sediment types and bathymetric ranges. Then, it is estimated with geographical degrees of freedom. Finally, the impact of estimating a mixed quadratic/linear friction is evaluated.
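
    SPSA itself is compact enough to sketch. The code below is a generic textbook SPSA loop, not the authors' HYCOM setup: the tide-model misfit against tide gauges is replaced by a toy quadratic cost over a three-component "roughness" vector, and the gain constants are arbitrary.

    ```python
    import numpy as np

    def spsa_minimize(cost, theta0, a=0.25, c=0.1, alpha=0.602, gamma=0.101,
                      iters=300, seed=0):
        """SPSA: approximate the gradient from two cost evaluations per iteration
        by perturbing all components at once with a random +/-1 vector."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        for k in range(1, iters + 1):
            ak = a / k**alpha  # decaying step size
            ck = c / k**gamma  # decaying perturbation size
            delta = rng.choice([-1.0, 1.0], size=theta.shape)
            g_hat = (cost(theta + ck * delta) -
                     cost(theta - ck * delta)) / (2.0 * ck * delta)
            theta -= ak * g_hat
        return theta

    # Toy stand-in for the tide-gauge misfit: recover a known roughness vector.
    true_roughness = np.array([0.003, 0.010, 0.001])
    cost = lambda z: float(np.sum((z - true_roughness) ** 2))
    print(spsa_minimize(cost, np.full(3, 0.005)))  # converges near true_roughness
    ```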

  1. Identifying and Assessing Gaps in Subseasonal to Seasonal Prediction Skill using the North American Multi-model Ensemble

    NASA Astrophysics Data System (ADS)

    Pegion, K.; DelSole, T. M.; Becker, E.; Cicerone, T.

    2016-12-01

    Predictability represents the upper limit of prediction skill if we had an infinite member ensemble and a perfect model. It is an intrinsic limit of the climate system associated with the chaotic nature of the atmosphere. Producing a forecast system that can make predictions very near to this limit is the ultimate goal of forecast system development. Estimates of predictability together with calculations of current prediction skill are often used to define the gaps in our prediction capabilities on subseasonal to seasonal timescales and to inform the scientific issues that must be addressed to build the next forecast system. Quantification of the predictability is also important for providing a scientific basis for relaying to stakeholders what kind of climate information can be provided to inform decision-making and what kind of information is not possible given the intrinsic predictability of the climate system. One challenge with predictability estimates is that different prediction systems can give different estimates of the upper limit of skill. How do we know which estimate of predictability is most representative of the true predictability of the climate system? Previous studies have used the spread-error relationship and the autocorrelation to evaluate the fidelity of the signal and noise estimates. Using a multi-model ensemble prediction system, we can quantify whether these metrics accurately indicate an individual model's ability to properly estimate the signal, noise, and predictability. We use this information to identify the best estimates of predictability for 2-meter temperature, precipitation, and sea surface temperature from the North American Multi-model Ensemble and compare with current skill to indicate the regions with potential for improving skill.
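
    A common way to quantify this upper limit from a hindcast ensemble is a signal-to-total variance ratio: the variance of the ensemble mean across start dates estimates the predictable signal, and the average spread about it estimates the noise. The sketch below is that generic decomposition on synthetic data, not the NMME analysis itself.

    ```python
    import numpy as np

    def predictable_fraction(hindcast):
        """hindcast: array shaped (years, members). Returns signal/(signal+noise)."""
        signal = hindcast.mean(axis=1).var(ddof=1)   # variance of ensemble means
        noise = hindcast.var(axis=1, ddof=1).mean()  # mean within-year spread
        return signal / (signal + noise)

    rng = np.random.default_rng(1)
    truth = rng.normal(size=30)                             # 30 "years" of signal
    ens = truth[:, None] + 0.5 * rng.normal(size=(30, 12))  # 12 noisy members
    print(predictable_fraction(ens))  # ~0.8 for this synthetic signal/noise mix
    ```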

  2. Bivariate quadratic method in quantifying the differential capacitance and energy capacity of supercapacitors under high current operation

    NASA Astrophysics Data System (ADS)

    Goh, Chin-Teng; Cruden, Andrew

    2014-11-01

    Capacitance and resistance are the fundamental electrical parameters used to evaluate the electrical characteristics of a supercapacitor, namely the dynamic voltage response, energy capacity, state of charge and health condition. The constant capacitance method of British Standards EN62391 and EN62576 can be improved upon with a differential capacitance that more accurately describes the dynamic voltage response of supercapacitors. This paper presents a novel bivariate quadratic based method to model the dynamic voltage response of supercapacitors under high current charge-discharge cycling, and to enable the derivation of the differential capacitance and energy capacity directly from terminal measurements, i.e. voltage and current, rather than from multiple pulsed-current or excitation signal tests across different bias levels. The estimation results achieved are in close agreement with experimental measurements, within a relative error of 0.2%, at various high current levels (25-200 A), more accurate than the constant capacitance method (4-7%). The archival value of this paper is the introduction of an improved quantification method for the electrical characteristics of supercapacitors, and the disclosure of distinct properties of supercapacitors: the nonlinear capacitance-voltage characteristic, capacitance variation between charging and discharging, and the distribution of energy capacity across the operating voltage window.
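
    The core of such a method can be sketched as an ordinary least-squares fit of a bivariate quadratic voltage surface v(q, i), from which the differential capacitance follows as (dv/dq)^-1. The surface, the synthetic cycling record and all coefficients below are invented for illustration; this is not the paper's exact formulation.

    ```python
    import numpy as np

    def fit_bivariate_quadratic(q, i, v):
        """OLS fit of v = c0 + cq*q + ci*i + cqq*q^2 + cqi*q*i + cii*i^2."""
        X = np.column_stack([np.ones_like(q), q, i, q*q, q*i, i*i])
        coef, *_ = np.linalg.lstsq(X, v, rcond=None)
        return coef

    def differential_capacitance(coef, q, i):
        """C_diff = (dv/dq)^-1 at the operating point (q, i)."""
        dv_dq = coef[1] + 2.0*coef[3]*q + coef[4]*i
        return 1.0 / dv_dq

    # Synthetic terminal record with three current levels (illustrative only).
    q = np.linspace(0.0, 5000.0, 200)                       # charge (C)
    i = np.concatenate([np.full(70, 100.0), np.full(70, -150.0),
                        np.full(60, 50.0)])                 # current (A)
    v = 0.5 + 4e-4*q + 2e-3*i + 1e-8*q*q  # made-up quadratic response (V)
    coef = fit_bivariate_quadratic(q, i, v)
    print(differential_capacitance(coef, q=2500.0, i=100.0))  # ~2200 F
    ```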

  3. Population Size Estimation of Men Who Have Sex with Men in Ho Chi Minh City and Nghe An Using Social App Multiplier Method.

    PubMed

    Safarnejad, Ali; Nga, Nguyen Thien; Son, Vo Hai

    2017-06-01

    This study aims to estimate the number of men who have sex with men (MSM) in Ho Chi Minh City (HCMC) and Nghe An province, Viet Nam, using a novel method of population size estimation, and to assess the feasibility of implementing the method. An innovative approach to population size estimation grounded in the principles of the multiplier method, and using social app technology and internet-based surveys, was undertaken among MSM in two regions of Viet Nam in 2015. Enumeration of active users of popular social apps for MSM in Viet Nam was conducted over 4 weeks. Subsequently, an independent online survey was conducted using respondent-driven sampling. We also conducted interviews with key informants in Nghe An and HCMC on their experience and perceptions of this method and other methods of size estimation. The population of MSM in Nghe An province was estimated to be 1765 [90% CI 1251-3150]. The population of MSM in HCMC was estimated to be 37,238 [90% CI 24,146-81,422]. These estimates correspond to 0.17% of the adult male population in Nghe An province [90% CI 0.12-0.30] and 1.35% of the adult male population in HCMC [90% CI 0.87-2.95]. Our size estimate for HCMC (1.35% [90% CI 0.87-2.95%] of the adult male population) falls within the current standard practice of estimating 1-3% of the adult male population in big cities. Our size estimate for Nghe An province (0.17% [90% CI 0.12-0.30] of the adult male population) is lower than the current standard practice of estimating 0.5-1.5% of the adult male population in rural provinces. These estimates can provide valuable information for sub-national level HIV prevention program planning and evaluation. Furthermore, we believe that our results help to improve application of this population size estimation method in other regions of Viet Nam.
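
    The multiplier logic is one line of arithmetic: if M men were enumerated as active app users and a fraction p of an independent MSM survey reports using the app in the same period, the size estimate is N = M / p. The inputs below are invented to show the scale, not the study's actual counts.

    ```python
    def multiplier_estimate(app_user_count, prop_survey_on_app):
        """Multiplier method: population size = benchmark count / survey proportion."""
        return app_user_count / prop_survey_on_app

    print(round(multiplier_estimate(app_user_count=5200,
                                    prop_survey_on_app=0.14)))  # ~37,000
    ```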

  4. Inefficiencies and high-value improvements in U.S. cervical cancer screening practice: A cost-effectiveness analysis

    PubMed Central

    Kim, Jane J.; Campos, Nicole G.; Sy, Stephen; Burger, Emily A.; Cuzick, Jack; Castle, Philip E.; Hunt, William C.; Waxman, Alan; Wheeler, Cosette M.

    2016-01-01

    Background: Studies suggest that cervical cancer screening practice in the United States is inefficient. The cost and health implications of non-compliance in the screening process compared to recommended guidelines are uncertain. Objective: To estimate the benefits, costs, and cost-effectiveness of current cervical cancer screening practice and assess the value of screening improvements. Design: Model-based cost-effectiveness analysis. Data Sources: New Mexico HPV Pap Registry; medical literature. Target Population: Cohort of women eligible for routine screening. Time Horizon: Lifetime. Perspective: Societal. Interventions: Current cervical cancer screening practice; improved compliance with guidelines-based screening interval, triage testing, diagnostic referrals, and precancer treatment referrals. Outcome Measures: Reductions in lifetime cervical cancer risk, quality-adjusted life-years (QALYs), lifetime costs, incremental cost-effectiveness ratios (ICERs), and incremental net monetary benefits (INMBs). Results of Base-Case Analysis: Current screening practice was associated with lower health benefit and was not cost-effective relative to guidelines-based strategies. Improvements in the screening process were associated with higher QALYs and small changes in costs. Perfect compliance with a 3-yearly screening interval and with colposcopy/biopsy referrals was associated with the highest INMBs ($759 and $741, respectively, at a willingness-to-pay threshold of $100,000 per QALY gained); together, the INMB increased to $1,645. Results of Sensitivity Analysis: Current screening practice was inefficient in 100% of simulations. The rank ordering of screening improvements according to INMBs was stable over a range of screening inputs and willingness-to-pay thresholds. Limitations: The impact of HPV vaccination was not considered. Conclusions: The added health benefit of improving compliance with guidelines, especially the 3-yearly interval for cytology screening and diagnostic follow-up, may justify additional investments in interventions to improve U.S. cervical cancer screening practice. Funding Source: U.S. National Cancer Institute. PMID:26414147
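
    The INMB criterion used to rank the strategies is simple to reproduce: at willingness-to-pay λ, INMB = λ·ΔQALY − Δcost. The deltas below are hypothetical, chosen only to show how a figure on the order of the reported $759 arises.

    ```python
    def inmb(delta_qaly, delta_cost, wtp=100_000):
        """Incremental net monetary benefit at willingness-to-pay wtp per QALY."""
        return wtp * delta_qaly - delta_cost

    # Hypothetical increments for one strategy vs. current practice:
    print(inmb(delta_qaly=0.0085, delta_cost=91.0))  # 759.0 USD per woman
    ```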

  5. Assessing fire emissions from tropical savanna and forests of central Brazil

    NASA Technical Reports Server (NTRS)

    Riggan, Philip J.; Brass, James A.; Lockwood, Robert N.

    1993-01-01

    Wildfires in tropical forest and savanna are a strong source of trace gas and particulate emissions to the atmosphere, but estimates of the continental-scale impacts are limited by large uncertainties in the rates of fire occurrence and biomass combustion. Satellite-based remote sensing offers promise for characterizing fire physical properties and impacts on the environment, but currently available sensors saturate over high-radiance targets and provide only indications of the regions and times at which fires are extensive and of their areal rate of growth as recorded in ash layers. Here we describe an approach combining satellite- and aircraft-based remote sensing with in situ measurements of smoke to estimate emissions from central Brazil. These estimates will improve global accounting of radiation-absorbing gases and particulates that may be contributing to climate change and will provide strategic data for fire management.

  6. Psychometrics evaluation of Charcot-Marie-Tooth Neuropathy Score (CMTNSv2) second version, using Rasch analysis.

    PubMed

    Sadjadi, Reza; Reilly, Mary M; Shy, Michael E; Pareyson, Davide; Laura, Matilde; Murphy, Sinead; Feely, Shawna M E; Grider, Tiffany; Bacon, Chelsea; Piscosquito, Giuseppe; Calabrese, Daniela; Burns, Ted M

    2014-09-01

    The Charcot-Marie-Tooth Neuropathy Score second version (CMTNSv2) is a validated clinical outcome measure developed for use in clinical trials to monitor disease impairment and progression in affected CMT patients. Currently, all items of the CMTNSv2 contribute identically to the total score. We used Rasch analysis to further explore the psychometric properties of the CMTNSv2, in particular category response functioning and the weight of responses on overall disease progression. Weighted category responses represent a more accurate estimate of the actual values measuring disease severity and therefore could potentially be used to improve the current version. © 2014 Peripheral Nerve Society.

  7. Examination of Solar Cycle Statistical Model and New Prediction of Solar Cycle 23

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Wilson, John W.

    2000-01-01

    Sunspot numbers in the current solar cycle 23 were estimated using a statistical model with the accumulating cycle sunspot data, based on the odd-even behavior of historical sunspot cycles 1 to 22. Since cycle 23 has progressed and the solar minimum occurrence has been accurately defined, the statistical model is validated by comparing the previous prediction with the newly measured sunspot numbers; an improved short-range sunspot projection is made accordingly. The current cycle is expected to have a moderate level of activity. Errors of this model are shown to be self-correcting as cycle observations become available.

  8. [Current situation and reflection on the prevention and treatment of burns in the elderly].

    PubMed

    Zhang, J P; Huang, Y S

    2017-09-20

    With the ageing of the population, it is estimated that the percentage of people aged above 65 years in China will approach 30% by 2035. This presents a considerable challenge to geriatric burn treatment, as elderly burn patients have more serious injuries, longer hospital lengths of stay, and higher rates of complications and mortality. In this article, we analyze the current status of burns in the elderly in China and the factors contributing to outcomes in elderly burn patients, and put forward therapeutic strategies to improve the level of prevention and treatment of burns in the elderly.

  9. Demonstrating Improvements from a NWP-based Satellite Precipitation Adjustment Technique in Tropical Mountainous Regions

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Anagnostou, E. N.

    2016-12-01

    This research contributes to the improvement of high resolution satellite applications in tropical regions with mountainous topography. Such mountainous regions are usually covered by sparse networks of in-situ observations, while quantitative precipitation estimation from satellite sensors exhibits strong underestimation of heavy orographically enhanced storm events. To address this issue, our research applies a satellite error correction technique based solely on high-resolution numerical weather predictions (NWP). Our previous work has demonstrated the accuracy of this method in two mid-latitude mountainous regions (Zhang et al. 2013*1, Zhang et al. 2016*2), while the current research focuses on a comprehensive evaluation in three tropical mountainous regions: Colombia, Peru and Taiwan. In addition, two different satellite precipitation products, the NOAA Climate Prediction Center morphing technique (CMORPH) and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS), are considered. The study includes a large number of heavy precipitation events (68 events over the three regions) in the period 2004 to 2012. The NWP-based adjustments of the two satellite products are contrasted to their corresponding gauge-adjusted post-processing products. Preliminary results show that the NWP-adjusted CMORPH product is consistently improved relative to both the original and gauge-adjusted precipitation products for all regions and storms examined. The improvement of the PERSIANN-CCS product is less significant and less consistent than the CMORPH performance improvements from the NWP-based adjustment. *1 Zhang, Xinxuan, Emmanouil N. Anagnostou, Maria Frediani, Stavros Solomos, and George Kallos. "Using NWP simulations in satellite rainfall estimation of heavy precipitation events over mountainous areas." Journal of Hydrometeorology 14, no. 6 (2013): 1844-1858. *2 Zhang, Xinxuan, Emmanouil N. Anagnostou, and Humberto Vergara. "Hydrologic Evaluation of NWP-Adjusted CMORPH Estimates of Hurricane-Induced Precipitation in the Southern Appalachians." Journal of Hydrometeorology 17.4 (2016): 1087-1099.

  10. Improved measurements of mean sea surface velocity in the Nordic Seas from synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Wergeland Hansen, Morten; Johnsen, Harald; Engen, Geir; Øie Nilsen, Jan Even

    2017-04-01

    The warm and saline surface Atlantic Water (AW) flowing into the Nordic Seas across the Greenland-Scotland ridge transports heat into the Arctic, maintaining the ice-free oceans and regulating sea-ice extent. The AW influences the region's relatively mild climate and is the northern branch of the global thermohaline overturning circulation. Heat loss in the Norwegian Sea is key for both heat transport and deep water formation. In general, the ocean currents in the Nordic Seas and the North Atlantic Ocean form a complex system of topographically steered barotropic and baroclinic currents, of which the wind stress and its variability is a driver of major importance. The synthetic aperture radar (SAR) Doppler centroid shift has been demonstrated to contain geophysical information about sea surface wind, waves and current at an accuracy of 5 Hz and a pixel spacing of 3.5-9 × 8 km². This corresponds to a horizontal surface velocity of about 20 cm/s at 35° incidence angle. The ESA Prodex ISAR project aims to implement new and improved SAR Doppler shift processing routines to enable reprocessing of the wide swath acquisitions available from the Envisat ASAR archive (2002-2012) at higher resolution and better accuracy than previously obtained, allowing combined use with Sentinel-1 and Radarsat-2 retrievals to build time series of the sea surface velocity in the Nordic Seas. Estimation of the geophysical Doppler shift from new SAR Doppler centroid shift retrievals will be demonstrated, addressing key issues relating to geometric (satellite orbit and attitude) and electronic (antenna mis-pointing) contributions and corrections. Geophysical Doppler shift retrievals from one month of data in January 2010 and the inverted surface velocity in the Nordic Seas are then addressed and compared to other direct and indirect estimates of the upper ocean current, in particular those obtained in the ESA GlobCurrent project.

  11. Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach

    PubMed Central

    Chadha, V. K.; Laxminarayan, R.; Arinaminpathy, N.

    2017-01-01

    SUMMARY BACKGROUND: There is an urgent need for improved estimations of the burden of tuberculosis (TB). OBJECTIVE: To develop a new quantitative method based on mathematical modelling, and to demonstrate its application to TB in India. DESIGN: We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and prevalence of smear-positive TB. We first compared model estimates for annual infections per smear-positive TB case using previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. RESULTS: Study model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8–156.3). Results show differences in urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. CONCLUSIONS: Simple models of TB transmission, in conjunction with necessary data, can offer approaches to burden estimation that complement those currently being used. PMID:28284250
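
    The kind of steady-state relations such a model exploits can be written down directly (these are illustrative textbook identities, not the authors' exact equations): incidence follows from prevalence divided by the mean infectious duration, and the annual infections generated per prevalent case follow from the annual risk of infection.

    ```python
    def incidence_from_prevalence(prev_per_100k, duration_years):
        """Steady state: incidence = prevalence / mean duration of infectiousness."""
        return prev_per_100k / duration_years

    def infections_per_case_year(arti, prev_per_100k):
        """Annual infections per prevalent case from the annual risk of infection."""
        return arti / (prev_per_100k / 100_000)

    # Illustrative inputs only (the duration and ARTI are assumed, not from the paper):
    print(incidence_from_prevalence(prev_per_100k=250, duration_years=2.8))  # ~89
    print(infections_per_case_year(arti=0.015, prev_per_100k=250))           # 6.0
    ```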

  12. Rip current related drowning deaths and rescues in Australia 2004-2011

    NASA Astrophysics Data System (ADS)

    Brighton, B.; Sherker, S.; Brander, R.; Thompson, M.; Bradstreet, A.

    2013-04-01

    Rip currents are a common hazard to beachgoers found on many beaches around the world, but it has proven difficult to accurately quantify the actual number of rip current related drowning deaths in many regions and countries. Consequently, reported estimates of rip current drowning can fluctuate considerably and are often based on anecdotal evidence. This study aims to quantify the incidence of rip current related drowning deaths and rescues in Australia from 2004 to 2011. A retrospective search was undertaken for fatal and non-fatal rip-related drowning incidents in Australia's National Coronial Information System (NCIS), Surf Life Saving Australia's (SLSA, 2005-2011) SurfGuard Incident Report Database (IRD), and Media Monitors for the period 1 July 2004 to 30 June 2011. In this time, rip currents were recorded as a factor in 142 fatalities of a total of 613 coastal drowning deaths (23.2%), an average of 21 per year. Rip currents were related to 44% of all beach-related drowning deaths and were involved in 57.4% of reported major rescues in Australian locations where rips occur. A comparison with international operational statistics over the same time period puts rip-related rescues at 53.7% of the total rescues in the US, 57.9% in the UK and 49.4% in New Zealand. The range 49-58% is much lower than the 80-89% traditionally cited. The results reported are likely to underestimate the size of the rip current hazard, because we are limited by the completeness of data on rip-related events; however, this is the most comprehensive estimate to date. Beach safety practitioners need improved data collection and standardized definitions across organisations. The collection of drowning data using consistent categories and the routine collection of rip current information will allow for more accurate global comparisons.

  13. Potential of European 14CO2 observation network to estimate the fossil fuel CO2 emissions via atmospheric inversions

    NASA Astrophysics Data System (ADS)

    Wang, Yilong; Broquet, Grégoire; Ciais, Philippe; Chevallier, Frédéric; Vogel, Felix; Wu, Lin; Yin, Yi; Wang, Rong; Tao, Shu

    2018-03-01

    Combining measurements of atmospheric CO2 and its radiocarbon (14CO2) fraction and transport modeling in atmospheric inversions offers a way to derive improved estimates of CO2 emitted from fossil fuel (FFCO2). In this study, we solve for the monthly FFCO2 emission budgets at regional scale (i.e., the size of a medium-sized country in Europe) and investigate the performance of different observation networks and sampling strategies across Europe. The inversion system is built on the LMDZv4 global transport model at 3.75° × 2.5° resolution. We conduct Observing System Simulation Experiments (OSSEs) and use two types of diagnostics to assess the potential of the observation and inverse modeling frameworks. The first relies on the theoretical computation of the uncertainty in the estimate of emissions from the inversion, known as posterior uncertainty, and on the uncertainty reduction compared to the uncertainty in the inventories of these emissions, which are used as prior knowledge by the inversion (called prior uncertainty). The second is based on comparisons of prior and posterior estimates of the emissions to synthetic true emissions, when these true emissions are used beforehand to generate the synthetic fossil fuel CO2 mixing ratio measurements that are assimilated in the inversion. With 17 stations currently measuring 14CO2 across Europe using 2-week integrated sampling, the uncertainty reduction for monthly FFCO2 emissions in a country where the network is rather dense, like Germany, is larger than 30%. With the 43 14CO2 measurement stations planned in Europe, the uncertainty reduction for monthly FFCO2 emissions is increased for the UK, France, Italy, eastern Europe and the Balkans, depending on the configuration of prior uncertainty. Further increasing the number of stations or the sampling frequency improves the uncertainty reduction (up to 40-70%) in high-emitting regions, but the performance of the inversion remains limited over low-emitting regions, even assuming a dense observation network covering the whole of Europe. This study also shows that both the theoretical uncertainty reduction (and resulting posterior uncertainty) from the inversion and the posterior estimate of emissions itself, for a given prior and true estimate of the emissions, are highly sensitive to the choice between two configurations of the prior uncertainty, derived from the general estimate by inventory compilers or from computations on existing inventories. In particular, when the configuration of the prior uncertainty statistics in the inversion system does not match the difference between these prior and true estimates, the posterior estimate of emissions deviates significantly from the truth. This highlights the difficulty of filtering the targeted signal in the model-data misfit for this specific inversion framework, the need to strongly rely on the prior uncertainty characterization for this and, consequently, the need for improved estimates of the uncertainties in current emission inventories for real applications with actual data. We apply the posterior uncertainty in annual emissions to the problem of detecting a trend of FFCO2, showing that increasing the monitoring period (e.g., to more than 20 years) is more efficient than reducing uncertainty in annual emissions by adding stations.
The coarse spatial resolution of the atmospheric transport model used in this OSSE (typical of models used for global inversions of natural CO2 fluxes) leads to large representation errors (related to the inability of the transport model to capture the spatial variability of the actual fluxes and mixing ratios at subgrid scales), which is a key limitation of our OSSE setup to improve the accuracy of the monitoring of FFCO2 emissions in European regions. Using a high-resolution transport model should improve the potential to retrieve FFCO2 emissions, and this needs to be investigated.
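
    The headline diagnostic is the uncertainty reduction, UR = 1 − σ_posterior/σ_prior, i.e., how much the inversion shrinks the prior standard deviation. A one-line sketch:

    ```python
    def uncertainty_reduction(sigma_prior, sigma_post):
        """UR = 1 - sigma_post / sigma_prior (0 = no information gained)."""
        return 1.0 - sigma_post / sigma_prior

    # Illustrative numbers: a 20% prior uncertainty shrunk to 13% gives UR = 35%.
    print(uncertainty_reduction(sigma_prior=0.20, sigma_post=0.13))  # 0.35
    ```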

  14. Novel Dynamic Framed-Slotted ALOHA Using Litmus Slots in RFID Systems

    NASA Astrophysics Data System (ADS)

    Yim, Soon-Bin; Park, Jongho; Lee, Tae-Jin

    Dynamic Framed Slotted ALOHA (DFSA) is one of the most popular protocols for resolving tag collisions in RFID systems. In DFSA, it is widely known that optimal performance is achieved when the frame size is equal to the number of tags, so a reader dynamically adjusts the next frame size according to the current number of tags. It is therefore important to estimate the number of tags accurately. In this paper, we propose a novel tag estimation and identification method using litmus (test) slots for DFSA. We compare the performance of the proposed method with that of existing methods by analysis. We conduct simulations and show that our scheme improves the speed of tag identification.
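
    For context, a widely used backlog estimator in plain DFSA (Schoute's) sets the next frame size from the collision count alone, each collision slot hiding about 2.39 tags on average under Poisson-distributed arrivals. The sketch below shows that generic baseline, not the paper's litmus-slot method.

    ```python
    def next_frame_size(collisions):
        """Schoute's DFSA backlog estimate: ~2.39 unresolved tags per collision slot."""
        return max(1, round(2.39 * collisions))

    # After a frame with 25 collision slots, open a ~60-slot frame next.
    print(next_frame_size(collisions=25))  # 60
    ```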

  15. Importance of pre-pregnancy and pregnancy iron status: can long-term weekly preventive iron and folic acid supplementation achieve desirable and safe status?

    PubMed

    Viteri, Fernando E; Berger, Jacques

    2005-12-01

    Most women worldwide enter pregnancy without adequate iron reserves or are already iron deficient. Estimates of iron needs during pregnancy are markedly reduced when iron reserves are available. The amounts of absorbed iron needed to correct mild to moderate anemia in the last two trimesters are estimated. Pre-pregnancy and prenatal weekly supplementation can improve iron reserves effectively and safely, preventing excess iron and favoring better pregnancy outcomes. We explain how the weekly supplementation idea was developed, why current hemoglobin norms may be set inappropriately high (especially in pregnancy), and why the excess iron recommended by many agencies for developing populations can be undesirable.

  16. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    NASA Astrophysics Data System (ADS)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions, and applies a different prediction scheme to each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.

  17. Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Gotseff, Peter

    2013-12-01

    This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.

  18. Bayesian model aggregation for ensemble-based estimates of protein pKa values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosink, Luke J.; Hogan, Emilie A.; Pulsipher, Trenton C.

    2014-03-01

    This paper investigates an ensemble-based technique called Bayesian Model Averaging (BMA) to improve the performance of protein amino acid pKa predictions. Structure-based pKa calculations play an important role in the mechanistic interpretation of protein structure and are also used to determine a wide range of protein properties. A diverse set of methods currently exists for pKa prediction, ranging from empirical statistical models to ab initio quantum mechanical approaches. However, each of these methods is based on a set of assumptions that have inherent bias and sensitivities that can affect a model's accuracy and generalizability for pKa prediction in complicated biomolecular systems. We use BMA to combine eleven diverse prediction methods that each estimate pKa values of amino acids in staphylococcal nuclease. These methods are based on work conducted for the pKa Cooperative, and the pKa measurements are based on experimental work conducted by the García-Moreno lab. Our study demonstrates that the aggregated estimate obtained from BMA outperforms all individual prediction methods in our cross-validation study, with improvements of 40-70% over other method classes. This work illustrates a new possible mechanism for improving the accuracy of pKa prediction and lays the foundation for future work on aggregate models that balance computational cost with prediction accuracy.
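
    In its simplest form, BMA returns a weighted average of the component predictions, with weights given by each method's posterior model probability. The sketch below approximates those weights from held-out residuals under a Gaussian error model; the pKa values, residuals, and error scale are all invented.

    ```python
    import numpy as np

    def bma_combine(predictions, holdout_errors, sigma=0.5):
        """Weight each method by its likelihood on held-out data, then average.
        predictions: (n_methods,); holdout_errors: (n_methods, n_holdout)."""
        loglik = -0.5 * np.sum((np.asarray(holdout_errors) / sigma) ** 2, axis=1)
        w = np.exp(loglik - loglik.max())
        w /= w.sum()                      # posterior model weights
        return float(np.dot(w, predictions))

    preds = np.array([4.2, 4.9, 4.5])    # three methods' pKa for one residue
    errs = np.array([[0.3, -0.2],        # each method's held-out residuals
                     [0.9, 1.1],
                     [0.1, 0.2]])
    print(bma_combine(preds, errs))      # pulled toward the reliable methods
    ```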

  19. Rapid scatter estimation for CBCT using the Boltzmann transport equation

    NASA Astrophysics Data System (ADS)

    Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh

    2014-03-01

    Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.

  20. Evaluation of Improvements to the TRMM Microwave Rain Algorithm

    NASA Technical Reports Server (NTRS)

    Yang, Song; Olson, Williams S.; Smith, Eric A.; Kummerow, Christian

    2002-01-01

    Improvements made to the Version 5 TRMM passive microwave rain retrieval algorithm (2A-12) are evaluated using independent data. Surface rain rate estimates from the Version 5 TRMM TMI (2A-12), PR (2A-25) and TMI/PR Combined (2B-31) algorithms and ground-based radar estimates for selected coincident subset datasets in 1998 over Melbourne and Kwajalein show varying degrees of agreement. The surface rain rates are then classified into convective and stratiform rain types over ocean, land, and coastal areas for more detailed comparisons to the ground radar measurements. These comparisons lead to a better understanding of the relative performances of the current TRMM rain algorithms. For example, at Melbourne more than 80% of the radar-derived rainfall is classified as convective rain. Convective rain from the TRMM rain algorithms is less than that from ground radar measurements, while TRMM stratiform rain is much greater. Rain area coverage from 2A-12 is also in reasonable agreement with ground radar measurements, with about 25% more over ocean and 25% less over land and coastal areas. Retrieved rain rates from the improved (Version 6) 2A-12 algorithm will be compared to 2A-25, 2B-31, and ground-based radar measurements to evaluate the impact of improvements to 2A-12 in Version 6. An important improvement to the Version 6 2A-12 algorithm is the retrieval of Q1/Q2 (latent heating/drying) profiles in addition to the surface rain rate and hydrometeor profiles. In order to ascertain the credibility of the new products, retrieved Q1/Q2 profiles are compared to independent ground-based estimates. Analyses of dual-Doppler radar data in conjunction with coincident rawinsonde data yield estimates of the vertical distributions of diabatic heating/drying at high horizontal resolution for selected cases over the Kwajalein and LBA field sites. The estimated vertical heating/drying structures appear to be reasonable. Comparisons of Q1/Q2 profiles from Version 6 2A-12 and the ground-based estimates are in progress. Retrieved Q1/Q2 structures will also be compared to MM5 hurricane simulations for selected cases. The results of these intercomparisons will be presented at the conference.

  1. DNA methylation markers in combination with skeletal and dental ages to improve age estimation in children.

    PubMed

    Shi, Lei; Jiang, Fan; Ouyang, Fengxiu; Zhang, Jun; Wang, Zhimin; Shen, Xiaoming

    2018-03-01

    Age estimation is critical in forensic science, in competitive sports and games and in other age-related fields, but the current methods are suboptimal. The combination of age-associated DNA methylation markers with skeletal age (SA) and dental age (DA) may improve the accuracy and precision of age estimation, but no study has examined this topic. In the current study, we measured SA (GP, TW3-RUS, and TW3-Carpal methods) and DA (Demirjian and Willems methods) by X-ray examination in 124 Chinese children (78 boys and 46 girls) aged 6-15 years. To identify age-associated CpG sites, we analyzed methylome-wide DNA methylation profiling by using the Illumina HumanMethylation450 BeadChip system in 48 randomly selected children. Five CpG sites were identified as associated with chronologic age (CA), with an absolute value of Pearson's correlation coefficient (r) > 0.5 (p < 0.01) and a false discovery rate < 0.01. The validation of age-associated CpG sites was performed using droplet digital PCR techniques in all 124 children. After validation, four CpG sites for boys and five CpG sites for girls were further adopted to build the age estimation model with SA and DA using multivariate linear stepwise regressions. These CpG sites were located at 4 known genes: DDO, PRPH2, DHX8, and ITGA2B and at one unknown gene with the Illumina ID number of 22398226. The accuracy of age estimation methods was compared according to the mean absolute error (MAE) and root mean square error (RMSE). The best single measure for SA was the TW3-RUS method (MAE = 0.69 years, RMSE = 0.95 years) in boys, and the GP method (MAE = 0.74 years, RMSE = 0.94 years) in girls. For DA, the Willems method was the best single measure for both boys (MAE = 0.63 years, RMSE = 0.78 years) and girls (MAE = 0.54 years, RMSE = 0.68 years). The models that incorporated SA and DA with the methylation levels of age-associated CpG sites provided the highest accuracy of age estimation in both boys (MAE = 0.47 years, R² = 0.886) and girls (MAE = 0.33 years, R² = 0.941). Cross validation of the results confirmed the reliability and validity of the models. In conclusion, age-associated DNA methylation markers in combination with SA and DA greatly improve the accuracy of age estimation in Chinese children. This method may be applied in forensic science, in competitive sports and games and in other age-related fields. Copyright © 2017. Published by Elsevier B.V.
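
    The model-building step is a standard multiple linear regression of chronologic age on SA, DA and the CpG methylation levels. The sketch below fits such a model to synthetic data by ordinary least squares (the paper's stepwise variable selection is omitted, and every number is simulated).

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 124
    sa = rng.uniform(6, 15, n)        # skeletal age (years), simulated
    da = sa + rng.normal(0, 0.6, n)   # dental age (years), simulated
    meth = rng.uniform(0, 1, (n, 4))  # four CpG methylation fractions, simulated
    age = (0.5*sa + 0.4*da + meth @ np.array([1.0, -0.5, 0.8, 0.3])
           + rng.normal(0, 0.4, n))   # synthetic chronologic age

    X = np.column_stack([np.ones(n), sa, da, meth])
    beta, *_ = np.linalg.lstsq(X, age, rcond=None)  # OLS coefficients
    mae = np.mean(np.abs(X @ beta - age))
    print(f"in-sample MAE = {mae:.2f} years")       # fit quality of the sketch
    ```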

  2. Combined prevalence of inherited skeletal disorders in dog breeds in Belgium.

    PubMed

    Coopman, F; Broeckx, B; Verelst, E; Deforce, D; Saunders, J; Duchateau, L; Verhoeven, G

    2014-01-01

    Canine hip dysplasia (CHD), canine elbow dysplasia (CED), and humeral head osteochondrosis (HHOC) are inherited traits with uneven incidence in dog breeds. Knowledge of the combined prevalence of these three disorders is necessary to estimate the effect of the currently applied breeding strategies, in order to improve the genetic health of the population. Official screening results of the Belgian National Committee for Inherited Skeletal Disorders (NCSID) revealed that an average of 31.8% (CHD, CED, or both; n = 1273 dogs) and 47.2% (CHD, CED, HHOC, or a combination of these three diseases; n = 250 dogs) of dogs are mildly to severely affected by at least one skeletal disorder. According to the current breeding recommendations in some dog breeds in Belgium, these animals should be restricted (mild signs) or excluded (moderate to severe signs) from breeding. The introduction of genetic parameters, such as estimated breeding values, might create a better approach to gradually reduce the incidence of these complex inherited joint disorders, without compromising genetic population health.

  3. Agent-Based Simulations for Project Management

    NASA Technical Reports Server (NTRS)

    White, J. Chris; Sholtes, Robert M.

    2011-01-01

    Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique in use at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.

  4. Assessment of xylem phenology: a first attempt to verify its accuracy and precision.

    PubMed

    Lupi, C; Rossi, S; Vieira, J; Morin, H; Deslauriers, A

    2014-01-01

    This manuscript aims to evaluate the precision and accuracy of current methodology for estimating xylem phenology and tracheid production in trees. Through a simple approach, sampling at two positions on the stem of co-dominant black spruce trees in two sites of the boreal forest of Quebec, we were able to quantify variability among sites, between trees and within a tree for different variables. We demonstrated that current methodology is accurate for the estimation of the onset of xylogenesis, while the accuracy for the evaluation of the ending of xylogenesis may be improved by sampling at multiple positions on the stem. The pattern of variability in different phenological variables and cell production allowed us to advance a novel hypothesis on the shift in the importance of various drivers of xylogenesis, from factors mainly varying at the level of site (e.g., climate) at the beginning of the growing season to factors varying at the level of individual trees (e.g., possibly genetic variability) at the end of the growing season.

  5. Selecting Great Lakes streams for lampricide treatment based on larval sea lamprey surveys

    USGS Publications Warehouse

    Christie, Gavin C.; Adams, Jean V.; Steeves, Todd B.; Slade, Jeffrey W.; Cuddy, Douglas W.; Fodale, Michael F.; Young, Robert J.; Kuc, Miroslaw; Jones, Michael L.

    2003-01-01

    The Empiric Stream Treatment Ranking (ESTR) system is a data-driven, model-based, decision tool for selecting Great Lakes streams for treatment with lampricide, based on estimates from larval sea lamprey (Petromyzon marinus) surveys conducted throughout the basin. The 2000 ESTR system was described and applied to larval assessment surveys conducted from 1996 to 1999. A comparative analysis of stream survey and selection data was conducted and improvements to the stream selection process were recommended. Streams were selected for treatment based on treatment cost, predicted treatment effectiveness, and the projected number of juvenile sea lampreys produced. On average, lampricide treatments were applied annually to 49 streams with 1,075 ha of larval habitat, killing 15 million larval and 514,000 juvenile sea lampreys at a total cost of $5.3 million, and marginal and mean costs of $85 and $10 per juvenile killed. The numbers of juvenile sea lampreys killed for given treatment costs showed a pattern of diminishing returns with increasing investment. Of the streams selected for treatment, those with > 14 ha of larval habitat targeted 73% of the juvenile sea lampreys for 60% of the treatment cost. Suggested improvements to the ESTR system were to improve accuracy and precision of model estimates, account for uncertainty in estimates, include all potentially productive streams in the process (not just those surveyed in the current year), consider the value of all larvae killed during treatment (not just those predicted to metamorphose the following year), use lake-specific estimates of damage, and establish formal suppression targets.

  6. The technological future of 7 T MRI hardware.

    PubMed

    Webb, A G; Van de Moortele, P F

    2016-09-01

    In this article we present our projections of future hardware developments on 7 T human MRI systems. These include compact cryogen-light magnets, improved gradient performance, integrated RF-receive and direct current shimming coil arrays, new RF technology with adaptive impedance matching, patient-specific specific absorption rate estimation and monitoring, and increased integration of physiological monitoring systems. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Radar data smoothing filter study

    NASA Technical Reports Server (NTRS)

    White, J. V.

    1984-01-01

    The accuracy of the current Wallops Flight Facility (WFF) data smoothing techniques for a variety of radars and payloads is examined. Alternative data reduction techniques are given and recommendations are made for improving radar data processing at WFF. A data adaptive algorithm, based on Kalman filtering and smoothing techniques, is also developed for estimating payload trajectories above the atmosphere from noisy time varying radar data. This algorithm is tested and verified using radar tracking data from WFF.
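
    A minimal sketch of the kind of Kalman filtering described above, assuming a simple constant-velocity state model and illustrative noise levels rather than the actual WFF radar parameters:

    ```python
    # Hedged sketch: constant-velocity Kalman filter smoothing noisy 1-D
    # radar range data. Matrices and noise levels are illustrative only.
    import numpy as np

    dt = 1.0
    F = np.array([[1, dt], [0, 1]])        # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = 0.01 * np.eye(2)                   # process noise covariance
    R = np.array([[4.0]])                  # measurement noise covariance

    x = np.zeros(2)                        # state estimate
    P = np.eye(2)                          # estimate covariance
    track = []
    for z in np.linspace(0, 50, 51) + np.random.default_rng(1).normal(0, 2, 51):
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)             # update state
        P = (np.eye(2) - K @ H) @ P                     # update covariance
        track.append(x[0])
    ```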

  8. The mercedes-benz approach to γ-ray astronomy

    NASA Astrophysics Data System (ADS)

    Akerlof, Carl W.

    1988-02-01

    The sensitivity requirements for ground-based γ-ray astronomy are reviewed in the light of the most reliable estimates of stellar fluxes above 100 GeV. Current data strongly favor the construction of detectors with the lowest energy thresholds. Since improvements in angular resolution are limited by shower fluctuations, better methods of rejecting hadronic showers must be found to reliably observe the known astrophysical sources. Several possible methods for reducing this hadronic background are discussed.

  9. Improved Methodology for Developing Cost Uncertainty Models for Naval Vessels

    DTIC Science & Technology

    2009-04-22

    Deegan, 2007). Risk cannot be assessed with a point estimate, as it represents a single value that serves as a best guess for the parameter to be...or stakeholders (Deegan & Fields, 2007). This paper analyzes the current NAVSEA 05C Cruiser (CG(X)) probabilistic cost model, including data...provided by Mr. Chris Deegan and his CG(X) analysts. The CG(X) model encompasses all factors considered for cost of the entire program, including

  10. A Review of Biological Communication Mechanisms Applicable to Small Autonomous Systems

    DTIC Science & Technology

    2010-09-01

    studies of cochlear potentials of the Myotis lucifugus indicate that the bat’s sensitivity to an acoustic signal is poor at low frequencies, improves as...1991]). 2.3.1 Antennae Insect antennae can be extremely sensitive to air flow and displacement. Many arthropods, including crickets, cockroaches...flies also use their antennae to estimate flight speed by the amount of air flowing past them. Currently, researchers are investigating how flies

  11. Alternative Methods to Standby Gain Scheduling Following Air Data System Failure

    DTIC Science & Technology

    2009-09-01

    in the event of air data system failures. There are two problems with this current method. First, the pilot must take time away from other ...pertinent tasks to manually position the standby-gains via the landing gear handle, air-to-air refueling door switch or some other means. Second, the...the way, the original airspeed estimator was improved and two other alternatives to standby-gain-scheduling were investigated. Knowing what

  12. Cloning by limiting dilution: an improved estimate that an interesting culture is monoclonal.

    PubMed Central

    Staszewski, R.

    1984-01-01

    An interesting culture obtained by limiting dilution is less likely to be monoclonal than a random viable culture. Current practice using limiting dilution to establish monoclonal lines of interesting recombinant DNA or hybridoma-derived organisms overestimates the probability that promising cultures are monoclonal, resulting in inadequate dilutions, with the need for additional subcloning and the avoidable loss (avoidable instability) of interesting lines by overgrowth with uninteresting varieties. PMID:6537695

  13. A cost-benefit analysis of The National Map

    USGS Publications Warehouse

    Halsing, David L.; Theissen, Kevin; Bernknopf, Richard

    2003-01-01

    The Geography Discipline of the U.S. Geological Survey (USGS) has conducted this cost-benefit analysis (CBA) of The National Map. This analysis is an evaluation of the proposed Geography Discipline initiative to provide the Nation with a mechanism to access current and consistent digital geospatial data. This CBA is a supporting document to accompany the Exhibit 300 Capital Asset Plan and Business Case of The National Map Reengineering Program. The framework for estimating the benefits is based on expected improvements in processing information to perform any of the possible applications of spatial data. This analysis does not attempt to determine the benefits and costs of performing geospatial-data applications. Rather, it estimates the change in the differences between those benefits and costs with The National Map and the current situation without it. The estimates of total costs and benefits of The National Map were based on the projected implementation time, development and maintenance costs, rates of data inclusion and integration, expected usage levels over time, and a benefits estimation model. The National Map provides data that are current, integrated, consistent, complete, and more accessible in order to decrease the cost of implementing spatial-data applications and (or) improve the outcome of those applications. The efficiency gains in per-application improvements are greater than the cost to develop and maintain The National Map, meaning that the program would bring a positive net benefit to the Nation. The average improvement in the net benefit of performing a spatial data application was multiplied by a simulated number of application implementations across the country. The numbers of users, existing applications, and rates of application implementation increase over time as The National Map is developed and accessed by spatial data users around the country. Results from the 'most likely' estimates of model parameters and data inputs indicate that, over its 30-year projected lifespan, The National Map will bring a net present value (NPV) of benefits of $2.05 billion in 2001 dollars. The average time until the initial investments (the break-even period) are recovered is 14 years. Table ES-1 shows a running total of NPV in each year of the simulation model. In year 14, The National Map first shows a positive NPV, and so the table is highlighted in gray after that point. Figure ES-1 is a graph of the total benefit and total cost curves of a single model run over time. The curves cross in year 14, when the project breaks even. A sensitivity analysis of the input variables illustrated that these results of the NPV of The National Map are quite robust. Figure ES-2 plots the mean NPV results from 60 different scenarios, each consisting of fifty 30-year runs. The error bars represent a two-standard-deviation range around each mean. The analysis that follows contains the details of the cost-benefit analysis, the framework for evaluating economic benefits, a computational simulation tool, and a sensitivity analysis of model variables and values.
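
    The break-even logic described above reduces to discounting yearly benefit and cost streams and finding the first year the cumulative net present value turns positive. A minimal sketch with placeholder streams and a hypothetical 7% discount rate (the report's actual inputs are not reproduced here):

    ```python
    # Hedged sketch: running NPV over a 30-year horizon and the first
    # break-even year. Streams and the discount rate are illustrative.
    def breakeven_year(benefits, costs, rate=0.07):
        npv = 0.0
        for year, (b, c) in enumerate(zip(benefits, costs), start=1):
            npv += (b - c) / (1 + rate) ** year   # discount net cash flow
            if npv > 0:
                return year, npv
        return None, npv

    # Toy profile: heavy early investment, benefits growing with adoption.
    costs = [60] * 5 + [20] * 25
    benefits = [min(10 * y, 90) for y in range(1, 31)]
    print(breakeven_year(benefits, costs))
    ```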

  14. Models and Measurements of the Rotation of Mars

    NASA Astrophysics Data System (ADS)

    Folkner, W. M.; Konopliv, A. S.; Park, R. S.; Dehant, V. M. A.; Yseboodt, M.; Rivoldini, A.

    2016-12-01

    The rotation of Mars has been determined more accurately than that of any other planet except Earth. This has been done using radio tracking data from spacecraft orbiting Mars or landed on Mars, starting with Mariner 9 in 1972 and continuing through the present with several orbiters currently in operation. The Viking landers in 1976 provided the first clear measurements of variation in length of day. Mars Pathfinder data combined with Viking lander data provided the first estimate of the martian precession rate. The model for rigid Mars rotation developed by Reasenberg and King for Viking data analysis is accurate enough to fit the currently available measurements. With the InSight mission to be launched in 2018 and the ExoMars lander mission to be launched in 2020, nutation of Mars due to non-rigid effects is expected to be detectable, requiring improved models for the effects of the martian fluid core. We will present an overview of the current measurement sets, including comparisons of length-of-day variations from independent subsets and plans for the InSight and ExoMars missions, and summarize potential modeling improvements.

  15. Characterization of particulate emissions from Australian open-cut coal mines: Toward improved emission estimates.

    PubMed

    Richardson, Claire; Rutherford, Shannon; Agranovski, Igor

    2018-06-01

    Given the significance of mining as a source of particulates, accurate characterization of emissions is important for the development of appropriate emission estimation techniques for use in modeling predictions and to inform regulatory decisions. The currently available emission estimation methods for Australian open-cut coal mines relate primarily to total suspended particulates and PM10 (particulate matter with an aerodynamic diameter <10 μm), and limited data are available for the PM2.5 (<2.5 μm) size fraction. To provide an initial analysis of the appropriateness of the currently available emission estimation techniques, this paper presents results of sampling completed at three open-cut coal mines in Australia. The monitoring data demonstrate that the particulate size fraction varies for different mining activities and that the region in which a mine is located influences the characteristics of the particulates emitted to the atmosphere. The proportion of fine particulates in the sample increased with distance from the source, with the coarse fraction being a more significant proportion of total suspended particulates close to the source of emissions. In terms of particulate composition, the results demonstrate that the particulate emissions are predominantly sourced from naturally occurring geological material, and coal comprises less than 13% of the overall emissions. The size fractionation exhibited by the sampling data sets is similar to that adopted in current Australian emission estimation methods but differs from the size fractionation presented in the U.S. Environmental Protection Agency methodology. Development of region-specific emission estimation techniques for PM10 and PM2.5 from open-cut coal mines is necessary to allow accurate prediction of particulate emissions, to inform regulatory decisions, and for use in modeling predictions. Comprehensive air quality monitoring was undertaken, and corresponding recommendations were provided.

  16. Machine Learning for Social Services: A Study of Prenatal Case Management in Illinois.

    PubMed

    Pan, Ian; Nolan, Laura B; Brown, Rashida R; Khan, Romana; van der Boor, Paul; Harris, Daniel G; Ghani, Rayid

    2017-06-01

    To evaluate the positive predictive value of machine learning algorithms for early assessment of adverse birth risk among pregnant women as a means of improving the allocation of social services. We used administrative data for 6457 women collected by the Illinois Department of Human Services from July 2014 to May 2015 to develop a machine learning model for adverse birth prediction and improve upon the existing paper-based risk assessment. We compared different models and determined the strongest predictors of adverse birth outcomes using positive predictive value as the metric for selection. Machine learning algorithms performed similarly, outperforming the current paper-based risk assessment by up to 36%; a refined paper-based assessment outperformed the current assessment by up to 22%. We estimate that these improvements will allow 100 to 170 additional high-risk pregnant women screened for program eligibility each year to receive services that would have otherwise been unobtainable. Our analysis exhibits the potential for machine learning to move government agencies toward a more data-informed approach to evaluating risk and providing social services. Overall, such efforts will improve the efficiency of allocating resource-intensive interventions.
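
    As a minimal illustration of the selection metric named above, the sketch below computes positive predictive value among the top-k highest-risk cases on synthetic data; it is not the study's model or data.

    ```python
    # Hedged sketch: PPV at k, i.e., the fraction of the k highest-scored
    # cases that actually had an adverse outcome. Data are synthetic.
    import numpy as np

    def ppv_at_k(scores, outcomes, k):
        top = np.argsort(scores)[::-1][:k]     # k highest-risk cases
        return outcomes[top].mean()            # fraction with adverse outcome

    rng = np.random.default_rng(0)
    risk = rng.uniform(0, 1, 6457)                            # model scores
    adverse = (rng.uniform(0, 1, 6457) < risk * 0.2).astype(float)
    print(ppv_at_k(risk, adverse, k=500))
    ```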

  17. A generalized muon trajectory estimation algorithm with energy loss for application to muon tomography

    NASA Astrophysics Data System (ADS)

    Chatzidakis, Stylianos; Liu, Zhengzhi; Hayward, Jason P.; Scaglione, John M.

    2018-03-01

    This work presents a generalized muon trajectory estimation algorithm to estimate the path of a muon in either uniform or nonuniform media. The use of cosmic ray muons in nuclear nonproliferation and safeguard verification applications has recently gained attention due to the non-intrusive and passive nature of the inspection, penetrating capabilities, as well as recent advances in detectors that measure position and direction of the individual muons before and after traversing the imaged object. However, muon image reconstruction techniques are limited in resolution due to low muon flux and the effects of multiple Coulomb scattering (MCS). Current reconstruction algorithms, e.g., point of closest approach (PoCA) or straight-line path (SLP), rely on overly simple assumptions for muon path estimation through the imaged object. For robust muon tomography, efficient and flexible physics-based algorithms are needed to model the MCS process and accurately estimate the most probable trajectory of a muon as it traverses an object. In the present work, the use of a Bayesian framework and a Gaussian approximation of MCS is explored for estimation of the most likely path of a cosmic ray muon traversing uniform or nonuniform media and undergoing MCS. The algorithm's precision is compared to Monte Carlo simulated muon trajectories. It was found that the algorithm is expected to be able to predict muon tracks to less than 1.5 mm root mean square (RMS) for 0.5 GeV muons and 0.25 mm RMS for 3 GeV muons, a 50% improvement compared to SLP and 15% improvement when compared to PoCA. Further, a 30% increase in useful muon flux was observed relative to PoCA. Muon track prediction improved for higher muon energies or smaller penetration depth where energy loss is not significant. The effect of energy loss due to ionization is investigated, and a linear energy loss relation that is easy to use is proposed.
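
    For context, the PoCA baseline that the abstract compares against can be computed geometrically from the incoming and outgoing muon rays. A minimal sketch (the paper's Bayesian most-probable-trajectory algorithm is not reproduced here):

    ```python
    # Hedged sketch: point of closest approach (PoCA) between the incoming
    # ray (p1, d1) and outgoing ray (p2, d2), taken as the midpoint of the
    # two closest points. Uses the standard closest-points-of-lines formula.
    import numpy as np

    def poca(p1, d1, p2, d2):
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w0 = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b                  # ~0 only for parallel rays
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

    print(poca(np.array([0, 0, 0.0]), np.array([0, 0, 1.0]),
               np.array([0.1, 0, 2.0]), np.array([0.05, 0, 1.0])))
    ```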

  18. A generalized muon trajectory estimation algorithm with energy loss for application to muon tomography

    DOE PAGES

    Chatzidakis, Stylianos; Liu, Zhengzhi; Hayward, Jason P.; ...

    2018-03-28

    Here, this work presents a generalized muon trajectory estimation (GMTE) algorithm to estimate the path of a muon in either uniform or nonuniform media. The use of cosmic ray muons in nuclear nonproliferation and safeguards verification applications has recently gained attention due to the non-intrusive and passive nature of the inspection, penetrating capabilities, as well as recent advances in detectors that measure position and direction of the individual muons before and after traversing the imaged object. However, muon image reconstruction techniques are limited in resolution due to low muon flux and the effects of multiple Coulomb scattering (MCS). Current reconstruction algorithms, e.g., point of closest approach (PoCA) or straight-line path (SLP), rely on overly simple assumptions for muon path estimation through the imaged object. For robust muon tomography, efficient and flexible physics-based algorithms are needed to model the MCS process and accurately estimate the most probable trajectory of a muon as it traverses an object. In the present work, the use of a Bayesian framework and a Gaussian approximation of MCS are explored for estimation of the most likely path of a cosmic ray muon traversing uniform or nonuniform media and undergoing MCS. The algorithm's precision is compared to Monte Carlo simulated muon trajectories. It was found that the algorithm is expected to be able to predict muon tracks to less than 1.5 mm RMS for 0.5 GeV muons and 0.25 mm RMS for 3 GeV muons, a 50% improvement compared to SLP and 15% improvement when compared to PoCA. Further, a 30% increase in useful muon flux was observed relative to PoCA. Muon track prediction improved for higher muon energies or smaller penetration depth where energy loss is not significant. Finally, the effect of energy loss due to ionization is investigated, and a linear energy loss relation that is easy to use is proposed.

  19. A generalized muon trajectory estimation algorithm with energy loss for application to muon tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatzidakis, Stylianos; Liu, Zhengzhi; Hayward, Jason P.

    Here, this work presents a generalized muon trajectory estimation (GMTE) algorithm to estimate the path of a muon in either uniform or nonuniform media. The use of cosmic ray muons in nuclear nonproliferation and safeguards verification applications has recently gained attention due to the non-intrusive and passive nature of the inspection, penetrating capabilities, as well as recent advances in detectors that measure position and direction of the individual muons before and after traversing the imaged object. However, muon image reconstruction techniques are limited in resolution due to low muon flux and the effects of multiple Coulomb scattering (MCS). Current reconstruction algorithms, e.g., point of closest approach (PoCA) or straight-line path (SLP), rely on overly simple assumptions for muon path estimation through the imaged object. For robust muon tomography, efficient and flexible physics-based algorithms are needed to model the MCS process and accurately estimate the most probable trajectory of a muon as it traverses an object. In the present work, the use of a Bayesian framework and a Gaussian approximation of MCS are explored for estimation of the most likely path of a cosmic ray muon traversing uniform or nonuniform media and undergoing MCS. The algorithm's precision is compared to Monte Carlo simulated muon trajectories. It was found that the algorithm is expected to be able to predict muon tracks to less than 1.5 mm RMS for 0.5 GeV muons and 0.25 mm RMS for 3 GeV muons, a 50% improvement compared to SLP and 15% improvement when compared to PoCA. Further, a 30% increase in useful muon flux was observed relative to PoCA. Muon track prediction improved for higher muon energies or smaller penetration depth where energy loss is not significant. Finally, the effect of energy loss due to ionization is investigated, and a linear energy loss relation that is easy to use is proposed.

  20. An Economic Evaluation of Food Safety Education Interventions: Estimates and Critical Data Gaps.

    PubMed

    Zan, Hua; Lambea, Maria; McDowell, Joyce; Scharff, Robert L

    2017-08-01

    The economic evaluation of food safety interventions is an important tool that practitioners and policy makers use to assess the efficacy of their efforts. These evaluations are built on models that are dependent on accurate estimation of numerous input variables. In many cases, however, there is no data available to determine input values and expert opinion is used to generate estimates. This study uses a benefit-cost analysis of the food safety component of the adult Expanded Food and Nutrition Education Program (EFNEP) in Ohio as a vehicle for demonstrating how results based on variable values that are not objectively determined may be sensitive to alternative assumptions. In particular, the focus here is on how reported behavioral change is translated into economic benefits. Current gaps in the literature make it impossible to know with certainty how many people are protected by the education (what are the spillover effects?), the length of time education remains effective, and the level of risk reduction from change in behavior. Based on EFNEP survey data, food safety education led 37.4% of participants to improve their food safety behaviors. Under reasonable default assumptions, benefits from this improvement significantly outweigh costs, yielding a benefit-cost ratio of between 6.2 and 10.0. Incorporation of a sensitivity analysis using alternative estimates yields a greater range of estimates (0.2 to 56.3), which highlights the importance of future research aimed at filling these research gaps. Nevertheless, most reasonable assumptions lead to estimates of benefits that justify their costs.

  1. Bringing Together Users and Developers of Forest Biomass Maps

    NASA Technical Reports Server (NTRS)

    Brown, Molly E.; Macauley, Molly

    2011-01-01

    Forests store carbon and thus represent important sinks for atmospheric carbon dioxide. Reducing uncertainty in current estimates of the amount of carbon in standing forests will improve precision of estimates of anthropogenic contributions to carbon dioxide in the atmosphere due to deforestation. Although satellite remote sensing has long been an important tool for mapping land cover, until recently aboveground forest biomass estimates have relied mostly on systematic ground sampling of forests. In alignment with fiscal year 2010 congressional direction, NASA has initiated work toward a carbon monitoring system (CMS) that includes both maps of forest biomass and total carbon flux estimates. A goal of the project is to ensure that the products are useful to a wide community of scientists, managers, and policy makers, as well as to carbon cycle scientists. Understanding the needs and requirements of these data users is helpful not just to the NASA CMS program but also to the entire community working on carbon-related activities. To that end, this meeting brought together a small group of natural resource managers and policy makers who use information on forests in their work with NASA scientists who are working to create aboveground forest biomass maps. These maps, derived from combining remote sensing and ground plots, aim to be more accurate than current inventory approaches when applied at local and regional scales.

  2. Value-based decision-making battery: A Bayesian adaptive approach to assess impulsive and risky behavior.

    PubMed

    Pooseh, Shakoor; Bernhardt, Nadine; Guevara, Alvaro; Huys, Quentin J M; Smolka, Michael N

    2018-02-01

    Using simple mathematical models of choice behavior, we present a Bayesian adaptive algorithm to assess measures of impulsive and risky decision making. Practically, these measures are characterized by discounting rates and are used to classify individuals or population groups, to distinguish unhealthy behavior, and to predict developmental courses. However, a constant demand for improved tools to assess these constructs remains unmet. The algorithm is based on trial-by-trial observations. At each step, a choice is made between immediate (certain) and delayed (risky) options. The current parameter estimates are then updated by the likelihood of observing the choice, and the next offers are placed at the indifference point, so that they acquire the most informative data given the current parameter estimates. The procedure continues for a set number of trials in order to reach a stable estimate. The algorithm is discussed in detail for the delay discounting case, and results from decision making under risk for gains, losses, and mixed prospects are also provided. Simulated experiments using prescribed parameter values were performed to justify the algorithm in terms of the reproducibility of its parameters for individual assessments and to test the reliability of the estimation procedure in a group-level analysis. The algorithm was implemented as an experimental battery to measure temporal and probability discounting rates together with loss aversion, and was tested on a healthy participant sample.
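
    A minimal sketch of the trial-by-trial update for the delay-discounting case, assuming a hyperbolic discounting model, a logistic choice rule, and a grid posterior over the discount rate k; these modeling choices are common practice and may differ from the paper's exact specification.

    ```python
    # Hedged sketch: after each choice, the grid posterior over k is
    # reweighted by the choice likelihood, and the next immediate offer is
    # placed at the current indifference point. Choice data here are toy.
    import numpy as np

    k_grid = np.logspace(-3, 0, 200)            # candidate discount rates
    post = np.ones_like(k_grid) / k_grid.size   # uniform prior

    def p_delayed(amount_now, amount_later, delay, k, beta=5.0):
        v_later = amount_later / (1 + k * delay)       # hyperbolic discounting
        return 1 / (1 + np.exp(-beta * (v_later - amount_now)))

    amount_later, delay = 100.0, 30.0
    for chose_delayed in [1, 0, 1, 1, 0]:              # observed choices (toy)
        k_hat = np.sum(k_grid * post)                  # posterior mean of k
        offer_now = amount_later / (1 + k_hat * delay) # indifference point
        lik = p_delayed(offer_now, amount_later, delay, k_grid)
        post *= lik if chose_delayed else (1 - lik)    # Bayesian update
        post /= post.sum()
    print("estimated k:", np.sum(k_grid * post))
    ```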

  3. Understanding and estimating effective population size for practical application in marine species management.

    PubMed

    Hare, Matthew P; Nunney, Leonard; Schwartz, Michael K; Ruzzante, Daniel E; Burford, Martha; Waples, Robin S; Ruegg, Kristen; Palstra, Friso

    2011-06-01

    Effective population size (N(e)) determines the strength of genetic drift in a population and has long been recognized as an important parameter for evaluating conservation status and threats to genetic health of populations. Specifically, an estimate of N(e) is crucial to management because it integrates genetic effects with the life history of the species, allowing for predictions of a population's current and future viability. Nevertheless, compared with ecological and demographic parameters, N(e) has had limited influence on species management, beyond its application in very small populations. Recent developments have substantially improved N(e) estimation; however, some obstacles remain for the practical application of N(e) estimates. For example, the need to define the spatial and temporal scale of measurement makes the concept complex and sometimes difficult to interpret. We reviewed approaches to estimation of N(e) over both long-term and contemporary time frames, clarifying their interpretations with respect to local populations and the global metapopulation. We describe multiple experimental factors affecting robustness of contemporary N(e) estimates and suggest that different sampling designs can be combined to compare largely independent measures of N(e) for improved confidence in the result. Large populations with moderate gene flow pose the greatest challenges to robust estimation of contemporary N(e) and require careful consideration of sampling and analysis to minimize estimator bias. We emphasize the practical utility of estimating N(e) by highlighting its relevance to the adaptive potential of a population and describing applications in management of marine populations, where the focus is not always on critically endangered populations. Two cases discussed include the mechanisms generating N(e) estimates many orders of magnitude lower than census N in harvested marine fishes and the predicted reduction in N(e) from hatchery-based population supplementation. ©2011 Society for Conservation Biology.

  4. Computer modelling of cyclic deformation of high-temperature materials. Progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duesbery, M.S.; Louat, N.P.

    1992-11-16

    Current methods of lifetime assessment leave much to be desired. Typically, the expected life of a full-scale component exposed to a complex environment is based upon empirical interpretations of measurements performed on microscopic samples in controlled laboratory conditions. Extrapolation to the service component is accomplished by scaling laws which, if used at all, are empirical; little or no attention is paid to synergistic interactions between the different components of the real environment. With the increasingly hostile conditions which must be faced in modern aerospace applications, improvement in lifetime estimation is mandated by both cost and safety considerations. This program aims at improving current methods of lifetime assessment by building in the characteristics of the micro-mechanisms known to be responsible for damage and failure. The broad approach entails the integration and, where necessary, augmentation of the micro-scale research results currently available in the literature into a macro-scale model with predictive capability. In more detail, the program will develop a set of hierarchically structured models at different length scales, from atomic to macroscopic, at each level taking as parametric input the results of the model at the next smaller scale. In this way the known microscopic properties can be transported by systematic procedures to the unknown macro-scale region. It may not be possible to eliminate empiricism completely, because some of the quantities involved cannot yet be estimated to the required degree of precision. In this case the aim will be at least to eliminate functional empiricism.

  5. A near-infrared relationship for estimating black hole masses in active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Landt, Hermine; Ward, Martin J.; Peterson, Bradley M.; Bentz, Misty C.; Elvis, Martin; Korista, Kirk T.; Karovska, Margarita

    2013-06-01

    Black hole masses for samples of active galactic nuclei (AGN) are currently estimated from single-epoch optical spectra using scaling relations anchored in reverberation mapping results. In particular, the two quantities needed for calculating black hole masses, namely the velocity and the radial distance of the orbiting gas are derived from the widths of the Balmer hydrogen broad emission lines and the optical continuum luminosity, respectively. We have recently presented a near-infrared (near-IR) relationship for estimating AGN black hole masses based on the widths of the Paschen hydrogen broad emission lines and the total 1 μm continuum luminosity. The near-IR offers several advantages over the optical: it suffers less from dust extinction, the AGN continuum is observed only weakly contaminated by the host galaxy and the strongest Paschen broad emission lines Paα and Paβ are unblended. Here, we improve the calibration of the near-IR black hole mass relationship by increasing the sample from 14 to 23 reverberation-mapped AGN using additional spectroscopy obtained with the Gemini Near-Infrared Spectrograph. The additional sample improves the number statistics in particular at the high-luminosity end.

  6. Using astrophysical jets for establishing an upper limit for the photon mass

    NASA Astrophysics Data System (ADS)

    Ryutov, D. D.

    2004-11-01

    Finite photon mass is compatible with general principles of the relativity theory; how small it (the mass) actually is has to be established experimentally. The presently accepted upper limit [1], 10⁻²² of the electron mass, is established [2] based on observations of the solar wind. This estimate corresponds to a photon Compton length of L = 3×10⁶ km. I discuss possible ways of improving this estimate based on the properties of those astrophysical jets where the pinch force is important for establishing the jet structure. It turns out that, if the jet radius is much greater than L, both pinch equilibrium and stability become very different from the massless-photon case. In particular, the equilibrium pressure maximum coincides with the maximum of the current density. These new features are often incompatible with the observations, providing a way to improve the estimate of the photon mass by orders of magnitude. Work performed for the U.S. DOE by UC LLNL under contract W-7405-Eng-48. [1] S. Eidelman et al. (Particle Data Group), "Review of Particle Physics," Phys. Lett. B 592, p. 1, 2004; [2] D.D. Ryutov, Plasma Phys. Contr. Fus. 39, p. A73, 1997.

  7. Valuing Groundwater Resources in Arid Watersheds under Climate Change: A Framework and Estimates for the Upper Rio Grande

    NASA Astrophysics Data System (ADS)

    Hurd, B. H.; Coonrod, J.

    2008-12-01

    Climate change is expected to alter surface hydrology throughout the arid Western United States, in most cases compressing the period of peak snowmelt and runoff and, in some cases, for example the Rio Grande, limiting total runoff. As such, climate change is widely expected to further stress arid watersheds, particularly in regions where trends in population growth, economic development, and environmental regulation are current challenges. Strategies to adapt to such changes are evolving at various institutional levels, including conjunctive management of surface and ground waters. Groundwater resources remain one of the key components of water management strategies aimed at accommodating continued population growth and mitigating the potential for water supply disruptions under climate change. By developing a framework for valuing these resources and for valuing improvements in the information pertaining to their characteristics, this research can assist in prioritizing infrastructure and investment to change and enhance water resource management. The key objectives of this paper are to (1) develop a framework for estimating the value of groundwater resources and improved information, and (2) provide some preliminary estimates of this value and how it responds to plausible scenarios of climate change.

  8. Weighing trees with lasers: advances, challenges and opportunities

    PubMed Central

    Boni Vicari, M.; Burt, A.; Calders, K.; Lewis, S. L.; Raumonen, P.; Wilkes, P.

    2018-01-01

    Terrestrial laser scanning (TLS) is providing exciting new ways to quantify tree and forest structure, particularly above-ground biomass (AGB). We show how TLS can address some of the key uncertainties and limitations of current approaches to estimating AGB based on empirical allometric scaling equations (ASEs) that underpin all large-scale estimates of AGB. TLS provides extremely detailed non-destructive measurements of tree form independent of tree size and shape. We show examples of three-dimensional (3D) TLS measurements from various tropical and temperate forests and describe how the resulting TLS point clouds can be used to produce quantitative 3D models of branch and trunk size, shape and distribution. These models can drastically improve estimates of AGB, provide new, improved large-scale ASEs, and deliver insights into a range of fundamental tree properties related to structure. Large quantities of detailed measurements of individual 3D tree structure also have the potential to open new and exciting avenues of research in areas where difficulties of measurement have until now prevented statistical approaches to detecting and understanding underlying patterns of scaling, form and function. We discuss these opportunities and some of the challenges that remain to be overcome to enable wider adoption of TLS methods. PMID:29503726

  9. Intelligent voltage control strategy for three-phase UPS inverters with output LC filter

    NASA Astrophysics Data System (ADS)

    Jung, J. W.; Leu, V. Q.; Dang, D. Q.; Do, T. D.; Mwasilu, F.; Choi, H. H.

    2015-08-01

    This paper presents a supervisory fuzzy neural network control (SFNNC) method for a three-phase inverter of uninterruptible power supplies (UPSs). The proposed voltage controller is comprised of a fuzzy neural network control (FNNC) term and a supervisory control term. The FNNC term is deliberately employed to estimate the uncertain terms, and the supervisory control term is designed based on the sliding mode technique to stabilise the system dynamic errors. To improve the learning capability, the FNNC term incorporates an online parameter training methodology, using the gradient descent method and Lyapunov stability theory. Besides, a linear load current observer that estimates the load currents is used to exclude the load current sensors. The proposed SFNN controller and the observer are robust to the filter inductance variations, and their stability analyses are described in detail. The experimental results obtained on a prototype UPS test bed with a TMS320F28335 DSP are presented to validate the feasibility of the proposed scheme. Verification results demonstrate that the proposed control strategy can achieve smaller steady-state error and lower total harmonic distortion when subjected to nonlinear or unbalanced loads compared to the conventional sliding mode control method.

  10. Gap state analysis in electric-field-induced band gap for bilayer graphene.

    PubMed

    Kanayama, Kaoru; Nagashio, Kosuke

    2015-10-29

    The origin of the low current on/off ratio at room temperature in dual-gated bilayer graphene field-effect transistors is considered to be variable-range hopping in gap states. However, a quantitative estimation of gap states has not been conducted. Here, we report the systematic estimation of the energy gap by both quantum capacitance and transport measurements and of the density of states for gap states by the conductance method. An energy gap of ~250 meV is obtained at the maximum displacement field of ~3.1 V/nm, where a current on/off ratio of ~3 × 10³ is demonstrated at 20 K. The density of states for the gap states is in the range from the latter half of 10¹² to 10¹³ eV⁻¹ cm⁻². Although the large number of gap states at the high-k oxide/bilayer graphene interface limits the current on/off ratio at present, our results suggest that reducing gap states below ~10¹¹ eV⁻¹ cm⁻² through continual improvement of the gate stack would make bilayer graphene a promising candidate for future nanoelectronic device applications.

  11. A Review of Issues Related to Data Acquisition and Analysis in EEG/MEG Studies

    PubMed Central

    Puce, Aina; Hämäläinen, Matti S.

    2017-01-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) are non-invasive electrophysiological methods, which record electric potentials and magnetic fields due to electric currents in synchronously-active neurons. With MEG being more sensitive to neural activity from tangential currents and EEG being able to detect both radial and tangential sources, the two methods are complementary. Over the years, neurophysiological studies have changed considerably: high-density recordings are becoming de rigueur; there is interest in both spontaneous and evoked activity; and sophisticated artifact detection and removal methods are available. Improved head models for source estimation have also increased the precision of the current estimates, particularly for EEG and combined EEG/MEG. Because of their complementarity, more investigators are beginning to perform simultaneous EEG/MEG studies to gain more complete information about neural activity. Given the increase in methodological complexity in EEG/MEG, it is important to gather data that are of high quality and that are as artifact free as possible. Here, we discuss some issues in data acquisition and analysis of EEG and MEG data. Practical considerations for different types of EEG and MEG studies are also discussed. PMID:28561761

  12. How to reduce the uncertainties in predictions of local coastal sea level as decision support: the contribution of GGOS

    NASA Astrophysics Data System (ADS)

    Plag, H.-P.

    2009-04-01

    Local sea level (LSL) rise is one of the major anticipated impacts of future global warming. In many low-lying and often subsiding coastal areas, an increase of local sea-surface height is likely to increase the hazards of storm surges and hurricanes and to lead to major inundation. Single major disasters due to storm surges and hurricanes hitting densely populated urban areas are estimated to inflict losses in excess of $100 billion. Decision makers face a trade-off between imposing the very high costs of coastal protection, mitigation, and adaptation upon today's national economies and leaving the costs of potential major disasters to future generations. Risk and vulnerability assessments in support of informed decisions require as input predictions of the range of future LSL rise with reliable estimates of uncertainties. Secular changes in LSL are the result of a mix of location-dependent factors, including ocean temperature and salinity changes, ocean and atmospheric circulation changes, mass exchange of the ocean with terrestrial water storage and the cryosphere, and vertical land motion. Current aleatory uncertainties in observations relevant to past and current LSL changes, combined with epistemic uncertainties in some of the forcing functions for LSL changes, produce a large range of plausible future LSL trajectories. This large range hampers the development of reasonable mitigation and adaptation strategies in the coastal zone. A detailed analysis of the uncertainties helps to answer the question of what and how observations could help to reduce the uncertainties. The analysis shows that the Global Geodetic Observing System (GGOS) provides valuable observations and products towards this goal. Observations of the large ice sheets can improve the constraints on the current mass balance of the cryosphere and support cryosphere model validation. Vertical land motion close to melting ice sheets is highly relevant to validating models of the elastic response of the Earth to glacial unloading. Combining satellite gravity missions with ground-based observations of gravity and vertical land motion in areas with significant mass changes (in the cryosphere, land water storage, and ocean) could help to improve models of the global water and energy cycle, which ultimately improves the understanding of current LSL changes. For LSL projections, local vertical land motion given in a reference frame tied to the center of mass is an important input, which currently contributes significantly to the error budget of LSL predictions. Improvements of the terrestrial reference frame would reduce this error contribution.

  13. Driving range estimation for electric vehicles based on driving condition identification and forecast

    NASA Astrophysics Data System (ADS)

    Pan, Chaofeng; Dai, Wei; Chen, Liao; Chen, Long; Wang, Limei

    2017-10-01

    With the impact of serious environmental pollution in our cities combined with the ongoing depletion of oil resources, electric vehicles are becoming highly favored as a means of transport, not only for their low noise but also for their high energy efficiency and zero tailpipe emissions. The power battery is the energy source of electric vehicles, but it still has shortcomings, notably low energy density, high cost, and short cycle life, which result in limited mileage compared with conventional passenger vehicles. Vehicle energy consumption rates differ greatly across environments and driving conditions, and the estimation error of current driving-range methods is relatively large because the effects of environmental temperature and driving conditions are not considered. The development of an accurate driving range estimation method will therefore have a great impact on electric vehicles. A new driving range estimation model based on the combination of driving cycle identification and prediction is proposed and investigated. This model can effectively reduce mileage error and has good convergence and robustness. First, driving cycles are identified using kernel principal component feature parameters and a fuzzy C-means clustering algorithm. Second, a fuzzy rule relating the characteristic parameters to energy consumption is established in the MATLAB/Simulink environment. Furthermore, a Markov algorithm and a BP (back propagation) neural network are used to predict future driving conditions and improve the accuracy of the remaining-range estimate. Finally, driving range estimation is carried out under the ECE 15 cycle on a rotary drum test bench, and the experimental results are compared with the estimates. Results show that the proposed driving range estimation method can not only estimate the remaining mileage but also eliminate the fluctuation of the residual range under different driving conditions.
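
    A minimal sketch of the Markov step described above, assuming driving-condition classes have already been identified; the fuzzy C-means identification and BP neural network energy model are not reproduced here.

    ```python
    # Hedged sketch: learn a transition matrix over identified
    # driving-condition classes and predict the next condition from the
    # current one. The class sequence below is a toy stand-in.
    import numpy as np

    conditions = [0, 0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1]   # identified classes
    n = 3
    T = np.zeros((n, n))
    for a, b in zip(conditions[:-1], conditions[1:]):
        T[a, b] += 1                                    # count transitions
    T /= T.sum(axis=1, keepdims=True)                   # row-normalize

    current = 1
    print("next-condition probabilities:", T[current])
    print("predicted next condition:", int(np.argmax(T[current])))
    ```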

  14. Near Real-time GNSS-based Ionospheric Model using Expanded Kriging in the East Asia Region

    NASA Astrophysics Data System (ADS)

    Choi, P. H.; Bang, E.; Lee, J.

    2016-12-01

    Many applications which utilize radio waves (e.g., navigation, communications, and radio sciences) are influenced by the ionosphere. The technology to provide global ionospheric maps (GIM) showing ionospheric Total Electron Content (TEC) has progressed through the processing of GNSS data. However, GIMs have limited spatial resolution (e.g., 2.5° in latitude and 5° in longitude) because they are generated using globally distributed, and thus relatively sparse, GNSS reference station networks. This study presents a near real-time, high-spatial-resolution TEC model over East Asia built from ionospheric observables of both the International GNSS Service (IGS) and local GNSS networks, using an expanded kriging method. New signals from multiple constellations (e.g., GPS L5, Galileo E5) were also used to generate high-precision TEC estimates. The newly proposed estimation method is based on the universal kriging interpolation technique but integrates TEC data from previous epochs with those from the current epoch to improve estimation performance by increasing ionospheric observability. To propagate previous measurements to the current epoch, we implemented a Kalman filter whose dynamic model was derived from a first-order Gauss-Markov process characterizing temporal ionospheric changes under nominal ionospheric conditions. Along with the TEC estimates at grid points, the method generates confidence bounds on the estimates from the resulting estimation covariance. We also suggest classifying the confidence bounds into several categories to allow users to recognize the quality levels of TEC estimates according to the requirements of their applications. This paper examines the performance of the proposed method for both nominal and disturbed ionospheric conditions and compares the results to the GIM of the NASA Jet Propulsion Laboratory. In addition, the estimation results based on the expanded kriging method are compared to those from the universal kriging method for both nominal and disturbed ionospheric conditions.
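
    A minimal sketch of the temporal propagation described above: a scalar Kalman filter whose dynamic model is a first-order Gauss-Markov process. The correlation time, variances, and the interpretation of the state as a TEC deviation from a background mean are illustrative assumptions, not the paper's tuning.

    ```python
    # Hedged sketch: first-order Gauss-Markov dynamics carry the previous
    # epoch's TEC estimate forward, then each new measurement updates it.
    import numpy as np

    tau = 600.0                              # correlation time (s), assumed
    dt = 30.0                                # epoch interval (s), assumed
    phi = np.exp(-dt / tau)                  # Gauss-Markov transition factor
    sigma2 = 4.0                             # steady-state variance (TECU^2)
    q = sigma2 * (1 - phi ** 2)              # matching discrete process noise

    x, P, R = 0.0, sigma2, 0.25              # TEC deviation from background,
                                             # its covariance, meas. noise
    for z in [1.2, 1.1, 1.6, 1.4]:           # TEC deviation measurements
        x, P = phi * x, phi ** 2 * P + q     # propagate previous estimate
        K = P / (P + R)                      # Kalman gain
        x, P = x + K * (z - x), (1 - K) * P  # measurement update
    print(x, P)
    ```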

  15. mBEEF-vdW: Robust fitting of error estimation density functionals

    NASA Astrophysics Data System (ADS)

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas

    2016-06-01

    We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.

  16. A regularized auxiliary particle filtering approach for system state estimation and battery life prediction

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wang, Wilson; Ma, Fai

    2011-07-01

    System current state estimation (or condition monitoring) and future state prediction (or failure prognostics) constitute the core elements of condition-based maintenance programs. For complex systems whose internal state variables are either inaccessible to sensors or hard to measure under normal operational conditions, inference has to be made from indirect measurements using approaches such as Bayesian learning. In recent years, the auxiliary particle filter (APF) has gained popularity in Bayesian state estimation; the APF technique, however, has some potential limitations in real-world applications. For example, the diversity of the particles may deteriorate when the process noise is small, and the variance of the importance weights could become extremely large when the likelihood varies dramatically over the prior. To tackle these problems, a regularized auxiliary particle filter (RAPF) is developed in this paper for system state estimation and forecasting. This RAPF aims to improve the performance of the APF through two innovative steps: (1) regularize the approximating empirical density and redraw samples from a continuous distribution so as to diversify the particles; and (2) smooth out the rather diffused proposals by a rejection/resampling approach so as to improve the robustness of particle filtering. The effectiveness of the proposed RAPF technique is evaluated through simulations of a nonlinear/non-Gaussian benchmark model for state estimation. It is also implemented for a real application in the remaining useful life (RUL) prediction of lithium-ion batteries.
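
    A minimal sketch of one common reading of the regularization step described above: after multinomial resampling, particles are jittered with a Gaussian kernel so that samples are effectively redrawn from a continuous density, preserving diversity. The bandwidth choice (Silverman's rule) is an assumption, not necessarily the paper's exact RAPF.

    ```python
    # Hedged sketch: regularized resampling for a 1-D particle filter.
    # Resample by weight, then add kernel noise to diversify particles.
    import numpy as np

    def regularized_resample(particles, weights, rng):
        n = particles.size
        idx = rng.choice(n, size=n, p=weights)        # multinomial resampling
        resampled = particles[idx]
        h = 1.06 * np.std(particles) * n ** (-1 / 5)  # Silverman bandwidth
        return resampled + rng.normal(0, h, n)        # Gaussian kernel jitter

    rng = np.random.default_rng(0)
    parts = rng.normal(0, 1, 500)                     # prior particle cloud
    w = np.exp(-0.5 * (parts - 0.8) ** 2)             # toy likelihood weights
    w /= w.sum()
    print(regularized_resample(parts, w, rng).mean())
    ```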

  17. Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems. Volume 2; Fan Suppression Model Development

    NASA Technical Reports Server (NTRS)

    Kontos, Karen B.; Kraft, Robert E.; Gliebe, Philip R.

    1996-01-01

    The Aircraft Noise Prediction Program (ANOPP) is an industry-wide tool used to predict turbofan engine flyover noise in system noise optimization studies. Its goal is to provide the best currently available methods for source noise prediction. As part of a program to improve the Heidmann fan noise model, models for fan inlet and fan exhaust noise suppression estimation have been developed that are based on simple engine and acoustic geometry inputs. The models can be used to predict sound power level suppression and sound pressure level suppression at a position specified relative to the engine inlet.

  18. Improved radial dose function estimation using current version MCNP Monte-Carlo simulation: Model 6711 and ISC3500 125I brachytherapy sources.

    PubMed

    Duggan, Dennis M

    2004-12-01

    Improved cross-sections in a new version of the Monte-Carlo N-particle (MCNP) code may eliminate discrepancies between radial dose functions (as defined by American Association of Physicists in Medicine Task Group 43) derived from Monte-Carlo simulations of low-energy photon-emitting brachytherapy sources and those from measurements on the same sources with thermoluminescent dosimeters. This is demonstrated for two 125I brachytherapy seed models, the Implant Sciences Model ISC3500 (I-Plant) and the Amersham Health Model 6711, by simulating their radial dose functions with two versions of MCNP, 4c2 and 5.
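
    For context, the radial dose function named in the title is the TG-43 quantity describing dose-rate fall-off along the source's transverse axis with geometry effects divided out; a standard statement of the line-source form, sketched here from the general formalism rather than from this paper, is:

    ```latex
    % TG-43 radial dose function (line-source approximation): dose rate on
    % the transverse axis, normalized at the reference point r0 = 1 cm,
    % with the geometry function G_L removing inverse-square-like effects.
    \[
      g_L(r) \;=\; \frac{\dot{D}(r,\theta_0)}{\dot{D}(r_0,\theta_0)}
                   \cdot
                   \frac{G_L(r_0,\theta_0)}{G_L(r,\theta_0)},
      \qquad r_0 = 1~\mathrm{cm}, \quad \theta_0 = \pi/2 .
    \]
    ```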

  19. Evaluation of the Waste Tire Resources Recovery Program and Environmental Health Policy in Taiwan

    PubMed Central

    Chen, Chia-Ching; Yamada, Tetsuji; Chiu, I-Ming; Liu, Yi-Kuen

    2009-01-01

    This paper examines the effectiveness of Taiwanese environmental health policies, whose aim is to improve environmental quality by reducing tire waste via the Tire Resource Recovery Program. The results confirm that the implemented environmental health policies improve the overall health of the population (i.e., a decrease in deaths caused by bronchitis and other respiratory diseases). Current policy expenditures are far below the optimal level, as it is estimated that a ten percent increase in the subsidy would decrease the number of deaths caused by bronchitis and other respiratory diseases by 0.58% per county/city per year on average. PMID:19440434

  20. Crowdsourcing-Assisted Radio Environment Database for V2V Communication.

    PubMed

    Katagiri, Keita; Sato, Koya; Fujii, Takeo

    2018-04-12

    In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation.
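
    A minimal sketch of the database construction described above: crowdsourced (location, RSSI) samples are binned into a spatial grid and averaged per cell, yielding an empirical received-power map. Cell size and data are illustrative, and averaging directly in dBm (rather than in linear power) is itself a simplifying modeling choice.

    ```python
    # Hedged sketch: bin measured RSSI samples by grid cell and average,
    # producing a measurement-based radio environment map. Data are toy.
    import numpy as np

    def build_rssi_map(xs, ys, rssi, cell=10.0):
        gx = (np.asarray(xs) // cell).astype(int)
        gy = (np.asarray(ys) // cell).astype(int)
        grid = {}
        for i, j, r in zip(gx, gy, rssi):
            grid.setdefault((i, j), []).append(r)     # collect per cell
        return {k: float(np.mean(v)) for k, v in grid.items()}  # dBm means

    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
    r = -60 - 0.3 * x + rng.normal(0, 2, 200)          # toy RSSI field (dBm)
    print(list(build_rssi_map(x, y, r).items())[:3])
    ```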
