Sample records for current estimates based

  1. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    PubMed

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user-selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64-slice multidetector-row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. In addition, to assess the accuracy of each method in estimating patient organ doses, Monte Carlo simulations were performed by creating voxelized models of each patient, identifying key organs and incorporating tube current values into the simulations to estimate dose to the lungs and breasts (females only) for chest scans and the liver, kidney, and spleen for abdomen/pelvis scans. Organ doses from simulations using the actual tube current values were compared to those using each of the estimated tube current values (actual-topo and sim-topo). When compared to the actual tube current values, the average error for tube current values estimated from the actual topogram (actual-topo) and simulated topogram (sim-topo) was 3.9% and 5.8%, respectively. For Monte Carlo simulations of chest CT exams using the actual tube current values and estimated tube current values (based on the actual-topo and sim-topo methods), the average differences for lung and breast doses ranged from 3.4% to 6.6%. For abdomen/pelvis exams, the average differences for liver, kidney, and spleen doses ranged from 4.2% to 5.3%. Strong agreement between organ doses estimated using actual and estimated tube current values provides validation of both methods for estimating tube current values based on data provided in the topogram or simulated from image data. © 2017 American Association of Physicists in Medicine.
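
    To make the mapping from patient attenuation to tube current concrete, the sketch below shows the two ingredients the abstract describes: a water-equivalent diameter computed from image data in the manner of AAPM Report 220, and a size-dependent tube-current mapping with system limits. The exponential form, the reference values and the mA limits are illustrative assumptions, not the manufacturer's actual AEC algorithm.

        import numpy as np

        def water_equivalent_diameter(ct_slice_hu, pixel_area_mm2):
            """Water-equivalent diameter of one axial slice (AAPM Report 220):
            Aw = sum(HU/1000 + 1) * pixel_area, Dw = 2*sqrt(Aw/pi)."""
            area_w = (ct_slice_hu / 1000.0 + 1.0).sum() * pixel_area_mm2  # mm^2
            return 2.0 * np.sqrt(area_w / np.pi)                          # mm

        def estimate_tube_current(dw_mm, ref_ma=200.0, dw_ref_mm=250.0,
                                  strength=0.8, ma_min=20.0, ma_max=500.0):
            """Illustrative AEC-like mapping: mA grows exponentially with
            water-equivalent diameter relative to a reference size, then is
            clamped to the x-ray system limits (all values assumed)."""
            ma = ref_ma * np.exp(strength * (dw_mm - dw_ref_mm) / 100.0)
            return float(np.clip(ma, ma_min, ma_max))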

  2. Multi-population Genomic Relationships for Estimating Current Genetic Variances Within and Genetic Correlations Between Populations.

    PubMed

    Wientjes, Yvonne C J; Bijma, Piter; Vandenplas, Jérémie; Calus, Mario P L

    2017-10-01

    Different methods are available to calculate multi-population genomic relationship matrices. Since those matrices differ in base population, it is anticipated that the method used to calculate genomic relationships affects the estimate of genetic variances, covariances, and correlations. The aim of this article is to define the multi-population genomic relationship matrix to estimate current genetic variances within and genetic correlations between populations. The genomic relationship matrix containing two populations consists of four blocks, one block for population 1, one block for population 2, and two blocks for relationships between the populations. It is known, based on literature, that by using current allele frequencies to calculate genomic relationships within a population, current genetic variances are estimated. In this article, we theoretically derived the properties of the genomic relationship matrix to estimate genetic correlations between populations and validated it using simulations. When the scaling factor of across-population genomic relationships is equal to the product of the square roots of the scaling factors for within-population genomic relationships, the genetic correlation is estimated unbiasedly even though estimated genetic variances do not necessarily refer to the current population. When this property is not met, the correlation based on estimated variances should be multiplied by a correction factor based on the scaling factors. In this study, we present a genomic relationship matrix which directly estimates current genetic variances as well as genetic correlations between populations. Copyright © 2017 by the Genetics Society of America.
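
    The key scaling property is compact enough to state in code. This sketch (not the authors' implementation) assembles a two-population genomic relationship matrix from 0/1/2 genotype matrices, centring with each population's current allele frequencies and scaling the across-population block by the square root of the product of the within-population scaling factors, which is the condition under which the genetic correlation is estimated unbiasedly.

        import numpy as np

        def multipop_grm(M1, M2, p1, p2):
            """M1, M2: genotype matrices (individuals x SNPs, coded 0/1/2);
            p1, p2: current allele frequencies per SNP in each population."""
            Z1 = M1 - 2.0 * p1                    # population-specific centring
            Z2 = M2 - 2.0 * p2
            s1 = np.sum(2.0 * p1 * (1.0 - p1))    # within-population scaling
            s2 = np.sum(2.0 * p2 * (1.0 - p2))
            G11 = Z1 @ Z1.T / s1
            G22 = Z2 @ Z2.T / s2
            G12 = Z1 @ Z2.T / np.sqrt(s1 * s2)    # across-population block
            return np.block([[G11, G12], [G12.T, G22]])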

  3. A Method to Simultaneously Detect the Current Sensor Fault and Estimate the State of Energy for Batteries in Electric Vehicles

    PubMed Central

    Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang

    2016-01-01

    Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical to SOE estimation, and a current sensor is usually used to obtain it; if the current sensor fails, the SOE estimate may suffer large errors. This paper therefore realizes current sensor fault detection and SOE estimation simultaneously. Using a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. Taking advantage of this accurate fault estimate, the influence of the sensor fault can be compensated and eliminated, so the SOE estimate is influenced little by the fault. A simulation and experimental workbench is established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately and that, simultaneously, the SOE can also be estimated accurately, with an estimation error influenced little by the fault. The maximum SOE estimation error is less than 2%, even when the large current error caused by the sensor fault persists. PMID:27548183
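
    The PIO structure the abstract refers to can be sketched in a few lines: the current-sensor fault is modelled as an additive bias on the measurement, a proportional term corrects the state estimate, and an integral term tracks the fault. All matrices and gains below are illustrative assumptions, not the authors' design.

        import numpy as np

        def pio_step(A, B, C, Kp, Ki, x_hat, f_hat, u, y):
            """One step of a discrete-time proportional integral observer.
            Model: x[k+1] = A x[k] + B u[k],  y[k] = C x[k] + f[k],
            where f is the (slowly varying) current-sensor fault."""
            e = y - (C @ x_hat + f_hat)          # innovation
            x_hat = A @ x_hat + B @ u + Kp @ e   # proportional correction
            f_hat = f_hat + Ki @ e               # integral action tracks fault
            return x_hat, f_hat

        # The fault estimate then compensates the measurement before the SOE
        # recursion, e.g. i_corrected = i_measured - f_hat, and (illustratively)
        # SOE[k] = SOE[k-1] - i_corrected * v * dt / E_total.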

  4. A Method to Simultaneously Detect the Current Sensor Fault and Estimate the State of Energy for Batteries in Electric Vehicles.

    PubMed

    Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang

    2016-08-19

    Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical to SOE estimation, and a current sensor is usually used to obtain it; if the current sensor fails, the SOE estimate may suffer large errors. This paper therefore realizes current sensor fault detection and SOE estimation simultaneously. Using a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. Taking advantage of this accurate fault estimate, the influence of the sensor fault can be compensated and eliminated, so the SOE estimate is influenced little by the fault. A simulation and experimental workbench is established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately and that, simultaneously, the SOE can also be estimated accurately, with an estimation error influenced little by the fault. The maximum SOE estimation error is less than 2%, even when the large current error caused by the sensor fault persists.

  5. Estimating pixel variances in the scenes of staring sensors

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM]; Ma, Tian J [Albuquerque, NM]

    2012-01-24

    A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
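
    A short sketch of the patented idea: difference pixels are compared against per-pixel error estimates that combine temporal noise with a jitter term coupled to the local spatial gradient, so that strong edges do not register as changes under small camera motion. The noise level, jitter amplitude and threshold below are assumed values.

        import numpy as np

        def change_mask(reference, current, jitter_px=0.5, noise_sigma=2.0, k=4.0):
            diff = current.astype(float) - reference.astype(float)
            gy, gx = np.gradient(reference.astype(float))
            grad_mag = np.hypot(gx, gy)
            # per-pixel error estimate: temporal noise plus gradient-coupled jitter
            sigma = np.sqrt(noise_sigma**2 + (jitter_px * grad_mag) ** 2)
            return np.abs(diff) > k * sigma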

  6. Online Detection of Broken Rotor Bar Fault in Induction Motors by Combining Estimation of Signal Parameters via Min-norm Algorithm and Least Square Method

    NASA Astrophysics Data System (ADS)

    Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin

    2017-11-01

    Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on high-frequency-resolution analysis of the stator current. Compared with a discrete Fourier transformation, the parametric spectrum estimation technique has a higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to the large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least-squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem solved through singular value decomposition is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and to an actual motor; the results indicate that it retains the frequency accuracy of a parametric spectrum estimation technique while keeping the computational cost low enough for online detection.
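
    The second stage of the pipeline is a small linear algebra problem once the min-norm stage has supplied frequencies. A sketch (illustrative, not the authors' code): build a cosine/sine design matrix at the estimated frequencies and solve for amplitudes and phases by least squares; numpy's lstsq uses an SVD-based solver, matching the abstract's description.

        import numpy as np

        def fit_amplitudes_phases(x, fs, freqs):
            """x: stator-current window; fs: sampling rate (Hz);
            freqs: frequencies (Hz) from the min-norm stage."""
            t = np.arange(len(x)) / fs
            H = np.hstack([np.column_stack((np.cos(2 * np.pi * f * t),
                                            np.sin(2 * np.pi * f * t)))
                           for f in freqs])
            coef, *_ = np.linalg.lstsq(H, x, rcond=None)
            a, b = coef[0::2], coef[1::2]
            amp = np.hypot(a, b)
            phase = np.arctan2(-b, a)   # x ~ sum amp*cos(2*pi*f*t + phase)
            return amp, phase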

  7. B-2 Extremely High Frequency SATCOM and Computer Increment 1 (B-2 EHF Inc 1)

    DTIC Science & Technology

    2015-12-01

    Confidence Level of cost estimate for current APB: 55%. This APB reflects cost and funding data based on the B-2 EHF Increment I SCP. This cost estimate was quantified at the mean (~55%) confidence level. [Remainder of record: an extraction-garbled Selected Acquisition Report cost-variance table. Recoverable figures, SAR baseline to current APB production estimate: baseline 33.624; variances Econ -0.350, Qty 1.381, Sch 0.375, Eng 0.000, Est -6.075, Oth 0.000, Spt -0.620; total change -5.289; current estimate 28.335.]

  8. Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements

    NASA Astrophysics Data System (ADS)

    Jakub, Thomas D.

    Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. Several of the shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing and the limited resolution based on the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS). AIS enables more precise ocean current estimates, the option of finer resolution and more timely estimates. In this work, a demonstration of the use of AIS to compute ocean currents is performed. A corresponding error and sensitivity analysis is performed to help identify under which conditions errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal to noise environment. These ship tracks were only minutes long compared to the normally 12 to 24 hour ship tracks. The Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements when combined with ship drift can provide another method of estimating ocean currents, particularly when other measurements techniques are not available.
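
    The core ship-drift computation is simple: the current is the residual between the over-ground velocity from successive AIS position fixes and the dead-reckoned velocity through the water. A minimal sketch, assuming positions already projected to local east/north metres:

        import numpy as np

        def drift_current(p0, p1, dt_s, heading_deg, stw_ms):
            """p0, p1: ship positions (east, north) in metres at the start and
            end of the interval; stw_ms: speed through water (m/s)."""
            h = np.deg2rad(heading_deg)
            v_ship = stw_ms * np.array([np.sin(h), np.cos(h)])    # east, north
            v_ground = (np.asarray(p1) - np.asarray(p0)) / dt_s   # from AIS fixes
            return v_ground - v_ship   # current components (m/s)

    Shorter fix intervals (minutes rather than hours) raise the resolution but also amplify position and heading errors, which is why the error analysis distinguishes high and low signal-to-noise environments.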

  9. Estimated generic prices for novel treatments for drug-resistant tuberculosis.

    PubMed

    Gotham, Dzintars; Fortunak, Joseph; Pozniak, Anton; Khoo, Saye; Cooke, Graham; Nytko, Frederick E; Hill, Andrew

    2017-04-01

    The estimated worldwide annual incidence of MDR-TB is 480 000, representing 5% of TB incidence, but 20% of mortality. Multiple drugs have recently been developed or repurposed for the treatment of MDR-TB. Currently, treatment for MDR-TB costs thousands of dollars per course. The aim was to estimate generic prices for novel TB drugs that would be achievable given large-scale competitive manufacture. Prices for linezolid, moxifloxacin and clofazimine were estimated based on per-kilogram prices of the active pharmaceutical ingredient (API). Other costs were added, including formulation, packaging and a profit margin. The projected costs for sutezolid were estimated to be equivalent to those for linezolid, based on chemical similarity. Generic prices for bedaquiline, delamanid and pretomanid were estimated by assessing routes of synthesis, costs/kg of chemical reagents and per-step yields. Costing algorithms reflected variable regulatory requirements and efficiency of scale based on demand, and were validated by testing predictive ability against widely available TB medicines. Estimated generic prices were US$8-$17/month for bedaquiline, $5-$16/month for delamanid, $11-$34/month for pretomanid, $4-$9/month for linezolid, $4-$9/month for sutezolid, $4-$11/month for clofazimine and $4-$8/month for moxifloxacin. The estimated generic prices were 87%-94% lower than the current lowest available prices for bedaquiline, 95%-98% for delamanid and 94%-97% for linezolid. Estimated generic prices were $168-$395 per course for the STREAM trial modified Bangladesh regimens (current costs $734-$1799), $53-$276 for pretomanid-based three-drug regimens and $238-$507 for a delamanid-based four-drug regimen. Competitive large-scale generic manufacture could allow supplies of treatment for 5-10 times more MDR-TB cases within current procurement budgets. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
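
    The structure of the costing algorithm reduces to simple arithmetic: API cost scaled by the dose, plus allowances for formulation, packaging and profit. A sketch with illustrative factor values (the paper's calibrated algorithms also handled regulatory costs and economies of scale):

        def generic_price_per_month(api_cost_per_kg, daily_dose_mg,
                                    formulation_factor=1.10, margin=1.25,
                                    days=30):
            api_kg = daily_dose_mg * days / 1e6     # mg over a month -> kg
            return api_cost_per_kg * api_kg * formulation_factor * margin

        # e.g. a hypothetical $600/kg API at 600 mg/day:
        # generic_price_per_month(600, 600) -> about $14.9/month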

  10. Estimating cropland NPP using national crop inventory and MODIS derived crop specific parameters

    NASA Astrophysics Data System (ADS)

    Bandaru, V.; West, T. O.; Ricciuto, D. M.

    2011-12-01

    Estimates of cropland net primary production (NPP) are needed as input for estimates of carbon flux and carbon stock changes. Cropland NPP is currently estimated using terrestrial ecosystem models, satellite remote sensing, or inventory data. All three of these methods have benefits and problems. Terrestrial ecosystem models are often better suited for prognostic estimates rather than diagnostic estimates. Satellite-based NPP estimates often underestimate productivity on intensely managed croplands and are also limited to a few broad crop categories. Inventory-based estimates are consistent with nationally collected data on crop yields, but they lack sub-county spatial resolution. Integrating these methods will allow for spatial resolution consistent with current land cover and land use, while also maintaining total biomass quantities recorded in national inventory data. The main objective of this study was to improve cropland NPP estimates by using a modification of the CASA NPP model with individual crop biophysical parameters partly derived from inventory data and the MODIS 8-day 250 m EVI product. The study was conducted for corn and soybean crops in Iowa and Illinois for the years 2006 and 2007. We modeled fPAR as a linear function of EVI, and used crop land cover data (56 m spatial resolution) to extract individual crop EVI pixels. First, we separated mixed pixels of corn and soybean that occur when a MODIS 250 m pixel contains more than one crop. Second, we substituted mixed EVI pixels with the nearest pure-pixel values of the same crop within a 1 km radius. To obtain more accurate photosynthetically active radiation (PAR), we applied the Mountain Climate Simulator (MTCLIM) algorithm with temperature and precipitation data from the North American Land Data Assimilation System (NLDAS-2) to generate shortwave radiation data. Finally, county-specific light use efficiency (LUE) values of each crop for the years 2006 to 2007 were determined by applying mean county inventory NPP and EVI-derived APAR to the Monteith equation. Results indicate spatial variability in LUE values across Iowa and Illinois. Northern regions of both Iowa and Illinois have higher LUE values than southern regions. This trend is reflected in the NPP estimates. Results also show that corn has higher LUE values than soybean, resulting in higher NPP for corn than for soybean. Current NPP estimates were compared with NPP estimates from the MOD17A3 product and with county inventory-based NPP estimates. Results indicate that current NPP estimates closely agree with inventory-based estimates, and that current NPP estimates are higher than those of the MOD17A3 product. It was also found that when mixed pixels were substituted with the nearest pure pixels, revised NPP estimates showed better agreement with inventory-based estimates.
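
    The Monteith light-use-efficiency model at the heart of the method is used in both directions: forward to map absorbed radiation to NPP, and inverted to recover county LUE from inventory NPP. A minimal sketch, assuming fPAR is taken simply as EVI (the actual calibration is a fitted linear function):

        import numpy as np

        def npp_monteith(evi, par, lue):
            """Monteith equation: NPP = LUE * fPAR * PAR, fPAR ~ EVI (assumed)."""
            fpar = np.clip(evi, 0.0, 1.0)
            return lue * fpar * par

        def county_lue(inventory_npp, evi, par):
            """Invert the same equation: LUE = inventory NPP / APAR, with
            APAR summed over the season, as done per county and crop."""
            apar = np.sum(np.clip(evi, 0.0, 1.0) * par)
            return inventory_npp / apar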

  11. Sliding mode observers for automotive alternator

    NASA Astrophysics Data System (ADS)

    Chen, De-Shiou

    Estimator development for synchronous rectification of the automotive alternator is a desirable approach for estimating the alternator's back electromotive forces (EMFs) without a direct mechanical sensor of the rotor position. Recent theoretical studies show that the back EMF may be observed from the system's phase current model by sensing electrical variables (AC phase currents and DC bus voltage) of the synchronous rectifier. Observer designs for back EMF estimation have been developed for constant engine speed. In this work, we are interested in nonlinear observer design of the back EMF estimation for the realistic case of variable engine speed. An initial back EMF estimate can be obtained from a first-order sliding mode observer (SMO) based on the phase current model. A fourth-order nonlinear asymptotic observer (NAO), complemented by the dynamics of the back EMF with time-varying frequency and amplitude, is then incorporated into the observer design for chattering reduction. Since the cost of the required phase current sensors may be prohibitive, the most practical approach for real implementations, measuring the DC current of the synchronous rectifier, is carried out in the dissertation. It is shown that the DC link current consists of sequential "windows" carrying partial information about the phase currents; hence, the cascaded NAO is responsible not only for chattering reduction but also for completing the estimation process. Stability analyses of the proposed estimators are considered for most linear and time-varying cases. The stability of the NAO without speed information is substantiated by both numerical and experimental results. Prospective estimation algorithms for the case of battery current measurements are investigated. Theoretical study indicates that convergence of the proposed LAO may be ensured by high-gain inputs. Since the order of the LAO/NAO for the battery current case is one higher than for link current measurements, it is hard to find moderate values of the input gains for real-time sampled-data systems. Technical difficulties in implementing such high-order discrete-time nonlinear estimators are discussed, and directions for further investigation are provided.

  12. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameter automatically at different aging stages, a multi-step model parameter identification method based on the lexicographic optimization is especially designed for the electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current as well as the average squared current is modeled. The SOE with different operating conditions and different aging stages are estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for the electric vehicle online applications.
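
    The load-dependence of available energy described above lends itself to a compact regression form. A sketch of that idea (the coefficients k1 and k2 would be fitted per aging stage; the functional form here is an assumption for illustration, not the paper's exact model):

        def remaining_energy(soc, i_avg, i_sq_avg, e_nominal, k1, k2):
            """Remaining available energy: nominal energy at this state of
            charge minus losses growing with the average current and with
            the average squared current (ohmic heating)."""
            return soc * e_nominal - k1 * i_avg - k2 * i_sq_avg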

  13. Fast and accurate spectral estimation for online detection of partial broken bar in induction motors

    NASA Astrophysics Data System (ADS)

    Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti

    2018-01-01

    In this paper, an online and real-time system is presented for detecting partial broken rotor bar (BRB) of inverter-fed squirrel cage induction motors under light load condition. This system with minor modifications can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended-Kalman based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and cost of sensors are minimal as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7 based embedded target ported through the Simulink Real-Time. Evaluation of threshold and detectability of faults with different conditions of load and fault severity are carried out with empirical cumulative distribution function.

  14. Possibilities for Estimating Horizontal Electrical Currents in Active Regions on the Sun

    NASA Astrophysics Data System (ADS)

    Fursyak, Yu. A.; Abramenko, V. I.

    2017-12-01

    Part of the "free" magnetic energy associated with electrical current systems in the active region (AR) is released during solar flares. This proposition is widely accepted and it has stimulated interest in detecting electrical currents in active regions. The vertical component of an electric current in the photosphere can be found by observing the transverse magnetic field. At present, however, there are no direct methods for calculating transverse electric currents based on these observations. These calculations require information on the field vector measured simultaneously at several levels in the photosphere, which has not yet been done with solar instrumentation. In this paper we examine an approach to calculating the structure of the square of the density of a transverse electrical current based on a magnetogram of the vertical component of the magnetic field in the AR. Data obtained with the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) for NOAA AR 11283 are used. It is shown that (1) the observed variations in the magnetic field of a sunspot and the proposed estimate of the density of an annular horizontal current around the spot are consistent with Faraday's law and (2) the resulting estimates of the magnitude of the square of the density of the horizontal current, j⊥² = (0.002-0.004) A²/m⁴, are consistent with previously obtained values of the density of a vertical current in the photosphere. Thus, the proposed estimate is physically significant and this method can be used to estimate the density and structure of transverse electrical currents in the photosphere.

  15. What is the lifetime risk of developing cancer?: the effect of adjusting for multiple primaries

    PubMed Central

    Sasieni, P D; Shelton, J; Ormiston-Smith, N; Thomson, C S; Silcocks, P B

    2011-01-01

    Background: The 'lifetime risk' of cancer is generally estimated by combining current incidence rates with current all-cause mortality ('current probability' method) rather than by describing the experience of a birth cohort. As individuals may get more than one type of cancer, what is generally estimated is the average (mean) number of cancers over a lifetime. This is not the same as the probability of getting cancer. Methods: We describe a method for estimating lifetime risk that corrects for the inclusion of multiple primary cancers in the incidence rates routinely published by cancer registries. The new method applies cancer incidence rates to the estimated probability of being alive without a previous cancer. The new method is illustrated using data from the Scottish Cancer Registry and is compared with 'gold-standard' estimates that use (unpublished) data on first primaries. Results: The effect of this correction is to make the estimated 'lifetime risk' smaller. The new estimates are extremely similar to those obtained using incidence based on first primaries. The usual 'current probability' method considerably overestimates the lifetime risk of all cancers combined, although the correction for any single cancer site is minimal. Conclusion: Estimation of the lifetime risk of cancer should either be based on first primaries or should use the new method. PMID:21772332
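
    The correction is easy to state as a recursion over age bands: incidence is applied to the probability of being alive without a previous cancer, rather than to all survivors. A minimal sketch with five-year bands (rates are per person-year; a discrete approximation, not the authors' exact estimator):

        def lifetime_risk(incidence, mortality, width=5.0):
            alive_cancer_free, risk = 1.0, 0.0
            for inc, mort in zip(incidence, mortality):
                risk += alive_cancer_free * inc * width
                # leave the at-risk pool by death or by a first cancer
                alive_cancer_free *= max(0.0, 1.0 - (mort + inc) * width)
            return risk

    Dropping the inc term from the survival update recovers the usual 'current probability' method, which is why that method overestimates the risk of all cancers combined.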

  16. Software risk estimation and management techniques at JPL

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Lum, K.

    2002-01-01

    In this talk we will discuss how uncertainty has been incorporated into the JPL software cost model through probabilistic-based estimates, and how cost risk is currently being explored via a variety of approaches, from traditional risk lists to detailed WBS-based risk estimates to the Defect Detection and Prevention (DDP) tool.

  17. A Framework of Combining Case-Based Reasoning with a Work Breakdown Structure for Estimating the Cost of Online Course Production Projects

    ERIC Educational Resources Information Center

    He, Wu

    2014-01-01

    Currently, a work breakdown structure (WBS) approach is used as the most common cost estimation approach for online course production projects. To improve the practice of cost estimation, this paper proposes a novel framework to estimate the cost for online course production projects using a case-based reasoning (CBR) technique and a WBS. A…

  18. Medical costs and quality-adjusted life years associated with smoking: a systematic review.

    PubMed

    Feirman, Shari P; Glasser, Allison M; Teplitskaya, Lyubov; Holtgrave, David R; Abrams, David B; Niaura, Raymond S; Villanti, Andrea C

    2016-07-27

    Estimated medical costs ("T") and QALYs ("Q") associated with smoking are frequently used in cost-utility analyses of tobacco control interventions. The goal of this study was to understand how researchers have addressed the methodological challenges involved in estimating these parameters. Data were collected as part of a systematic review of tobacco modeling studies. We searched five electronic databases on July 1, 2013 with no date restrictions and synthesized studies qualitatively. Studies were eligible for the current analysis if they were U.S.-based, provided an estimate for Q, and used a societal perspective and lifetime analytic horizon to estimate T. We identified common methods and frequently cited sources used to obtain these estimates. Across all 18 studies included in this review, 50% cited a 1992 source to estimate the medical costs associated with smoking and 56% cited a 1996 study to derive the estimate for QALYs saved by quitting or preventing smoking. Approaches for estimating T varied dramatically among the studies included in this review. T was valued as a positive number, a negative number and $0; five studies did not include estimates for T in their analyses. The most commonly cited source for Q based its estimate on the Health Utilities Index (HUI). Several papers also cited sources that based their estimates for Q on the Quality of Well-Being Scale and the EuroQol five dimensions questionnaire (EQ-5D). Current estimates of the lifetime medical care costs and the QALYs associated with smoking are dated and do not reflect the latest evidence on the health effects of smoking, nor the current costs and benefits of smoking cessation and prevention. Given these limitations, we recommend that researchers conducting economic evaluations of tobacco control interventions perform extensive sensitivity analyses around these parameter estimates.

  19. Statistical properties of alternative national forest inventory area estimators

    Treesearch

    Francis Roesch; John Coulston; Andrew D. Hill

    2012-01-01

    The statistical properties of potential estimators of forest area for the USDA Forest Service's Forest Inventory and Analysis (FIA) program are presented and discussed. The current FIA area estimator is compared and contrasted with a weighted mean estimator and an estimator based on the Polya posterior, in the presence of nonresponse. Estimator optimality is...

  20. Dynamic estimator for determining operating conditions in an internal combustion engine

    DOEpatents

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-01-05

    Methods and systems are provided for estimating engine performance information for a combustion cycle of an internal combustion engine. Estimated performance information for a previous combustion cycle is retrieved from memory. The estimated performance information includes an estimated value of at least one engine performance variable. Actuator settings applied to engine actuators are also received. The performance information for the current combustion cycle is then estimated based, at least in part, on the estimated performance information for the previous combustion cycle and the actuator settings applied during the previous combustion cycle. The estimated performance information for the current combustion cycle is then stored to the memory to be used in estimating performance information for a subsequent combustion cycle.

  1. A global view of shifting cultivation: Recent, current, and future extent

    PubMed Central

    Mertz, Ole; Frolking, Steve; Egelund Christensen, Andreas; Hurni, Kaspar; Sedano, Fernando; Parsons Chini, Louise; Sahajpal, Ritvik; Hansen, Matthew; Hurtt, George

    2017-01-01

    Mosaic landscapes under shifting cultivation, with their dynamic mix of managed and natural land covers, often fall through the cracks in remote sensing–based land cover and land use classifications, as these are unable to adequately capture such landscapes’ dynamic nature and complex spectral and spatial signatures. But information about such landscapes is urgently needed to improve the outcomes of global earth system modelling and large-scale carbon and greenhouse gas accounting. This study combines existing global Landsat-based deforestation data covering the years 2000 to 2014 with very high-resolution satellite imagery to visually detect the specific spatio-temporal pattern of shifting cultivation at a one-degree cell resolution worldwide. The accuracy levels of our classification were high with an overall accuracy above 87%. We estimate the current global extent of shifting cultivation and compare it to other current global mapping endeavors as well as results of literature searches. Based on an expert survey, we make a first attempt at estimating past trends as well as possible future trends in the global distribution of shifting cultivation until the end of the 21st century. With 62% of the investigated one-degree cells in the humid and sub-humid tropics currently showing signs of shifting cultivation—the majority in the Americas (41%) and Africa (37%)—this form of cultivation remains widespread, and it would be wrong to speak of its general global demise in the last decades. We estimate that shifting cultivation landscapes currently cover roughly 280 million hectares worldwide, including both cultivated fields and fallows. While only an approximation, this estimate is clearly smaller than the areas mentioned in the literature which range up to 1,000 million hectares. Based on our expert survey and historical trends we estimate a possible strong decrease in shifting cultivation over the next decades, raising issues of livelihood security and resilience among people currently depending on shifting cultivation. PMID:28886132

  2. A global view of shifting cultivation: Recent, current, and future extent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinimann, Andreas; Mertz, Ole; Frolking, Steve

    Mosaic landscapes under shifting cultivation, with their dynamic mix of managed and natural land covers, often fall through the cracks in remote sensing-based land cover and land use classifications, as these are unable to adequately capture such landscapes' dynamic nature and complex spectral and spatial signatures. But information about such landscapes is urgently needed to improve the outcomes of global earth system modelling and large-scale carbon and greenhouse gas accounting. This study combines existing global Landsat-based deforestation data covering the years 2000 to 2014 with very high-resolution satellite imagery to visually detect the specific spatio-temporal pattern of shifting cultivation at a one-degree cell resolution worldwide. The accuracy levels of our classification were high with an overall accuracy above 87%. We estimate the current global extent of shifting cultivation and compare it to other current global mapping endeavors as well as results of literature searches. Based on an expert survey, we make a first attempt at estimating past trends as well as possible future trends in the global distribution of shifting cultivation until the end of the 21st century. With 62% of the investigated one-degree cells in the humid and sub-humid tropics currently showing signs of shifting cultivation, the majority in the Americas (41%) and Africa (37%), this form of cultivation remains widespread, and it would be wrong to speak of its general global demise in the last decades. We estimate that shifting cultivation landscapes currently cover roughly 280 million hectares worldwide, including both cultivated fields and fallows. While only an approximation, this estimate is clearly smaller than the areas mentioned in the literature which range up to 1,000 million hectares. Based on our expert survey and historical trends we estimate a possible strong decrease in shifting cultivation over the next decades, raising issues of livelihood security and resilience among people currently depending on shifting cultivation.

  3. A global view of shifting cultivation: Recent, current, and future extent

    DOE PAGES

    Heinimann, Andreas; Mertz, Ole; Frolking, Steve; ...

    2017-09-08

    Mosaic landscapes under shifting cultivation, with their dynamic mix of managed and natural land covers, often fall through the cracks in remote sensing-based land cover and land use classifications, as these are unable to adequately capture such landscapes' dynamic nature and complex spectral and spatial signatures. But information about such landscapes is urgently needed to improve the outcomes of global earth system modelling and large-scale carbon and greenhouse gas accounting. This study combines existing global Landsat-based deforestation data covering the years 2000 to 2014 with very high-resolution satellite imagery to visually detect the specific spatio-temporal pattern of shifting cultivation at a one-degree cell resolution worldwide. The accuracy levels of our classification were high with an overall accuracy above 87%. We estimate the current global extent of shifting cultivation and compare it to other current global mapping endeavors as well as results of literature searches. Based on an expert survey, we make a first attempt at estimating past trends as well as possible future trends in the global distribution of shifting cultivation until the end of the 21st century. With 62% of the investigated one-degree cells in the humid and sub-humid tropics currently showing signs of shifting cultivation, the majority in the Americas (41%) and Africa (37%), this form of cultivation remains widespread, and it would be wrong to speak of its general global demise in the last decades. We estimate that shifting cultivation landscapes currently cover roughly 280 million hectares worldwide, including both cultivated fields and fallows. While only an approximation, this estimate is clearly smaller than the areas mentioned in the literature which range up to 1,000 million hectares. Based on our expert survey and historical trends we estimate a possible strong decrease in shifting cultivation over the next decades, raising issues of livelihood security and resilience among people currently depending on shifting cultivation.

  4. Using FIESTA, an R-based tool for analysts, to look at temporal trends in forest estimates

    Treesearch

    Tracey S. Frescino; Paul L. Patterson; Elizabeth A. Freeman; Gretchen G. Moisen

    2012-01-01

    FIESTA (Forest Inventory Estimation for Analysis) is a user-friendly R package that supports the production of estimates for forest resources based on procedures from Bechtold and Patterson (2005). The package produces output consistent with current tools available for the Forest Inventory and Analysis National Program, such as FIDO (Forest Inventory Data Online) and...

  5. Radio Science from an Optical Communications Signal

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Asmar, Sami; Oudrhiri, Kamal

    2013-01-01

    NASA is currently developing the capability to deploy deep space optical communications links. This creates the opportunity to utilize the optical link to obtain range, Doppler, and signal intensity estimates. These may, in turn, be used to complement or extend the capabilities of current radio science. In this paper we illustrate the achievable precision in estimating range, Doppler, and received signal intensity of a non-coherent optical link (the current state of the art for a deep-space link). We provide a joint estimation algorithm with performance close to the bound. We draw comparisons to estimates based on a coherent radio frequency signal, illustrating that large gains in either precision or observation time are possible with an optical link.

  6. 3-D direct current resistivity anisotropic modelling by goal-oriented adaptive finite element methods

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Qiu, Lewen; Tang, Jingtian; Wu, Xiaoping; Xiao, Xiao; Zhou, Zilong

    2018-01-01

    Although accurate numerical solvers for 3-D direct current (DC) isotropic resistivity models are currently available even for complicated models with topography, reliable numerical solvers for the anisotropic case are still an open question. This study aims to develop a novel and optimal numerical solver for accurately calculating the DC potentials for complicated models with arbitrary anisotropic conductivity structures in the Earth. First, a secondary potential boundary value problem is derived by considering the topography and the anisotropic conductivity. Then, two a posteriori error estimators, one using the gradient-recovery technique and one measuring the discontinuity of the normal component of current density, are developed for the anisotropic case. Combining goal-oriented and non-goal-oriented mesh refinement with these two error estimators, four different solving strategies are developed for complicated DC anisotropic forward modelling problems. A synthetic anisotropic two-layer model with analytic solutions verified the accuracy of our algorithms. A half-space model with a buried anisotropic cube and a mountain-valley model are adopted to test the convergence rates of these four solving strategies. We found that the error estimator based on the discontinuity of current density shows better performance than the gradient-recovery-based a posteriori error estimator for anisotropic models with conductivity contrasts. Both error estimators working together with goal-oriented concepts can offer optimal mesh density distributions and highly accurate solutions.

  7. Atlanta congestion reduction demonstration. National evaluation: travel demand management (TDM) data test plan.

    DOT National Transportation Integrated Search

    2001-01-01

    Internet-based Advanced Traveler Information Services (ATIS) provide the urban traveler with estimated travel times based on current roadway congestion. Survey research indicates that the vast majority of current ATIS users are satisfied consumers wh...

  8. Alternating steady state free precession for estimation of current-induced magnetic flux density: A feasibility study.

    PubMed

    Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok

    2016-05-01

    To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding a nonlinear relation between signal phase and Bz. A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis of the signal-to-noise ratio of Bz was given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method in conductivity estimation. Given all SSFP variants herein, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.

  9. Modeling Speed-Accuracy Tradeoff in Adaptive System for Practicing Estimation

    ERIC Educational Resources Information Center

    Nižnan, Juraj

    2015-01-01

    Estimation is useful in situations where an exact answer is not as important as a quick answer that is good enough. A web-based adaptive system for practicing estimates is currently being developed. We propose a simple model for estimating student's latent skill of estimation. This model combines a continuous measure of correctness and response…

  10. Depth-averaged instantaneous currents in a tidally dominated shelf sea from glider observations

    NASA Astrophysics Data System (ADS)

    Merckelbach, Lucas

    2016-12-01

    Ocean gliders have become ubiquitous observation platforms in the ocean in recent years. They are also increasingly used in coastal environments. The coastal observatory system COSYNA has pioneered the use of gliders in the North Sea, a shallow, tidally energetic shelf sea. For operational reasons, the gliders operated in the North Sea are programmed to resurface every 3-5 h. The glider's dead-reckoning algorithm yields depth-averaged currents, averaged in time over each subsurface interval. Under operational conditions these averaged currents are a poor approximation of the instantaneous tidal current. In this work an algorithm is developed that estimates the instantaneous current (tidal and residual) from glider observations only. The algorithm uses a first-order Butterworth low-pass filter to estimate the residual current component, and a Kalman filter based on the linear shallow water equations for the tidal component. A comparison of data from a glider experiment with current data from an acoustic Doppler current profiler deployed nearby shows that the standard deviations for the east and north current components are better than 7 cm s⁻¹ in near-real-time mode and improve to better than 6 cm s⁻¹ in delayed mode, where the filters can be run forward and backward. In the near-real-time mode the algorithm provides estimates of the currents that the glider is expected to encounter during its next few dives. Combined with a behavioural and dynamic model of the glider, this yields predicted trajectories, the information of which is incorporated in warning messages issued to ships by the (German) authorities. In delayed mode the algorithm produces useful estimates of the depth-averaged currents, which can be used in (process-based) analyses in case no other source of measured current information is available.
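
    The first stage of the algorithm is a standard filtering step. A minimal sketch, assuming a regularly sampled depth-averaged velocity series; the ~30 h cutoff is an assumed value, and filtfilt gives the forward-backward (delayed-mode) variant:

        import numpy as np
        from scipy.signal import butter, filtfilt

        def split_residual_tidal(u, fs_hz, cutoff_hz=1.0 / (30 * 3600)):
            """Return (residual, tidal) components of the current series u:
            a first-order Butterworth low-pass keeps the sub-tidal residual;
            the remainder is what the shallow-water Kalman filter models."""
            b, a = butter(1, cutoff_hz / (fs_hz / 2.0))
            residual = filtfilt(b, a, u)
            return residual, u - residual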

  11. River runoff estimates based on remotely sensed surface velocities

    NASA Astrophysics Data System (ADS)

    Grünler, Steffen; Stammer, Detlef; Romeiser, Roland

    2010-05-01

    One promising technique for river runoff estimation from space is the retrieval of surface currents on the basis of synthetic aperture radar along-track interferometry (ATI). The German satellite TerraSAR-X, which was launched in June 2007, will permit ATI measurements in an experimental mode. Based on numerical simulations, we present findings of a research project in which the potential of satellite measurements of various parameters with different temporal and spatial sampling characteristics is evaluated. A sampling strategy for river runoff estimates is developed. We address the achievable accuracy and limitations of such estimates for different local flow conditions at a selected test site. High-resolution three-dimensional current fields in the Elbe river (Germany) from a numerical model are used as the reference data set and as input for simulations of a variety of possible measuring and data interpretation strategies to be evaluated. To address the problem of aliasing, we removed tidal signals from the sampling data. Discharge estimates on the basis of measured surface current fields and river widths from TerraSAR-X are successfully simulated. The resulting net discharge estimates differ by 30-55% for a required continuous observation period of one year. We discuss the applicability of the measuring strategies to a number of major rivers. Further, we show results of runoff estimates based on surface current fields retrieved from real TerraSAR-X ATI data (AS mode) for the Elbe river study area.

  12. Angular velocity estimation based on star vector with improved current statistical model Kalman filter.

    PubMed

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, He

    2016-11-20

    Angular velocity information is a requisite for a spacecraft guidance, navigation, and control system. In this paper, an approach for angular velocity estimation based merely on star vector measurement with an improved current statistical model Kalman filter is proposed. High-precision angular velocity estimation can be achieved under dynamic conditions. The amount of calculation is also reduced compared to a Kalman filter. Different trajectories are simulated to test this approach, and experiments with real starry sky observation are implemented for further confirmation. The estimation accuracy is proved to be better than 10⁻⁴ rad/s under various conditions. Both the simulation and the experiment demonstrate that the described approach is effective and shows an excellent performance under both static and dynamic conditions.
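
    The geometry behind star-vector-only rate estimation: in the body frame an inertially fixed star vector v evolves as dv/dt = -w x v = [v]x w, so several stars give an overdetermined linear system for w. A sketch of that least-squares step (the paper's improved current statistical model Kalman filter would then smooth these raw estimates):

        import numpy as np

        def skew(v):
            return np.array([[0.0, -v[2], v[1]],
                             [v[2], 0.0, -v[0]],
                             [-v[1], v[0], 0.0]])

        def angular_velocity(v_prev, v_curr, dt):
            """v_prev, v_curr: (n, 3) unit star vectors at two epochs."""
            A = np.vstack([skew(v) for v in v_prev])
            b = ((v_curr - v_prev) / dt).reshape(-1)
            w, *_ = np.linalg.lstsq(A, b, rcond=None)
            return w   # rad/s in the body frame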

  13. pathChirp: Efficient Available Bandwidth Estimation for Network Paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cottrell, Les

    2003-04-30

    This paper presents pathChirp, a new active probing tool for estimating the available bandwidth on a communication network path. Based on the concept of "self-induced congestion," pathChirp features an exponential flight pattern of probes we call a chirp. Packet chirps offer several significant advantages over current probing schemes based on packet pairs or packet trains. By rapidly increasing the probing rate within each chirp, pathChirp obtains a rich set of information from which to dynamically estimate the available bandwidth. Since it uses only packet interarrival times for estimation, pathChirp does not require synchronous nor highly stable clocks at the sender and receiver. We test pathChirp with simulations and Internet experiments and find that it provides good estimates of the available bandwidth while using only a fraction of the number of probe bytes that current state-of-the-art techniques use.
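
    A chirp's defining feature is its geometrically shrinking inter-packet gaps, which sweep the instantaneous probing rate upward within a single flight. A sketch of the send schedule (parameter values are illustrative, not pathChirp's defaults):

        import numpy as np

        def chirp_send_times(n_probes=15, gamma=1.2, first_gap_s=0.01):
            """Gaps shrink by a factor gamma per packet, so the k-th
            instantaneous probing rate is gamma**k times the first."""
            gaps = first_gap_s / gamma ** np.arange(n_probes - 1)
            return np.concatenate(([0.0], np.cumsum(gaps)))

    Self-induced congestion then brackets the available bandwidth: it is the probing rate at which one-way delays within the chirp begin to increase.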

  14. Use of Flood Seasonality in Pooling-Group Formation and Quantile Estimation: An Application in Great Britain

    NASA Astrophysics Data System (ADS)

    Formetta, Giuseppe; Bell, Victoria; Stewart, Elizabeth

    2018-02-01

    Regional flood frequency analysis is one of the most commonly applied methods for estimating extreme flood events at ungauged sites or locations with short measurement records. It is based on: (i) the definition of a homogeneous group (pooling-group) of catchments, and on (ii) the use of the pooling-group data to estimate flood quantiles. Although many methods to define a pooling-group (pooling schemes, PS) are based on catchment physiographic similarity measures, in the last decade methods based on flood seasonality similarity have been contemplated. In this paper, two seasonality-based PS are proposed and tested both in terms of the homogeneity of the pooling-groups they generate and in terms of the accuracy in estimating extreme flood events. The method has been applied in 420 catchments in Great Britain (considered as both gauged and ungauged) and compared against the current Flood Estimation Handbook (FEH) PS. Results for gauged sites show that, compared to the current PS, the seasonality-based PS performs better both in terms of homogeneity of the pooling-group and in terms of the accuracy of flood quantile estimates. For ungauged locations, a national-scale hydrological model has been used for the first time to quantify flood seasonality. Results show that in 75% of the tested locations the seasonality-based PS provides an improvement in the accuracy of the flood quantile estimates. The remaining 25% were located in highly urbanized, groundwater-dependent catchments. The promising results support the aspiration that large-scale hydrological models complement traditional methods for estimating design floods.
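
    Flood-seasonality similarity is typically expressed with circular statistics on the dates of annual maxima. A sketch of the standard measure (illustrative of the ingredients of a seasonality-based PS, not the authors' exact scheme):

        import numpy as np

        def seasonality_stats(flood_doys):
            """Mean flood date and concentration from days-of-year:
            (x, y) is the mean point on the unit circle; r in [0, 1]
            is 0 for uniform timing and 1 if all floods share a date."""
            theta = 2.0 * np.pi * np.asarray(flood_doys) / 365.25
            x, y = np.cos(theta).mean(), np.sin(theta).mean()
            mean_doy = (np.arctan2(y, x) % (2 * np.pi)) * 365.25 / (2 * np.pi)
            return mean_doy, np.hypot(x, y)

    Candidate catchments can then be pooled by their distance in (x, y) seasonality space; for ungauged sites, the day-of-year inputs can come from a national-scale hydrological model, as in the paper.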

  15. Evaluating the control of HPAIV H5N1 in Vietnam: virus transmission within infected flocks reported before and after vaccination.

    PubMed

    Soares Magalhães, Ricardo J; Pfeiffer, Dirk U; Otte, Joachim

    2010-06-05

    Currently, the highly pathogenic avian influenza virus (HPAIV) of the subtype H5N1 is believed to have reached an endemic cycle in Vietnam. We used routine surveillance data on HPAIV H5N1 poultry outbreaks in Vietnam to estimate and compare the within-flock reproductive number of infection (R0) for periods before (second epidemic wave, 2004-5; depopulation-based disease control) and during (fourth epidemic wave, beginning 2007; vaccination-based disease control) vaccination. Our results show that infected premises (IPs) in the initial (exponential) phases of outbreak periods have the highest R0 estimates. The IPs reported during the outbreak period when depopulation-based disease control was implemented had higher R0 estimates than IPs reported during the outbreak period when vaccination-based disease control was used. In the latter period, in some flocks of a defined size and species composition, within-flock transmission estimates were not significantly below the threshold for transmission (R0 < 1). Our results indicate that the current control policy based on depopulation plus vaccination has protected the majority of poultry flocks against infection. However, in some flocks the determinants associated with suboptimal protection need to be further investigated as these may explain the current pattern of infection in animal and human populations.

  16. Robust Diagnosis Method Based on Parameter Estimation for an Interturn Short-Circuit Fault in Multipole PMSM under High-Speed Operation.

    PubMed

    Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo

    2015-11-20

    This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short circuit fault. Previous works in this area have suffered from the uncertainties of the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. The proposed method also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase, the values of G and Lq. For this reason, two open-loop observers and an optimization method based on a particle-swarm are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq besides exhibiting robustness against parameter uncertainties.

  17. River Runoff Estimates on the Basis of Satellite-Derived Surface Currents and Water Levels

    NASA Astrophysics Data System (ADS)

    Gruenler, S.; Romeiser, R.; Stammer, D.

    2007-12-01

    One promising technique for river runoff estimates from space is the retrieval of surface currents on the basis of synthetic aperture radar along-track interferometry (ATI). The German satellite TerraSAR-X, which was launched in June 2007, permits current measurements by ATI in an experimental mode of operation. Based on numerical simulations, we present first findings of a research project in which the potential of satellite measurements of various parameters with different temporal and spatial sampling characteristics is evaluated and a dedicated data synthesis system for river discharge estimates is developed. We address the achievable accuracy and limitations of such estimates for different local flow conditions at selected test sites. High-resolution three-dimensional current fields in the Elbe river (Germany) from a numerical model of the German Federal Waterways Engineering and Research Institute (BAW) are used as reference data set and input for simulations of a variety of possible measuring and data interpretation strategies to be evaluated. For example, runoff estimates on the basis of measured surface current fields and river widths from TerraSAR-X and water levels from radar altimetry are simulated. Despite the simplicity of some of the applied methods, the results provide quite comprehensive pictures of the Elbe river runoff dynamics. Although the satellite-based river runoff estimates exhibit a lower accuracy in comparison to traditional gauge measurements, the proposed measuring strategies are quite promising for the monitoring of river discharge dynamics in regions where only sparse in-situ measurements are available. We discuss the applicability to a number of major rivers around the world.

  18. Pricing Medicare's diagnosis-related groups: Charges versus estimated costs

    PubMed Central

    Price, Kurt F.

    1989-01-01

    Hospital payments under Medicare's prospective payment system (PPS) are based on prices established for 474 diagnosis-related groups (DRG's). Previous analyses using 1981 data demonstrated that DRG prices based on charges alone differed little from prices calculated from estimated costs. Data for 1986 were used in this study to show that the differences between the two sets of DRG prices are much larger than previously reported. If DRG prices were once again based on estimated costs instead of the current charge-based prices, payments would be significantly redistributed. PMID:10313356

  19. United States Data Center Energy Usage Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shehabi, Arman; Smith, Sarah; Sartor, Dale

    This report estimates historical data center electricity consumption back to 2000, relying on previous studies and historical shipment data, and forecasts consumption out to 2020 based on new trends and the most recent data available. Figure ES-1 provides an estimate of total U.S. data center electricity use (servers, storage, network equipment, and infrastructure) from 2000-2020. In 2014, data centers in the U.S. consumed an estimated 70 billion kWh, representing about 1.8% of total U.S. electricity consumption. Current study results show data center electricity consumption increased by about 4% from 2010-2014, a large shift from the 24% increase estimated from 2005-2010 and the nearly 90% increase estimated from 2000-2005. Energy use is expected to continue increasing slightly in the near future, growing 4% from 2014-2020, the same rate as the past five years. Based on current trend estimates, U.S. data centers are projected to consume approximately 73 billion kWh in 2020.
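
    The 2020 projection can be checked directly from the 2014 figure and the stated six-year growth:

```python
# Reproduce the report's 2020 projection from the 2014 consumption figure
# and the stated 4% growth over 2014-2020.
kwh_2014 = 70e9
kwh_2020 = kwh_2014 * 1.04                 # ~72.8e9, i.e. ~73 billion kWh
annual_rate = 1.04 ** (1 / 6) - 1          # implied average per-year growth
print(f"2020: {kwh_2020/1e9:.1f} B kWh (~{annual_rate*100:.2f}%/yr)")
```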

  20. A robust vision-based sensor fusion approach for real-time pose estimation.

    PubMed

    Assa, Akbar; Janabi-Sharifi, Farrokh

    2014-02-01

    Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
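
    The paper's filter is not reproduced here, but the core idea of Kalman-based multicamera fusion, weighting each camera's measurement by its noise covariance, can be sketched as follows (all values hypothetical):

```python
import numpy as np

# Toy Kalman measurement fusion (not the paper's filter): two cameras give
# noisy measurements of the same 3-D object position; sequential updates
# weight each camera by its measurement covariance.
x = np.zeros(3)                 # fused position estimate
P = np.eye(3) * 1e3             # vague prior covariance
cameras = [(np.array([0.52, -0.11, 2.03]), 0.04),   # (measurement, variance)
           (np.array([0.49, -0.08, 1.97]), 0.09)]   # hypothetical values

for z, var in cameras:
    Rm = np.eye(3) * var                    # measurement noise covariance
    K = P @ np.linalg.inv(P + Rm)           # Kalman gain with H = I
    x = x + K @ (z - x)                     # state update
    P = (np.eye(3) - K) @ P                 # covariance update
print("fused position:", np.round(x, 3))
```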

  1. Current-induced alternating reversed dual-echo-steady-state for joint estimation of tissue relaxation and electrical properties.

    PubMed

    Lee, Hyunyeol; Sohn, Chul-Ho; Park, Jaeseok

    2017-07-01

    To develop a current-induced, alternating reversed dual-echo-steady-state-based magnetic resonance electrical impedance tomography for joint estimation of tissue relaxation and electrical properties. The proposed method reverses the readout gradient configuration of the conventional sequence, in which steady-state-free-precession (SSFP)-ECHO is produced earlier than SSFP-free-induction-decay (FID), while alternating current pulses are applied in between the two SSFPs to secure high sensitivity of SSFP-FID to injection current. Additionally, alternating reversed dual-echo-steady-state signals are modulated by employing variable flip angles over two orthogonal injections of current pulses. Ratiometric signal models are analytically constructed, from which T1, T2, and current-induced Bz are jointly estimated by solving a nonlinear inverse problem for conductivity reconstruction. Numerical simulations and experimental studies are performed to investigate the feasibility of the proposed method in estimating relaxation parameters and conductivity. The proposed method, compared with conventional magnetic resonance electrical impedance tomography, enables rapid data acquisition and simultaneous estimation of T1, T2, and current-induced Bz, yielding a comparable level of signal-to-noise ratio in the parameter estimates while retaining a relative conductivity contrast. We successfully demonstrated the feasibility of the proposed method in jointly estimating tissue relaxation parameters as well as conductivity distributions. It can be a promising, rapid imaging strategy for quantitative conductivity estimation. Magn Reson Med 78:107-120, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  2. JPL IGS Analysis Center Report, 2001-2003

    NASA Technical Reports Server (NTRS)

    Heflin, M. B.; Bar-Sever, Y. E.; Jefferson, D. C.; Meyer, R. F.; Newport, B. J.; Vigue-Rodi, Y.; Webb, F. H.; Zumberge, J. F.

    2004-01-01

    Three GPS orbit and clock products are currently provided by JPL for consideration by the IGS. Each differs in its latency and quality, with later results being more accurate. Results are typically available in both IGS and GIPSY formats via anonymous ftp. Current performance based on comparisons with the IGS final products is summarized. Orbit performance was determined by computing the 3D RMS difference between each JPL product and the IGS final orbits based on 15 minute estimates from the sp3 files. Clock performance was computed as the RMS difference after subtracting a linear trend based on 15 minute estimates from the sp3 files.
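
    A minimal sketch of the orbit comparison described above, with placeholder position series standing in for values parsed from sp3 files:

```python
import numpy as np

# 3D RMS difference between two orbit products sampled at 15-minute epochs.
epochs = 96                                    # one day of 15-minute estimates
jpl = np.random.randn(epochs, 3) * 0.02        # placeholder XYZ positions, km
igs = jpl + np.random.randn(epochs, 3) * 5e-5  # "final" orbit + ~5 cm noise

diff = jpl - igs
rms_3d = np.sqrt(np.mean(np.sum(diff**2, axis=1)))
print(f"3D RMS orbit difference = {rms_3d*1e5:.1f} cm")
```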

  3. A TRMM/GPM retrieval of the total mean generator current for the global electric circuit

    NASA Astrophysics Data System (ADS)

    Peterson, Michael; Deierling, Wiebke; Liu, Chuntao; Mach, Douglas; Kalb, Christina

    2017-09-01

    A specialized satellite version of the passive microwave electric field retrieval algorithm (Peterson et al., 2015) is applied to observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) satellites to estimate the generator current for the Global Electric Circuit (GEC) and compute its temporal variability. By integrating retrieved Wilson currents from electrified clouds across the globe, we estimate a total mean current of between 1.4 kA (assuming the 7% fraction of electrified clouds producing downward currents measured by the ER-2 is representative) to 1.6 kA (assuming all electrified clouds contribute to the GEC). These current estimates come from all types of convective weather without preference, including Electrified Shower Clouds (ESCs). The diurnal distribution of the retrieved generator current is in excellent agreement with the Carnegie curve (RMS difference: 1.7%). The temporal variability of the total mean generator current ranges from 110% on semi-annual timescales (29% on an annual timescale) to 7.5% on decadal timescales with notable responses to the Madden-Julian Oscillation and El Niño-Southern Oscillation. The geographical distribution of current includes significant contributions from oceanic regions in addition to the land-based tropical chimneys. The relative importance of the Americas and Asia chimneys compared to Africa is consistent with the best modern ground-based observations and further highlights the importance of ESCs for the GEC.
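
    The integration step amounts to summing retrieved per-cloud Wilson currents over an observation window; in the sketch below the placeholder current distribution is chosen so that the total lands near the paper's 1.6 kA upper estimate.

```python
import numpy as np

# Sum per-cloud Wilson currents to get the total generator current.
# The gamma distribution and cloud count are placeholders (mean 1.6 A per
# cloud x 1000 clouds ~ 1.6 kA, near the abstract's upper estimate).
wilson_currents = np.random.gamma(2.0, 0.8, size=1000)  # amperes per cloud
total_kA = wilson_currents.sum() / 1e3
print(f"total mean generator current = {total_kA:.2f} kA")
```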

  4. Space Station Furnace Facility. Volume 3: Program cost estimate

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data are then used to adjust the computer-produced cost estimate and to fit it to the current project Work Breakdown Structure (WBS). The SSFF concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. The program incorporates data on costs of previous projects and the allocation of those costs to the components of one of three time-phased, generic WBSs. Input consists of a list of similar components for which cost data exist; the number of interfaces with their type and complexity; identification of the extent to which previous designs are applicable; and programmatic data concerning schedules and miscellaneous items (travel, off-site assignments). Output is program cost in labor hours and material dollars for each component, broken down by generic WBS task and program schedule phase.

  5. dPIRPLE: a joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images

    NASA Astrophysics Data System (ADS)

    Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.

    2014-09-01

    Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.

  6. Estimating radiation dose to organs of patients undergoing conventional and novel multidetector CT exams using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Angel, Erin

    Advances in Computed Tomography (CT) technology have led to an increase in the modality's diagnostic capabilities and therefore its utilization, which has in turn led to an increase in radiation exposure to the patient population. As a result, CT imaging currently constitutes approximately half of the collective exposure to ionizing radiation from medical procedures. In order to understand the radiation risk, it is necessary to estimate the radiation doses absorbed by patients undergoing CT imaging. The most widely accepted risk models are based on radiosensitive organ dose as opposed to whole body dose. In this research, radiosensitive organ dose was estimated using Monte Carlo based simulations incorporating detailed multidetector CT (MDCT) scanner models, specific scan protocols, and patient models based on accurate patient anatomy and representing a range of patient sizes. Organ doses were estimated for clinical MDCT exam protocols which pose a specific concern for radiosensitive organs or regions. These dose estimates include estimation of fetal dose for pregnant patients undergoing abdomen/pelvis CT exams or undergoing exams to diagnose pulmonary embolism and venous thromboembolism. Breast and lung dose were estimated for patients undergoing coronary CTA imaging, conventional fixed tube current chest CT, and conventional tube current modulated (TCM) chest CT exams. The correlation of organ dose with patient size was quantified for pregnant patients undergoing abdomen/pelvis exams and for all breast and lung dose estimates presented. Novel dose reduction techniques were developed that incorporate organ location and are specifically designed to reduce dose to radiosensitive organs during CT acquisition. A generalizable model was created for simulating conventional and novel attenuation-based TCM algorithms which can be used in simulations estimating organ dose for any patient model. The generalizable model is a significant contribution of this work as it lays the foundation for the future of simulating TCM using Monte Carlo methods. As a result of this research, organ dose can be estimated for individual patients undergoing specific conventional MDCT exams. This research also brings understanding to conventional and novel dose reduction techniques in CT and their effect on organ dose.

  7. Estimating economic impacts of timber-based industry expansion in northeastern Minnesota.

    Treesearch

    Daniel L. Erkkila; Dietmar W. Rose; Allen L. Lundgren

    1982-01-01

    Analysis of current and projected timber supplies in northeastern Minnesota indicates that expanded timber-based industrial activity could be supported. The impacts of a hypothetical industrial development scenario, including construction of waferboard plants and a wood-fueled power plant, were estimated using an input-output model. Development had noticeable impacts...
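
    The input-output machinery behind such impact estimates is compact: the total output x required to meet a final-demand increase d satisfies x = (I - A)^-1 d, where A is the matrix of technical coefficients. The two-sector sketch below uses hypothetical numbers, not the study's model.

```python
import numpy as np

# Two-sector Leontief input-output sketch: output x needed to satisfy a
# final-demand increase d is x = (I - A)^-1 d. Coefficients and demand
# are hypothetical, not the study's model.
A = np.array([[0.15, 0.25],    # technical coefficients (assumed)
              [0.10, 0.05]])
d = np.array([50.0, 20.0])     # new final demand, $ millions (assumed)

x = np.linalg.solve(np.eye(2) - A, d)
print("total output impact ($M):", np.round(x, 1))
```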

  8. Sensorless control for permanent magnet synchronous motor using a neural network based adaptive estimator

    NASA Astrophysics Data System (ADS)

    Kwon, Chung-Jin; Kim, Sung-Joong; Han, Woo-Young; Min, Won-Kyoung

    2005-12-01

    This paper deals with rotor position and speed estimation of a permanent-magnet synchronous motor (PMSM). By measuring the phase voltages and currents of the PMSM drive, two diagonally recurrent neural network (DRNN) based observers, a neural current observer and a neural velocity observer, were developed. The DRNN, which has self-feedback of the hidden neurons, ensures that its outputs contain the whole past information of the system even if its inputs are only the present states and inputs of the system. Thus the structure of a DRNN may be simpler than that of feedforward and fully recurrent neural networks. If the backpropagation method is used to train the DRNN, slow convergence becomes a problem. In order to reduce this problem, a recursive prediction error (RPE) based learning method for the DRNN is presented. The simulation results show that the proposed approach gives a good estimation of rotor speed and position, and RPE-based training requires a shorter computation time than backpropagation-based training.

  9. A self-sensing active magnetic bearing based on a direct current measurement approach.

    PubMed

    Niemann, Andries C; van Schoor, George; du Rand, Carel P

    2013-09-11

    Active magnetic bearings (AMBs) have become a key technology in various industrial applications. Self-sensing AMBs provide an integrated sensorless solution for position estimation, consolidating the sensing and actuating functions into a single electromagnetic transducer. The approach aims to reduce possible hardware failure points, production costs, and system complexity. Despite these advantages, self-sensing methods must address various technical challenges to maximize the performance thereof. This paper presents the direct current measurement (DCM) approach for self-sensing AMBs, denoting the direct measurement of the current ripple component. In AMB systems, switching power amplifiers (PAs) modulate the rotor position information onto the current waveform. Demodulation self-sensing techniques then use bandpass and lowpass filters to estimate the rotor position from the voltage and current signals. However, the additional phase-shift introduced by these filters results in lower stability margins. The DCM approach utilizes a novel PA switching method that directly measures the current ripple to obtain duty-cycle invariant position estimates. Demodulation filters are largely excluded to minimize additional phase-shift in the position estimates. Basic functionality and performance of the proposed self-sensing approach are demonstrated via a transient simulation model as well as a high current (10 A) experimental system. A digital implementation of amplitude modulation self-sensing serves as a comparative estimator.

  10. Assessing vaccination coverage in infants, survey studies versus the Flemish immunisation register: achieving the best of both worlds.

    PubMed

    Braeckman, Tessa; Lernout, Tinne; Top, Geert; Paeps, Annick; Roelants, Mathieu; Hoppenbrouwers, Karel; Van Damme, Pierre; Theeten, Heidi

    2014-01-09

    Infant immunisation coverage in Flanders, Belgium, is monitored through repeated coverage surveys. With the increased use of Vaccinnet, the web-based ordering system for vaccines in Flanders set up in 2004 and linked to an immunisation register, this database could become an alternative means of quickly estimating vaccination coverage. To evaluate its current accuracy, coverage estimates generated from Vaccinnet alone were compared with estimates from the most recent survey (2012) that combined interview data with data from Vaccinnet and medical files. Coverage rates from registrations in Vaccinnet were systematically lower than the corresponding estimates obtained through the survey (mean difference 7.7%). This difference increased with dose number for vaccines that require multiple doses. Differences in administration date between the two sources were observed for 3.8-8.2% of registered doses. Underparticipation in Vaccinnet thus significantly impacts the register-based immunisation coverage estimates, amplified by underregistration of administered doses among vaccinators using Vaccinnet. Therefore, survey studies, despite being labour-intensive and expensive, currently provide more complete and reliable results than register-based estimates alone in Flanders. However, further improvement of Vaccinnet's completeness will likely allow more accurate estimates in the near future. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. New charging strategy for lithium-ion batteries based on the integration of Taguchi method and state of charge estimation

    NASA Astrophysics Data System (ADS)

    Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay

    2015-01-01

    In this paper, a new charging strategy for lithium-polymer batteries (LiPBs) is proposed based on the integration of the Taguchi method (TM) and state-of-charge (SOC) estimation. The TM is applied to search for an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC, which controls and terminates the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same types of LiPBs with different capacities and cycle lives. The proposed charging strategy also provides a much shorter charging time, narrower temperature variation, and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.

  12. The Impact of AMSR-E Soil Moisture Assimilation on Evapotranspiration Estimation

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, Christa D.; Kumar, Sujay; Mocko, David; Tian, Yudong

    2012-01-01

    An assessment of ET estimates for current LDAS systems is provided along with current research that demonstrates improvement in LSM ET estimates due to assimilating satellite-based soil moisture products. Using the Ensemble Kalman Filter in the Land Information System, we assimilate both NASA and Land Parameter Retrieval Model (LPRM) soil moisture products into the Noah LSM Version 3.2 with the North American LDAS phase 2 (NLDAS-2) forcing to mimic the NLDAS-2 configuration. Through comparisons with two global reference ET products, one based on interpolated flux tower data and one from a new satellite ET algorithm, over the NLDAS-2 domain, we demonstrate improvement in ET estimates only when assimilating the LPRM soil moisture product.

  13. On the estimation of jet-induced fountain lift and additional suckdown in hover for two-jet configurations

    NASA Technical Reports Server (NTRS)

    Kuhn, Richard E.; Bellavia, David C.; Corsiglia, Victor R.; Wardwell, Douglas A.

    1991-01-01

    Currently available methods for estimating the net suckdown induced on jet V/STOL aircraft hovering in ground effect are based on a correlation of available force data and are, therefore, limited to configurations similar to those in the database. Experience with some of these configurations has shown that both the fountain lift and additional suckdown are overestimated, but these effects cancel each other for configurations within the database. For other configurations, these effects may not cancel and the net suckdown could be grossly overestimated or underestimated. Also, present methods do not include the prediction of the pitching moments associated with the suckdown induced in ground effect. An attempt to develop a more logically based method for estimating the fountain lift and suckdown from the jet-induced pressures is initiated here. The analysis is based primarily on data from a related family of three two-jet configurations (all using the same jet spacing) and limited data from two other two-jet configurations. The current status of the method, which includes expressions for estimating the maximum pressure induced in the fountain regions and the sizes of the fountain and suckdown regions, is presented. Correlating factors are developed to be used with these areas and pressures to estimate the fountain lift, the suckdown, and the related pitching moment increments.

  14. Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter

    NASA Astrophysics Data System (ADS)

    Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei

    2017-10-01

    To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and SOC at every prediction point. In addition, a discrete wavelet transform technique is employed to capture the statistical information of past input-current dynamics, which is utilized to predict the future battery currents. Finally, the RDT can be predicted based on the battery model, the SOC estimation results, and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimation, and the predicted RDT can help alleviate range anxiety.
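
    Once the SOC and the future current have been estimated, the RDT computation itself is short; the sketch below substitutes a fixed SOC estimate and a constant current prediction for the paper's UKF and wavelet-based predictor.

```python
# Simplified RDT calculation: an SOC estimate (stand-in for the paper's UKF)
# plus a predicted mean discharge current give the time until the cutoff SOC
# is reached. All values are assumed.
capacity_ah = 2.5        # rated cell capacity, Ah
soc_now = 0.80           # current SOC estimate
soc_cutoff = 0.05        # discharge cutoff
i_pred = 1.2             # A, predicted mean discharge current

remaining_ah = (soc_now - soc_cutoff) * capacity_ah
rdt_hours = remaining_ah / i_pred
print(f"remaining dischargeable time = {rdt_hours:.2f} h")
```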

  15. Proportion of patients needing an implantable cardioverter defibrillator on the basis of current guidelines: impact on healthcare resources in Italy and the USA. Data from the ALPHA study registry.

    PubMed

    Pedretti, Roberto F E; Curnis, Antonio; Massa, Riccardo; Morandi, Fabrizio; Tritto, Massimo; Manca, Lorenzo; Occhetta, Eraldo; Molon, Giulio; De Ferrari, Gaetano M; Sarzi Braga, Simona; Raciti, Giovanni; Klersy, Catherine; Salerno-Uriarte, Jorge A

    2010-08-01

    Implantable cardioverter defibrillators (ICD) improve survival in selected patients with left ventricular dysfunction or heart failure (HF). The objective is to estimate the number of ICD candidates and to assess the potential impact on public health expenditure in Italy and the USA. Data from 3513 consecutive patients (ALPHA study registry) were screened. A model based on international guideline inclusion criteria and epidemiological data was used to estimate the number of eligible patients. A comparison with current ICD implant rates was performed to estimate the incremental rate necessary to treat eligible patients within 5 years. Up to 54% of HF patients are estimated to be eligible for ICD implantation. An implantation policy based on guidelines would significantly increase the number of ICD implants, to 2671 per million inhabitants in Italy and to 4261 in the USA. An annual increment of prophylactic ICD implants of 20% in the USA and 68% in Italy would be necessary to treat all indicated patients in a 5-year timeframe. An ICD implantation policy based on current evidence may have a significant impact on public health expenditure. Effective risk stratification may be useful in order to maximize the benefit of ICD therapy and its cost-effectiveness in primary prevention.

  16. Estimation of breast dose reduction potential for organ-based tube current modulated CT with wide dose reduction arc

    NASA Astrophysics Data System (ADS)

    Fu, Wanyi; Sturgeon, Gregory M.; Agasthya, Greeshma; Segars, W. Paul; Kapadia, Anuj J.; Samei, Ehsan

    2017-03-01

    This study aimed to estimate the organ dose reduction potential for organ-dose-based tube current modulated (ODM) thoracic CT with wide dose reduction arc. Twenty-one computational anthropomorphic phantoms (XCAT, age range: 27-75 years, weight range: 52.0-105.8 kg) were used to create a virtual patient population with clinical anatomic variations. For each phantom, two breast tissue compositions were simulated: 50/50 and 20/80 (glandular-to-adipose ratio). A validated Monte Carlo program was used to estimate the organ dose for standard tube current modulation (TCM) (SmartmA, GE Healthcare) and ODM (GE Healthcare) for a commercial CT scanner (Revolution, GE Healthcare) with explicitly modeled tube current modulation profile, scanner geometry, bowtie filtration, and source spectrum. Organ dose was determined using a typical clinical thoracic CT protocol. Both organ dose and CTDIvol-to-organ dose conversion coefficients (h factors) were compared between TCM and ODM. ODM significantly reduced all radiosensitive organ doses (p<0.01). The breast dose was reduced by 30 ± 2%. For h factors, organs in the anterior region (e.g. thyroid, stomach) exhibited substantial decreases, while organs in the medial, distributed, and posterior regions showed either an increase or no significant change. The organ-dose-based tube current modulation significantly reduced organ doses, especially for radiosensitive superficial anterior organs such as the breasts.

  17. Challenges in Building Disease-Based National Health Accounts

    PubMed Central

    Rosen, Allison B.; Cutler, David M.

    2012-01-01

    Background: Measuring spending on diseases is critical to assessing the value of medical care. Objective: To review the current state of cost of illness (COI) estimation methods, identifying their strengths, limitations, and uses. We briefly describe the current National Health Expenditure Accounts (NHEA), and then go on to discuss the addition of COI estimation to the NHEA. Conclusion: Recommendations are made for future research aimed at identifying the best methods for developing and using disease-based national health accounts to optimize the information available to policymakers as they struggle with difficult resource allocation decisions. PMID:19536017

  18. 45 CFR 284.11 - What definitions apply to this part?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... METHODOLOGY FOR DETERMINING WHETHER AN INCREASE IN A STATE OR TERRITORY'S CHILD POVERTY RATE IS THE RESULT OF... estimating the number and percentage of children in poverty in each State. These methods may include national estimates based on the Current Population Survey; the Small Area Income and Poverty Estimates; the annual...

  19. Battery state-of-charge estimation using approximate least squares

    NASA Astrophysics Data System (ADS)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.

  20. Bias adjustment of infrared-based rainfall estimation using Passive Microwave satellite rainfall data

    NASA Astrophysics Data System (ADS)

    Karbalaee, Negar; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan

    2017-04-01

    This study explores using Passive Microwave (PMW) rainfall estimation for spatial and temporal adjustment of Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS). The PERSIANN-CCS algorithm collects information from infrared images to estimate rainfall. PERSIANN-CCS is one of the algorithms used in the Integrated Multisatellite Retrievals for GPM (Global Precipitation Mission) estimation for time periods when PMW rainfall estimates are limited or unavailable. Continued improvement of PERSIANN-CCS will support Integrated Multisatellite Retrievals for GPM for current as well as retrospective estimations of global precipitation. This study takes advantage of the high spatial and temporal resolution of GEO-based PERSIANN-CCS estimation and of the more effective, but less frequently sampled, PMW estimation. The Probability Matching Method (PMM) was used to adjust the rainfall distribution of GEO-based PERSIANN-CCS toward that of PMW rainfall estimation. The results show that a significant improvement of global PERSIANN-CCS rainfall estimation is obtained.
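
    The Probability Matching Method is closely related to quantile mapping; a minimal sketch, with synthetic rain-rate samples standing in for the PERSIANN-CCS and PMW retrievals:

```python
import numpy as np

# Quantile-mapping sketch of the Probability Matching Method: replace each
# IR-based rain rate with the PMW rain rate at the same cumulative
# probability. Both samples below are synthetic placeholders.
ir_rain = np.random.gamma(0.8, 3.0, 5000)    # GEO/IR-based rates (mm/h)
pmw_rain = np.random.gamma(1.1, 2.5, 2000)   # PMW-based rates (mm/h)

def pmm_adjust(x, ref):
    # empirical CDF position of each value within its own sample...
    p = (np.searchsorted(np.sort(x), x) + 0.5) / len(x)
    # ...mapped onto the reference distribution's quantiles
    return np.quantile(ref, p)

adjusted = pmm_adjust(ir_rain, pmw_rain)
print(f"mean before/after: {ir_rain.mean():.2f} / {adjusted.mean():.2f} mm/h")
```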

  1. Estimating psychiatric manpower requirements based on patients' needs.

    PubMed

    Faulkner, L R; Goldman, C R

    1997-05-01

    To provide a better understanding of the complexities of estimating psychiatric manpower requirements, the authors describe several approaches to estimation and present a method based on patients' needs. A five-step method for psychiatric manpower estimation is used, with estimates of data pertinent to each step, to calculate the total psychiatric manpower requirements for the United States. The method is also used to estimate the hours of psychiatric service per patient per year that might be available under current psychiatric practice and under a managed care scenario. Depending on assumptions about data at each step in the method, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists. The number of available hours of psychiatric service per patient per year is 14.1 hours under current psychiatric practice and 2.8 hours under the managed care scenario. The key to psychiatric manpower estimation lies in clarifying the assumptions that underlie the specific method used. Even small differences in assumptions mean large differences in estimates. Any credible manpower estimation process must include discussions and negotiations between psychiatrists, other clinicians, administrators, and patients and families to clarify the treatment needs of patients and the roles, responsibilities, and job description of psychiatrists.

  2. Physics-based coastal current tomographic tracking using a Kalman filter.

    PubMed

    Wang, Tongchen; Zhang, Ying; Yang, T C; Chen, Huifang; Xu, Wen

    2018-05-01

    Ocean acoustic tomography can be used, based on measurements of two-way travel-time differences between nodes deployed on the perimeter of a surveying area, to invert/map the ocean current inside the area. Data at different times can be related using a Kalman filter, and given an ocean circulation model, one can in principle nowcast and even forecast the current distribution given an initial distribution and/or the travel-time difference data on the boundary. However, an ocean circulation model requires many inputs (many of them often not available) and is impractical for estimation of the current field. A simplified form of the discretized Navier-Stokes equation is used to show that the future velocity state is just a weighted spatial average of the current state. These weights could be obtained from an ocean circulation model, but here, in a data-driven approach, auto-regressive methods are used to obtain the time- and space-dependent weights from the data. It is shown, based on simulated data, that the current field tracked using a Kalman filter (with an arbitrary initial condition) is more accurate than that estimated by standard methods where data at different times are treated independently. Real data are also examined.
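
    The tomographic inversion rests on the reciprocal-transmission relation between two nodes; a worked example with an assumed node spacing and sound speed:

```python
# Reciprocal travel-time relation behind coastal acoustic tomography: with
# path length L, sound speed c, and path-averaged current u,
#   t_with = L/(c+u), t_against = L/(c-u)  =>  u ~ c**2 * dt / (2*L).
# The geometry and values below are assumed.
L = 5000.0      # m, node separation
c = 1500.0      # m/s, nominal sound speed
u_true = 0.4    # m/s, current component along the path

dt = L / (c - u_true) - L / (c + u_true)   # two-way travel-time difference
u_est = c**2 * dt / (2 * L)
print(f"dt = {dt*1e3:.3f} ms, recovered current = {u_est:.3f} m/s")
```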

  3. Rotor Position Sensorless Control and Its Parameter Sensitivity of Permanent Magnet Motor Based on Model Reference Adaptive System

    NASA Astrophysics Data System (ADS)

    Ohara, Masaki; Noguchi, Toshihiko

    This paper describes a new method for rotor position sensorless control of a surface permanent magnet synchronous motor based on a model reference adaptive system (MRAS). This method features the MRAS in a current control loop to estimate the rotor speed and position by using only current sensors. This method, like almost all conventional methods, incorporates a mathematical model of the motor, which consists of parameters such as winding resistances, inductances, and an induced voltage constant. Hence, it is important to investigate how deviations of these parameters affect the estimated rotor position. First, this paper proposes a structure for the sensorless control applied in the current control loop. Next, it proves the stability of the proposed method when motor parameters deviate from the nominal values, and derives the relationship between the estimated position and the deviation of the parameters in a steady state. Finally, some experimental results are presented to show the performance and effectiveness of the proposed method.

  4. System and method for quench and over-current protection of superconductor

    DOEpatents

    Huang, Xianrui; Laskaris, Evangelos Trifon; Sivasubramaniam, Kiruba Haran; Bray, James William; Ryan, David Thomas; Fogarty, James Michael; Steinbach, Albert Eugene

    2005-05-31

    A system and method for protecting a superconductor. The system may comprise a current sensor operable to detect a current flowing through the superconductor. The system may comprise a coolant temperature sensor operable to detect the temperature of a cryogenic coolant used to cool the superconductor to a superconductive state. The control circuit is operable to estimate the superconductor temperature based on the current flow and the coolant temperature. The system may also be operable to compare the estimated superconductor temperature to at least one threshold temperature and to initiate a corrective action when the superconductor temperature exceeds the at least one threshold temperature.
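
    A hedged sketch of the protection logic the claims describe, with an assumed lumped heating model and an assumed threshold:

```python
# Estimate superconductor temperature from measured current and coolant
# temperature (lumped I^2 heating model assumed here, not the patent's),
# then compare against a quench-protection threshold.
def estimate_sc_temp(current_a, coolant_k, k_heat=2.0e-5):
    # assumed model: heating raises temperature above the coolant by ~I^2
    return coolant_k + k_heat * current_a ** 2

T_THRESHOLD_K = 30.0   # hypothetical quench-protection threshold

for i_meas, t_cool in [(500.0, 20.0), (800.0, 20.0)]:
    t_sc = estimate_sc_temp(i_meas, t_cool)
    action = "initiate corrective action" if t_sc > T_THRESHOLD_K else "ok"
    print(f"I = {i_meas:.0f} A -> T = {t_sc:.1f} K: {action}")
```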

  5. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820

  6. Improved estimation of random vibration loads in launch vehicles

    NASA Technical Reports Server (NTRS)

    Mehta, R.; Erwin, E.; Suryanarayan, S.; Krishna, Murali M. R.

    1993-01-01

    Random vibration induced load is an important component of the total design load environment for payload and launch vehicle components and their support structures. The current approach to random vibration load estimation is based, particularly at the preliminary design stage, on the use of Miles' equation, which assumes a single degree-of-freedom (DOF) system and white noise excitation. This paper examines the implications of using multi-DOF system models and response calculations based on numerical integration with the actual excitation spectra for random vibration load estimation. The analytical study presented considers a two-DOF system and brings out the effects of modal mass, damping, and frequency ratios on the random vibration load factor. The results indicate that load estimates based on Miles' equation can be significantly different from the more accurate estimates based on multi-DOF models.
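
    Miles' equation itself is compact: Grms = sqrt((pi/2) * fn * Q * W(fn)) for natural frequency fn, amplification factor Q, and input acceleration PSD W at fn. A worked example with illustrative inputs:

```python
import math

# Miles' equation: single-DOF RMS response to white-noise base excitation,
# Grms = sqrt((pi/2) * fn * Q * W(fn)). Inputs below are illustrative.
fn = 120.0    # Hz, natural frequency
Q = 10.0      # amplification factor (~5% damping)
W = 0.04      # g^2/Hz, input acceleration PSD at fn

g_rms = math.sqrt(math.pi / 2 * fn * Q * W)
load_3sigma = 3 * g_rms              # common design load factor choice
print(f"Grms = {g_rms:.2f} g, 3-sigma load = {load_3sigma:.1f} g")
```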

  7. COLE: A Web-based Tool for Interfacing with Forest Inventory Data

    Treesearch

    Patrick Proctor; Linda S. Heath; Paul C. Van Deusen; Jeffery H. Gove; James E. Smith

    2005-01-01

    We are developing an online computer program to provide forest carbon related estimates for the conterminous United States (COLE). Version 1.0 of the program features carbon estimates based on data from the USDA Forest Service Eastwide Forest Inventory database. The program allows the user to designate an area of interest, and currently provides area, growing-stock...

  8. Estimation of Standard Error of Regression Effects in Latent Regression Models Using Binder's Linearization. Research Report. ETS RR-07-09

    ERIC Educational Resources Information Center

    Li, Deping; Oranje, Andreas

    2007-01-01

    Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…

  9. Green Routing Fuel Saving Opportunity Assessment: A Case Study on California Large-Scale Real-World Travel Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Holden, Jacob; Gonder, Jeff

    New technologies, such as connected and automated vehicles, have attracted growing research interest in improving the energy efficiency and environmental impact of current transportation systems. The green routing strategy instructs a vehicle to select the most fuel-efficient route before the vehicle departs. It benefits the current transportation system by identifying the greenest route and the associated fuel-saving opportunity. This paper introduces an evaluation framework for estimating benefits of green routing based on large-scale, real-world travel data. The framework has the capability to quantify fuel savings by estimating the fuel consumption of actual routes and comparing to routes procured by navigation systems. A route-based fuel consumption estimation model, considering road traffic conditions, functional class, and road grade, is proposed and used in the framework. An experiment using a large-scale data set from the California Household Travel Survey global positioning system trajectory database indicates that 31% of actual routes have fuel savings potential with a cumulative estimated fuel savings of 12%.

  10. Green Routing Fuel Saving Opportunity Assessment: A Case Study on California Large-Scale Real-World Travel Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Holden, Jacob; Gonder, Jeffrey D

    New technologies, such as connected and automated vehicles, have attracted growing research interest in improving the energy efficiency and environmental impact of current transportation systems. The green routing strategy instructs a vehicle to select the most fuel-efficient route before the vehicle departs. It benefits the current transportation system by identifying the greenest route and the associated fuel-saving opportunity. This paper introduces an evaluation framework for estimating benefits of green routing based on large-scale, real-world travel data. The framework has the capability to quantify fuel savings by estimating the fuel consumption of actual routes and comparing to routes procured by navigation systems. A route-based fuel consumption estimation model, considering road traffic conditions, functional class, and road grade, is proposed and used in the framework. An experiment using a large-scale data set from the California Household Travel Survey global positioning system trajectory database indicates that 31% of actual routes have fuel savings potential with a cumulative estimated fuel savings of 12%.

  11. Parameter-based estimation of CT dose index and image quality using an in-house android™-based software

    NASA Astrophysics Data System (ADS)

    Mubarok, S.; Lubis, L. E.; Pawiro, S. A.

    2016-03-01

    Compromise between radiation dose and image quality is essential in the use of CT imaging. The CT dose index (CTDI) is currently the primary dosimetric formalism in CT, while low- and high-contrast resolution are aspects indicating image quality. This study aimed to estimate CTDIvol and image quality measures through a range of exposure parameter variations. CTDI measurements were performed using a PMMA (polymethyl methacrylate) phantom of 16 cm diameter, while the image quality test was conducted using a Catphan® 600 phantom. CTDI measurements were carried out according to the IAEA TRS 457 protocol using axial scan mode, under varied parameters of tube voltage, collimation or slice thickness, and tube current. The image quality test was conducted under the same exposure parameters as the CTDI measurements. An Android™-based software application was also a result of this study. The application estimates CTDIvol with a maximum difference from actual CTDIvol measurements of 8.97%. Image quality can also be estimated through the CNR parameter, with a maximum difference from actual CNR measurements of 21.65%.

  12. Estimating organ doses from tube current modulated CT examinations using a generalized linear model.

    PubMed

    Bostani, Maryam; McMillan, Kyle; Lu, Peiyun; Kim, Grace Hyun J; Cody, Dianna; Arbique, Gary; Greenberg, S Bruce; DeMarco, John J; Cagnon, Chris H; McNitt-Gray, Michael F

    2017-04-01

    Currently available Computed Tomography dose metrics are mostly based on fixed tube current Monte Carlo (MC) simulations and/or physical measurements such as the size-specific dose estimate (SSDE). In addition to not being able to account for Tube Current Modulation (TCM), these dose metrics do not represent actual patient dose. The purpose of this study was to generate and evaluate a dose estimation model based on the Generalized Linear Model (GLM), which extends the ability to estimate organ dose from tube current modulated examinations by incorporating regional descriptors of patient size, scanner output, and other scan-specific variables as needed. The collection of a total of 332 patient CT scans at four different institutions was approved by each institution's IRB and used to generate and test organ dose estimation models. The patient population consisted of pediatric and adult patients and included thoracic and abdomen/pelvis scans. The scans were performed on three different CT scanner systems. Manual segmentation of organs, depending on the examined anatomy, was performed on each patient's image series. In addition to the collected images, detailed TCM data were collected for all patients scanned on Siemens CT scanners, while for all GE and Toshiba patients, data representing z-axis-only TCM, extracted from the DICOM header of the images, were used for TCM simulations. A validated MC dosimetry package was used to perform detailed simulation of CT examinations on all 332 patient models to estimate dose to each segmented organ (lungs, breasts, liver, spleen, and kidneys), denoted as reference organ dose values. Approximately 60% of the data were used to train a dose estimation model, while the remaining 40% was used to evaluate performance. Two different methodologies were explored using GLM to generate a dose estimation model: (a) using the conventional exponential relationship between normalized organ dose and size with regional water equivalent diameter (WED) and regional CTDIvol as variables and (b) using the same exponential relationship with the addition of categorical variables such as scanner model and organ to provide a more complete estimate of factors that may affect organ dose. Finally, estimates from generated models were compared to those obtained from SSDE and ImPACT. The Generalized Linear Model yielded organ dose estimates that were significantly closer to the MC reference organ dose values than were organ doses estimated via SSDE or ImPACT. Moreover, the GLM estimates were better than those of SSDE or ImPACT irrespective of whether or not categorical variables were used in the model. While the improvement associated with a categorical variable was substantial in estimating breast dose, the improvement was minor for other organs. The GLM approach extends the current CT dose estimation methods by allowing the use of additional variables to more accurately estimate organ dose from TCM scans. Thus, this approach may be able to overcome the limitations of current CT dose metrics to provide more accurate estimates of patient dose, in particular, dose to organs with considerable variability across the population. © 2017 American Association of Physicists in Medicine.
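
    A stand-in for methodology (a) above: least squares on the log of CTDIvol-normalized organ dose against water-equivalent diameter, fitted here to synthetic data rather than the study's 332 patients.

```python
import numpy as np

# Model form of methodology (a): ln(organ dose / CTDIvol) linear in regional
# water-equivalent diameter (WED). Ordinary least squares on the log is used
# here; the data and coefficients are synthetic, not the study's.
rng = np.random.default_rng(0)
wed = rng.uniform(15, 40, 200)                    # cm, synthetic patient sizes
a_true, b_true = 1.6, -0.045                      # hypothetical coefficients
norm_dose = np.exp(a_true + b_true * wed + rng.normal(0, 0.08, 200))

X = np.column_stack([np.ones_like(wed), wed])
a_hat, b_hat = np.linalg.lstsq(X, np.log(norm_dose), rcond=None)[0]

ctdi_vol = 10.0                                      # mGy, scanner-reported
dose_30cm = ctdi_vol * np.exp(a_hat + b_hat * 30.0)  # predict for 30 cm WED
print(f"fit: a={a_hat:.3f}, b={b_hat:.4f}; dose(30 cm) = {dose_30cm:.1f} mGy")
```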

  13. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127

  14. Shrinkage estimation of effect sizes as an alternative to hypothesis testing followed by estimation in high-dimensional biology: applications to differential gene expression.

    PubMed

    Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R

    2010-01-01

    Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of gene prioritization at very little risk of doing so less reliably. Since the proposed replacement of post-selection estimates with shrunken estimates applies as well to other types of high-dimensional data, it could also improve the analysis of SNP data from genome-wide association studies.

  15. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance.

    PubMed

    Zheng, Binqi; Fu, Pengcheng; Li, Baoqing; Yuan, Xiaobing

    2018-03-07

    The Unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when the noise distribution assumed a priori by the user mismatches the actual one in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep, a standard UKF is implemented first to obtain the state estimates using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to calculate the estimates of the current process and measurement noise covariances, respectively. By utilizing a weighting factor, the filter combines the previous noise covariance matrices with the estimates as the new noise covariance matrices. Finally, the state estimates are corrected according to the new noise covariance matrices and previous state estimates. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves a better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, which is demonstrated by the simulation results.
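
    The innovation-based step can be sketched in a few lines; the window, weighting factor, and numbers below are illustrative, not the RAUKF's tuned values.

```python
import numpy as np

# Scalar sketch of innovation-based noise adaptation: over a window,
# E[innovation^2] ~ H*P*H' + R, so a measurement-noise estimate is the
# windowed innovation power minus the predicted part, blended with the old
# value via a weighting factor. All numbers are illustrative.
def adapt_R(innovations, HPHt, R_old, alpha=0.3):
    C = np.mean(np.square(innovations))   # windowed innovation covariance
    R_new = max(C - HPHt, 1e-9)           # keep the estimate positive
    return (1 - alpha) * R_old + alpha * R_new

window = np.array([0.9, -1.1, 1.3, -0.7, 1.0])   # recent innovations (toy)
print(f"adapted R = {adapt_R(window, HPHt=0.2, R_old=0.5):.3f}")
```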

  16. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance

    PubMed Central

    Zheng, Binqi; Yuan, Xiaobing

    2018-01-01

    The Unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when the noise distribution assumed a priori by the user mismatches the actual one in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep, a standard UKF is implemented first to obtain the state estimates using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to calculate the estimates of the current process and measurement noise covariances, respectively. By utilizing a weighting factor, the filter combines the previous noise covariance matrices with the estimates as the new noise covariance matrices. Finally, the state estimates are corrected according to the new noise covariance matrices and previous state estimates. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves a better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, which is demonstrated by the simulation results. PMID:29518960

  17. FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.

    PubMed

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is a constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Results from experimentation show that fusing both parameters yields three times better accuracy than is obtained from the current or vibration signals used individually.

  18. Development of new demi-span equations from a nationally representative sample of adults to estimate maximal adult height.

    PubMed

    Hirani, Vasant; Tabassum, Faiza; Aresu, Maria; Mindell, Jennifer

    2010-08-01

    Various measures have been used to estimate height when assessing nutritional status. Current equations to obtain demi-span equivalent height (DEH(Bassey)) are based on a small sample from a single study. The objectives of this study were to develop more robust DEH equations from a large number of men (n = 591) and women (n = 830) aged 25-45 y from a nationally representative cross-sectional sample (Health Survey for England 2007). Sex-specific regression equations were produced from young adults' (aged 25-45 y) measured height and demi-span to estimate new DEH equations (DEH(new)). DEH in people aged ≥65 y was calculated using DEH(new). DEH(new) estimated current height in people aged 25-45 y with a mean difference of 0.04 in men (P = 0.80) and -0.29 in women (P = 0.05). Height, demi-span, DEH(new), and DEH(Bassey) declined by age group in both sexes aged ≥65 y (P < 0.05); DEH values were larger than the measured height for all age groups (mean difference between DEH(new) and current height was -2.64 in men and -3.16 in women; both P < 0.001). Comparisons of DEH estimates showed good agreement, but DEH(new) was significantly higher than DEH(Bassey) in each age and sex group in older people. The new equations, based on a large, randomly selected, nationally representative sample of young adults, are more robust for predicting current height in young adults when height measurements are unavailable and can be used in the future to predict maximal adult height more accurately in currently young adults as they age.

  19. Cost of fetal alcohol spectrum disorder diagnosis in Canada.

    PubMed

    Popova, Svetlana; Lange, Shannon; Burd, Larry; Chudley, Albert E; Clarren, Sterling K; Rehm, Jürgen

    2013-01-01

    Fetal Alcohol Spectrum Disorder (FASD) is underdiagnosed in Canada. The diagnosis of FASD is not simple, and the current recommendation is that a comprehensive, multidisciplinary assessment of the individual be performed. The purpose of this study was to estimate the annual cost of FASD diagnosis to Canadian society. The diagnostic process breakdown was based on recommendations from the Fetal Alcohol Spectrum Disorder Canadian Guidelines for Diagnosis. The per-person cost of diagnosis was calculated based on the number of hours (estimated from expert opinion) required by each specialist involved in the diagnostic process. The average rate per hour for each respective specialist was estimated based on hourly costs across Canada. Based on the existing clinical capacity of all FASD multidisciplinary clinics in Canada, obtained from the 2005 and 2011 surveys conducted by the Canada Northwest FASD Research Network, the number of FASD cases diagnosed per year in Canada was estimated. The per-person cost of FASD diagnosis was then applied to the number of cases diagnosed per year in Canada in order to calculate the overall annual cost. Using the most conservative approach, it was estimated that an FASD evaluation requires 32 to 47 hours for one individual to be screened, referred, admitted, and diagnosed, which results in a total cost of $3,110 to $4,570 per person. The total cost of FASD diagnostic services in Canada ranges from $3.6 to $5.2 million (lower estimate), up to $5.0 to $7.3 million (upper estimate) per year. As a result of using the most conservative approach, the cost of FASD diagnostic services presented in the current study is most likely underestimated. The reasons for this likelihood and the limitations of the study are discussed.
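
    The headline figures follow from simple multiplication; below is a sketch of that arithmetic using the per-person costs quoted above. The annual clinic capacity is a hypothetical placeholder, since the abstract reports only the resulting national cost range.

        # Per-person cost range is taken from the abstract; the case count is
        # a hypothetical placeholder, not a figure from the study.
        cost_low, cost_high = 3110, 4570     # CAD per person diagnosed
        cases_per_year = 1400                # hypothetical national clinic capacity

        annual_low = cost_low * cases_per_year
        annual_high = cost_high * cases_per_year
        print(f"Annual diagnostic cost: ${annual_low/1e6:.1f}M to ${annual_high/1e6:.1f}M")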

  20. Methods of albumin estimation in clinical biochemistry: Past, present, and future.

    PubMed

    Kumar, Deepak; Banerjee, Dibyajyoti

    2017-06-01

    Estimation of serum and urinary albumin is routinely performed in clinical biochemistry laboratories. In the past, precipitation-based methods were popular for estimation of human serum albumin (HSA). Currently, dye-binding or immunochemical methods are widely practiced. Each of these methods has its limitations, and research endeavors to overcome them are ongoing. The current methodological trends guiding the field have not been reviewed, so a review of several aspects of albumin estimation is timely. The present review focuses on modern research trends from a conceptual point of view and gives an overview of recent developments to offer readers a comprehensive understanding of the subject. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Mental Disorder Symptoms among Public Safety Personnel in Canada.

    PubMed

    Carleton, R Nicholas; Afifi, Tracie O; Turner, Sarah; Taillieu, Tamara; Duranceau, Sophie; LeBouthillier, Daniel M; Sareen, Jitender; Ricciardelli, Rose; MacPhee, Renee S; Groll, Dianne; Hozempa, Kadie; Brunet, Alain; Weekes, John R; Griffiths, Curt T; Abrams, Kelly J; Jones, Nicholas A; Beshai, Shadi; Cramm, Heidi A; Dobson, Keith S; Hatcher, Simon; Keane, Terence M; Stewart, Sherry H; Asmundson, Gordon J G

    2018-01-01

    Canadian public safety personnel (PSP; e.g., correctional workers, dispatchers, firefighters, paramedics, police officers) are exposed to potentially traumatic events as a function of their work. Such exposures contribute to the risk of developing clinically significant symptoms related to mental disorders. The current study was designed to provide estimates of mental disorder symptom frequencies and severities for Canadian PSP. An online survey was made available in English or French from September 2016 to January 2017. The survey assessed current symptoms, and participation was solicited from national PSP agencies and advocacy groups. Estimates were derived using well-validated screening measures. There were 5813 participants (32.5% women) who were grouped into 6 categories (i.e., call center operators/dispatchers, correctional workers, firefighters, municipal/provincial police, paramedics, Royal Canadian Mounted Police). Substantial proportions of participants reported current symptoms consistent with 1 (i.e., 15.1%) or more (i.e., 26.7%) mental disorders based on the screening measures. There were significant differences across PSP categories with respect to proportions screening positive based on each measure. The estimated proportion of PSP reporting current symptom clusters consistent with 1 or more mental disorders appears higher than previously published estimates for the general population; however, direct comparisons are impossible because of methodological differences. The available data suggest that Canadian PSP experience substantial and heterogeneous difficulties with mental health and underscore the need for a rigorous epidemiologic study and category-specific solutions.

  2. Artificial Neural Network Based Group Contribution Method for Estimating Cetane and Octane Numbers of Hydrocarbons and Oxygenated Organic Compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kubic, William Louis; Jenkins, Rhodri W.; Moore, Cameron M.

    Chemical pathways for converting biomass into fuels produce compounds for which key physical and chemical property data are unavailable. We developed an artificial neural network based group contribution method for estimating cetane and octane numbers that captures the complex dependence of fuel properties of pure compounds on chemical structure and is statistically superior to current methods.

  3. Artificial Neural Network Based Group Contribution Method for Estimating Cetane and Octane Numbers of Hydrocarbons and Oxygenated Organic Compounds

    DOE PAGES

    Kubic, William Louis; Jenkins, Rhodri W.; Moore, Cameron M.; ...

    2017-09-28

    Chemical pathways for converting biomass into fuels produce compounds for which key physical and chemical property data are unavailable. We developed an artificial neural network based group contribution method for estimating cetane and octane numbers that captures the complex dependence of fuel properties of pure compounds on chemical structure and is statistically superior to current methods.
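
    A toy sketch of the idea: represent each molecule as a vector of functional-group counts and regress the cetane number on it with a small neural network. The group definitions, counts, and target values below are invented for illustration and are not the paper's data, group scheme, or network architecture.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical training set: each row counts functional groups in a
        # molecule (e.g., [-CH3, -CH2-, >CH-, -OH, -O-, aromatic C]); targets
        # are made-up cetane numbers.
        X = np.array([
            [2, 5, 0, 0, 0, 0],    # n-heptane-like alkane
            [2, 14, 0, 0, 0, 0],   # n-hexadecane-like alkane
            [1, 0, 0, 1, 0, 6],    # aromatic alcohol
            [2, 3, 0, 0, 1, 0],    # ether
        ])
        y = np.array([56.0, 100.0, 5.0, 40.0])

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        model.fit(X, y)
        print(model.predict([[2, 8, 0, 0, 0, 0]]))  # predict for an unseen alkane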

  4. Estimating Radiation Dose Metrics for Patients Undergoing Tube Current Modulation CT Scans

    NASA Astrophysics Data System (ADS)

    McMillan, Kyle Lorin

    Computed tomography (CT) has long been a powerful tool in the diagnosis of disease, identification of tumors and guidance of interventional procedures. With CT examinations comes the concern of radiation exposure and the associated risks. In order to properly understand those risks on a patient-specific level, organ dose must be quantified for each CT scan. Some of the most widely used organ dose estimates are derived from fixed tube current (FTC) scans of a standard sized idealized patient model. However, in current clinical practice, patient size varies from neonates weighing just a few kg to morbidly obese patients weighing over 200 kg, and nearly all CT exams are performed with tube current modulation (TCM), a scanning technique that adjusts scanner output according to changes in patient attenuation. Methods to account for TCM in CT organ dose estimates have been previously demonstrated, but these methods are limited in scope and/or restricted to idealized TCM profiles that are not based on physical observations and are not scanner specific (e.g., they do not account for tube limits or other scanner-specific effects). The goal of this work was to develop methods to estimate organ doses to patients undergoing CT scans that take into account both the patient size as well as the effects of TCM. This work started with the development and validation of methods to estimate scanner-specific TCM schemes for any voxelized patient model. An approach was developed to generate estimated TCM schemes that match actual TCM schemes that would have been acquired on the scanner for any patient model. Using this approach, TCM schemes were then generated for a variety of body CT protocols for a set of reference voxelized phantoms for which TCM information does not currently exist. These are whole body patient models representing a variety of sizes, ages, and genders that have all radiosensitive organs identified. TCM schemes for these models facilitated Monte Carlo-based estimates of fully-, partially- and indirectly-irradiated organ dose from TCM CT exams. By accounting for the effects of patient size in the organ dose estimates, a comprehensive set of patient-specific dose estimates from TCM CT exams was developed. These patient-specific organ dose estimates from TCM CT exams will provide a more complete understanding of the dose impact and risks associated with modern body CT scanning protocols.
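
    A minimal sketch of how a scanner-style TCM scheme might be generated from a patient attenuation profile: the tube current follows attenuation relative to a reference, raised to a modulation-strength exponent, and is clipped to tube limits. The function and all parameter values are illustrative assumptions, not the dissertation's validated scanner model.

        import numpy as np

        def estimate_tcm_profile(water_eq_atten, ref_atten, ref_mA,
                                 strength=0.5, mA_min=20.0, mA_max=600.0):
            # water_eq_atten: per-slice water-equivalent attenuation from a
            # (real or simulated) localizer. The current scales with attenuation
            # raised to a modulation-strength exponent, then clips to tube limits.
            mA = ref_mA * (water_eq_atten / ref_atten) ** strength
            return np.clip(mA, mA_min, mA_max)

        # Example: a thicker mid-abdomen drives the tube current up.
        atten = np.array([180.0, 220.0, 340.0, 300.0, 260.0])
        print(estimate_tcm_profile(atten, ref_atten=250.0, ref_mA=200.0))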

  5. Estimating Evapotranspiration with Land Data Assimilation Systems

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, C. D.; Kumar, S. V.; Mocko, D. M.; Tian, Y.

    2011-01-01

    Advancements in both land surface models (LSM) and land surface data assimilation, especially over the last decade, have substantially advanced the ability of land data assimilation systems (LDAS) to estimate evapotranspiration (ET). This article provides a historical perspective on international LSM intercomparison efforts and the development of LDAS, both of which have improved LSM ET skill. In addition, an assessment of ET estimates for current LDAS is provided, along with current research that demonstrates improvement in LSM ET estimates due to assimilating satellite-based soil moisture products. Using the Ensemble Kalman Filter in the Land Information System, we assimilate both NASA and Land Parameter Retrieval Model (LPRM) soil moisture products into the Noah LSM Version 3.2 with the North American LDAS phase 2 (NLDAS-2) forcing to mimic the NLDAS-2 configuration. Through comparisons with two global reference ET products over the NLDAS-2 domain, one based on interpolated flux tower data and one from a new satellite ET algorithm, we demonstrate improvement in ET estimates only when assimilating the LPRM soil moisture product.

  6. Multi-species genetic connectivity in a terrestrial habitat network.

    PubMed

    Marrotte, Robby R; Bowman, Jeff; Brown, Michael G C; Cordes, Chad; Morris, Kimberley Y; Prentice, Melanie B; Wilson, Paul J

    2017-01-01

    Habitat fragmentation reduces genetic connectivity for multiple species, yet conservation efforts tend to rely heavily on single-species connectivity estimates to inform land-use planning. Such conservation activities may benefit from multi-species connectivity estimates, which provide a simple and practical means to mitigate the effects of habitat fragmentation for a larger number of species. To test the validity of a multi-species connectivity model, we used neutral microsatellite genetic datasets of Canada lynx (Lynx canadensis), American marten (Martes americana), fisher (Pekania pennanti), and southern flying squirrel (Glaucomys volans) to evaluate multi-species genetic connectivity across Ontario, Canada. We used linear models to compare node-based estimates of genetic connectivity for each species to point-based estimates of landscape connectivity (current density) derived from circuit theory. To our knowledge, we are the first to evaluate current density as a measure of genetic connectivity. Our results depended on landscape context: habitat amount was more important than current density in explaining multi-species genetic connectivity in the northern part of our study area, where habitat was abundant and fragmentation was low. In the south, however, where fragmentation was prevalent, genetic connectivity was correlated with current density. Contrary to our expectations, however, locations with a high probability of movement, as reflected by high current density, were negatively associated with gene flow. Subsequent analyses of circuit theory outputs showed that high current density was also associated with high effective resistance, underscoring that the presence of pinch points is not necessarily indicative of gene flow. Overall, our study appears to provide support for the hypothesis that landscape pattern is important when habitat amount is low. We also conclude that while current density is proportional to the probability of movement per unit area, this does not imply increased gene flow, since high current density tends to result from neighbouring pixels with a high cost of movement (e.g., low habitat amount). In other words, pinch points with high current density appear to constrict gene flow.

  7. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    PubMed

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhanced system reliability and availability. Moreover, knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high-resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In particular, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequencies. Then, an amplitude estimator of the fault characteristic frequencies has been proposed, and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data generated by a coupled electromagnetic circuits approach, with air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
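
    For concreteness, a compact numpy sketch of a one-dimensional MUSIC pseudospectrum applied to a synthetic stator-current-like signal. The paper's multi-dimensional variant, amplitude estimator, and fault indicator are not reproduced here, and all parameter values are illustrative.

        import numpy as np

        def music_pseudospectrum(x, freqs, fs, p, m=50):
            # p: assumed number of complex sinusoids (signal-subspace dimension)
            # m: correlation-matrix order
            N = len(x)
            # Sample correlation matrix from overlapping length-m snapshots.
            X = np.array([x[i:i + m] for i in range(N - m)])
            R = (X.conj().T @ X) / (N - m)
            # Noise subspace: eigenvectors of the m - p smallest eigenvalues.
            w, V = np.linalg.eigh(R)
            En = V[:, : m - p]
            spectrum = []
            for f in freqs:
                a = np.exp(-2j * np.pi * f / fs * np.arange(m))   # steering vector
                spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
            return np.array(spectrum)

        # Toy signal: 50 Hz fundamental plus a weak fault-like sideband at 87 Hz.
        fs = 1000.0
        t = np.arange(0, 1, 1 / fs)
        x = (np.cos(2 * np.pi * 50 * t) + 0.05 * np.cos(2 * np.pi * 87 * t)
             + 0.01 * np.random.randn(len(t)))
        freqs = np.linspace(30, 110, 400)
        S = music_pseudospectrum(x.astype(complex), freqs, fs, p=4)
        print(freqs[S.argmax()])   # the dominant peak should sit near 50 Hz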

  8. Downlink Training Techniques for FDD Massive MIMO Systems: Open-Loop and Closed-Loop Training With Memory

    NASA Astrophysics Data System (ADS)

    Choi, Junil; Love, David J.; Bidigare, Patrick

    2014-10-01

    The concept of deploying a large number of antennas at the base station, often called massive multiple-input multiple-output (MIMO), has drawn considerable interest because of its potential ability to revolutionize current wireless communication systems. Most literature on massive MIMO systems assumes time division duplexing (TDD), although frequency division duplexing (FDD) dominates current cellular systems. Due to the large number of transmit antennas at the base station, currently standardized approaches would require a large percentage of the precious downlink and uplink resources in FDD massive MIMO to be used for training signal transmissions and channel state information (CSI) feedback. To reduce the overhead of the downlink training phase, we propose practical open-loop and closed-loop training frameworks in this paper. We assume the base station and the user share a common set of training signals in advance. In open-loop training, the base station transmits training signals in a round-robin manner, and the user successively estimates the current channel using long-term channel statistics such as temporal and spatial correlations and previous channel estimates. In closed-loop training, the user feeds back the best training signal to be sent in the future based on channel prediction and the previously received training signals. With a small amount of feedback from the user to the base station, closed-loop training offers better performance in the data communication phase, especially when the signal-to-noise ratio is low, the number of transmit antennas is large, or prior channel estimates are not accurate at the beginning of the communication setup, all of which would be most beneficial for massive MIMO systems.

  9. COMPUTER SUPPORT SYSTEMS FOR ESTIMATING CHEMICAL TOXICITY: PRESENT CAPABILITIES AND FUTURE TRENDS

    EPA Science Inventory

    Computer Support Systems for Estimating Chemical Toxicity: Present Capabilities and Future Trends

    A wide variety of computer-based artificial intelligence (AI) and decision support systems exist currently to aid in the assessment of toxicity for environmental chemicals. T...

  10. Shipborne LF-VLF oceanic lightning observations and modeling

    NASA Astrophysics Data System (ADS)

    Zoghzoghy, F. G.; Cohen, M. B.; Said, R. K.; Lehtinen, N. G.; Inan, U. S.

    2015-10-01

    Approximately 90% of natural lightning occurs over land, but recent observations, using Global Lightning Detection (GLD360) geolocation peak current estimates and satellite optical data, suggested that cloud-to-ground flashes are on average stronger over the ocean. We present initial statistics from a novel experiment using a Low Frequency (LF) magnetic field receiver system installed aboard the National Oceanic and Atmospheric Administration (NOAA) Ronald H. Brown research vessel, which allowed the detection of impulsive radio emissions from deep-oceanic discharges at short distances. Thousands of LF waveforms were recorded, facilitating the comparison of oceanic waveforms to their land counterparts. A computationally efficient electromagnetic radiation model that accounts for propagation over lossy and curved ground is constructed and compared with previously published models. We include the effects of Earth curvature on LF ground wave propagation and quantify the effects of channel-base current risetime, channel-base current falltime, and return stroke speed on the radiated LF waveforms observed at a given distance. We compare simulation results to data and conclude that the previously reported larger GLD360 peak current estimates over the ocean are unlikely to fully result from differences in channel-base current risetime, falltime, or return stroke speed between ocean and land flashes.

  11. Predicting responses from Rasch measures.

    PubMed

    Linacre, John M

    2010-01-01

    There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters, and evaluating its fit is now routine. Problems arise when the model parameters are to be estimated from the current data but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data leads to worse fit to future data. The predictive power of several Rasch and Rasch-related models is discussed in the context of the Netflix Prize. Rasch-related models based on Singular Value Decomposition (SVD) and Boltzmann Machines are proposed.

  12. The American College of Surgeons Needs-Based Assessment of Trauma Systems: Estimates for the State of California.

    PubMed

    Uribe-Leitz, Tarsicio; Esquivel, Micaela M; Knowlton, Lisa M; Ciesla, David; Lin, Feng; Hsia, Renee Y; Spain, David A; Winchell, Robert J; Staudenmayer, Kristan L

    2017-05-01

    In 2015, the American College of Surgeons Committee on Trauma convened a consensus conference to develop the Needs-Based Assessment of Trauma Systems (NBATS) tool to assist in determining the number of trauma centers required for a region. We tested the performance of NBATS with respect to the optimal number of trauma centers needed by region in California. Trauma center data were obtained from the California Emergency Services Authority Information Systems (CEMSIS). Numbers of admitted trauma patients (ISS > 15) were obtained using statewide nonpublic admissions data from the California Office of Statewide Health Planning and Development (OSHPD), CEMSIS, and data from local emergency medical service agency (LEMSA) directors who agreed to participate in a telephone survey. Population estimates per county for 2014 were obtained from the U.S. Census. NBATS criteria used included population, transport time, community support, and number of discharges for severely injured patients (ISS > 15) at nontrauma centers and trauma centers. Estimates for the number of trauma centers per region were created for each of the three data sources and compared to the number of existing centers. A total of 62 state-designated trauma centers were identified for California: 13 (21%) Level I, 36 (58%) Level II, and 13 (11%) Level III. NBATS estimates for the total number of trauma centers in California were 27% to 47% lower compared to the number of trauma centers in existence, but this varied based on urban/rural status. NBATS estimates were lower than the current state in 70% of urban areas but were higher in almost 90% of rural areas. All data sources (OSHPD, CEMSIS, local data) produced similar results. Estimates from the NBATS tool are different from what is currently in existence in California, and differences exist based on whether the region is rural or urban. Findings from the current study can help inform future iterations of the NBATS tool. Economic, level V.

  13. An adaptive ARX model to estimate the RUL of aluminum plates based on its crack growth

    NASA Astrophysics Data System (ADS)

    Barraza-Barraza, Diana; Tercero-Gómez, Víctor G.; Beruvides, Mario G.; Limón-Robles, Jorge

    2017-01-01

    A wide variety of Condition-Based Maintenance (CBM) techniques deal with the problem of predicting the time of an asset fault. Most statistical approaches rely on historical failure data, which might not be available in several practical situations. To address this issue, practitioners might require self-starting approaches that consider only the available knowledge about the current degradation process and the asset operating context to update the prognostic model. Some authors use autoregressive (AR) models for this purpose, which are adequate when the asset operating context is constant; however, if it is variable, the accuracy of the models can be affected. In this paper, three autoregressive models with exogenous variables (ARX) were constructed, and their capability to estimate the remaining useful life (RUL) of a process was evaluated for the case of aluminum crack growth. An existing stochastic model of aluminum crack growth was implemented and used to assess the RUL estimation performance of the proposed ARX models through extensive Monte Carlo simulations. Point and interval estimates were made based only on individual history, behavior, operating conditions, and failure thresholds. Both analytic and bootstrapping techniques were used in the estimation process. Finally, by including recursive parameter estimation and a forgetting factor, the ARX methodology adapts to changing operating conditions and maintains its focus on the current degradation level of an asset.
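
    The adaptive ingredient named above, recursive parameter estimation with a forgetting factor, can be sketched in a few lines. The ARX structure, noise levels, and forgetting-factor value below are illustrative, not the paper's crack-growth model.

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.98):
            # One recursive least-squares step with forgetting factor `lam` for
            # an ARX model y_k = phi_k . theta + e_k (phi stacks lagged outputs
            # and exogenous inputs). Smaller `lam` forgets old data faster,
            # letting the model track changing operating conditions.
            phi = phi.reshape(-1, 1)
            K = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
            theta = theta + (K * (y - phi.T @ theta)).ravel()
            P = (P - K @ phi.T @ P) / lam                # covariance update
            return theta, P

        # Toy usage: track a drifting first-order ARX model y_k = a*y_{k-1} + b*u_k.
        rng = np.random.default_rng(1)
        theta, P = np.zeros(2), np.eye(2) * 100.0
        y_prev = 0.0
        for k in range(500):
            u = rng.normal()
            a_true = 0.5 + 0.3 * (k > 250)               # parameter shift mid-stream
            y = a_true * y_prev + 1.0 * u + 0.05 * rng.normal()
            theta, P = rls_update(theta, P, np.array([y_prev, u]), y)
            y_prev = y
        print(theta)   # should approach [0.8, 1.0] after the shift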

  14. Indirect rotor position sensing in real time for brushless permanent magnet motor drives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ertugrul, N.; Acarnley, P.P.

    1998-07-01

    This paper describes a modern solution to real-time rotor position estimation of brushless permanent magnet (PM) motor drives. The position estimation scheme, based on flux linkage and line-current estimation, is implemented in real time by using the abc reference frame, and it is tested dynamically. The position estimation model of the test motor, development of hardware, and basic operation of the digital signal processor (DSP) are discussed. The overall position estimation strategy is accomplished with a fast DSP (TMS320C30). The method is a shaft position sensorless method that is applicable to a wide range of excitation types in brushless PM motors without any restriction on the motor model and the current excitation. Both rectangular and sinewave-excited brushless PM motor drives are examined, and the results are given to demonstrate the effectiveness of the method with dynamic loads in a closed estimated-position loop.

  15. Ring Current Pressure Estimation with RAM-SCB using Data Assimilation and Van Allen Probe Flux Data

    NASA Astrophysics Data System (ADS)

    Godinez, H. C.; Yu, Y.; Henderson, M. G.; Larsen, B.; Jordanova, V.

    2015-12-01

    Capturing and subsequently modeling the influence of tail plasma injections on the inner magnetosphere is particularly important for understanding the formation and evolution of Earth's ring current. In this study, the ring current distribution is estimated with the Ring Current-Atmosphere Interactions Model with Self-Consistent Magnetic field (RAM-SCB) using, for the first time, data assimilation techniques and particle flux data from the Van Allen Probes. The state of the ring current within the RAM-SCB is corrected via an ensemble-based data assimilation technique using proton flux from one of the Van Allen Probes, to capture the enhancement of the ring current following an isolated substorm event on July 18, 2013. The results show significant improvement in the estimation of the ring current particle distributions in the RAM-SCB model, leading to better agreement with observations. This newly implemented data assimilation technique in global modeling of the ring current thus provides a promising tool to better characterize the effect of substorm injections in the near-Earth regions. The work is part of the Space Hazards Induced near Earth by Large, Dynamic Storms (SHIELDS) project at Los Alamos National Laboratory.
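
    A generic stochastic ensemble Kalman filter analysis step of the kind referenced above, sketched in numpy. This is a textbook formulation with made-up dimensions, not the RAM-SCB assimilation code.

        import numpy as np

        def enkf_update(ensemble, obs, H, obs_err_std, rng):
            # ensemble: (n_members, n_state); H maps state to observation space;
            # observations are perturbed per member (stochastic EnKF).
            n, _ = ensemble.shape
            X = ensemble - ensemble.mean(axis=0)      # state anomalies
            Y = X @ H.T                               # observation-space anomalies
            R = obs_err_std**2 * np.eye(len(obs))
            # Kalman gain from sample covariances: K = P H^T (H P H^T + R)^-1
            K = (X.T @ Y) / (n - 1) @ np.linalg.inv(Y.T @ Y / (n - 1) + R)
            perturbed = obs + rng.normal(0, obs_err_std, size=(n, len(obs)))
            return ensemble + (perturbed - ensemble @ H.T) @ K.T

        # Toy usage: 20-member ensemble of a 3-element "flux" state, observing
        # only the first element.
        rng = np.random.default_rng(0)
        ens = rng.normal(10.0, 2.0, size=(20, 3))
        H = np.array([[1.0, 0.0, 0.0]])
        print(enkf_update(ens, np.array([12.0]), H, obs_err_std=0.5, rng=rng).mean(axis=0))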

  16. River Discharge and Bathymetry Estimation from Hydraulic Inversion of Surface Currents and Water Surface Elevation Observations

    NASA Astrophysics Data System (ADS)

    Simeonov, J.; Holland, K. T.

    2015-12-01

    We developed an inversion model for river bathymetry and discharge estimation based on measurements of surface currents, water surface elevation and shoreline coordinates. The model uses a simplification of the 2D depth-averaged steady shallow water equations based on a streamline following system of coordinates and assumes a spatially uniform bed friction coefficient and eddy viscosity. The spatial resolution of the predicted bathymetry is related to the resolution of the surface currents measurements. The discharge is determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. The inversion model was tested using in situ and remote sensing measurements of the Kootenai River east of Bonners Ferry, ID. The measurements were obtained in August 2010 when the discharge was about 223 m3/s and the maximum river depth was about 6.5 m. Surface currents covering a 10 km reach with 8 m spatial resolution were estimated from airborne infrared video and were converted to depth-averaged currents using acoustic Doppler current profiler (ADCP) measurements along eight cross-stream transects. The streamwise profile of the water surface elevation was measured using real-time kinematic GPS from a drifting platform. The value of the friction coefficient was obtained from forward calibration simulations that minimized the difference between the predicted and measured velocity and water level along the river thalweg. The predicted along/cross-channel water depth variation was compared to the depth measured with a multibeam echo sounder. The rms error between the measured and predicted depth along the thalweg was found to be about 60 cm, and the estimated discharge was 5% smaller than the discharge measured by the ADCP.

  17. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    PubMed

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1, and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of the RBE is that it is unbiased and its variance is usually smaller than that of the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.

  18. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_organ,convolution by the organ dose coefficients (h_organ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using the Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and the dose simulated using the TCM Monte Carlo program was quantified. We further compared the convolution-based organ dose estimation method with two other strategies with different approaches of quantifying the irradiation field. The proposed convolution-based estimation method showed good agreement with the organ dose simulated using the TCM Monte Carlo simulation. The average percentage error (normalized by CTDIvol) was generally within 10% across all organs and modulation profiles, except for organs located in the pelvic and shoulder regions. This study developed an improved method that accurately quantifies the irradiation field under TCM scans. The results suggest that organ dose could be estimated in real time both prospectively (with the localizer information only) and retrospectively (with acquired CT data).

  19. Energy-Related Carbon Dioxide Emissions in U.S. Manufacturing

    EIA Publications

    2006-01-01

    Based on the Manufacturing Energy Consumption Survey (MECS) conducted by the U.S. Department of Energy, Energy Information Administration (EIA), this paper presents historical energy-related carbon dioxide emission estimates for energy-intensive sub-sectors and 23 industries. Estimates are based on surveys of more than 15,000 manufacturing plants in 1991, 1994, 1998, and 2002. EIA is currently developing its collection of manufacturing data for 2006.

  20. Organ dose conversion coefficients for tube current modulated CT protocols for an adult population

    NASA Astrophysics Data System (ADS)

    Fu, Wanyi; Tian, Xiaoyu; Sahbaee, Pooyan; Zhang, Yakun; Segars, William Paul; Samei, Ehsan

    2016-03-01

    In computed tomography (CT), patient-specific organ dose can be estimated using a pre-calculated database of organ dose conversion coefficients (organ dose normalized by CTDIvol, the h factor), taking into account patient size and scan coverage. The conversion coefficients have previously been estimated for routine body protocol classes, grouped by scan coverage, across an adult population for fixed-tube-current CT. Those coefficients, however, do not include the widely utilized tube current (mA) modulation scheme, which significantly impacts organ dose. This study aims to extend the h factors, and the corresponding effective dose conversion coefficients (effective dose normalized by dose length product, DLP; the k factor), into a database incorporating various tube current modulation strengths. Fifty-eight extended cardiac-torso (XCAT) phantoms were included in this study, representing the anatomical variation of the population in clinical practice. Four mA profiles, representing weak to strong mA dependency on body attenuation, were generated for each phantom and protocol class. A validated Monte Carlo program was used to simulate the organ dose. The organ dose and effective dose were further normalized by CTDIvol and DLP to derive the h factors and k factors, respectively. The h factors and k factors were summarized in an exponential regression model as a function of body size. Such a population-based mathematical model can provide a comprehensive organ dose estimate given body size and CTDIvol. The model was integrated into the iPhone app XCATdose version 2, enhancing the first version, which was based upon fixed tube current. With the organ dose calculator, physicists, physicians, and patients can conveniently estimate organ dose.
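
    A sketch of the population model described above: conversion coefficients fit to an exponential function of body size, then used to convert CTDIvol to organ dose. The data points below are made up for illustration and are not the paper's coefficients.

        import numpy as np

        diameter_cm = np.array([20, 24, 28, 32, 36, 40])         # effective diameter
        h_factor = np.array([1.9, 1.5, 1.2, 0.95, 0.75, 0.60])   # hypothetical liver h

        # Fit h = a * exp(b * d) by linear least squares on log(h).
        b, log_a = np.polyfit(diameter_cm, np.log(h_factor), 1)
        a = np.exp(log_a)

        def organ_dose(ctdi_vol, diameter):
            # Estimate organ dose (mGy) from CTDIvol and patient size.
            return ctdi_vol * a * np.exp(b * diameter)

        print(organ_dose(ctdi_vol=10.0, diameter=30.0))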

  1. Americans misperceive racial economic equality

    PubMed Central

    Kraus, Michael W.; Rucker, Julian M.; Richeson, Jennifer A.

    2017-01-01

    The present research documents the widespread misperception of race-based economic equality in the United States. Across four studies (n = 1,377) sampling White and Black Americans from the top and bottom of the national income distribution, participants overestimated progress toward Black–White economic equality, largely driven by estimates of greater current equality than actually exists according to national statistics. Overestimates of current levels of racial economic equality, on average, outstripped reality by roughly 25% and were predicted by greater belief in a just world and social network racial diversity (among Black participants). Whereas high-income White respondents tended to overestimate racial economic equality in the past, Black respondents, on average, underestimated the degree of past racial economic equality. Two follow-up experiments further revealed that making societal racial discrimination salient increased the accuracy of Whites’ estimates of Black–White economic equality, whereas encouraging Whites to anchor their estimates on their own circumstances increased their tendency to overestimate current racial economic equality. Overall, these findings suggest a profound misperception of and unfounded optimism regarding societal race-based economic equality—a misperception that is likely to have any number of important policy implications. PMID:28923915

  2. Americans misperceive racial economic equality.

    PubMed

    Kraus, Michael W; Rucker, Julian M; Richeson, Jennifer A

    2017-09-26

    The present research documents the widespread misperception of race-based economic equality in the United States. Across four studies ( n = 1,377) sampling White and Black Americans from the top and bottom of the national income distribution, participants overestimated progress toward Black-White economic equality, largely driven by estimates of greater current equality than actually exists according to national statistics. Overestimates of current levels of racial economic equality, on average, outstripped reality by roughly 25% and were predicted by greater belief in a just world and social network racial diversity (among Black participants). Whereas high-income White respondents tended to overestimate racial economic equality in the past, Black respondents, on average, underestimated the degree of past racial economic equality. Two follow-up experiments further revealed that making societal racial discrimination salient increased the accuracy of Whites' estimates of Black-White economic equality, whereas encouraging Whites to anchor their estimates on their own circumstances increased their tendency to overestimate current racial economic equality. Overall, these findings suggest a profound misperception of and unfounded optimism regarding societal race-based economic equality-a misperception that is likely to have any number of important policy implications.

  3. Adaptive Control of Four-Leg VSC Based DSTATCOM in Distribution System

    NASA Astrophysics Data System (ADS)

    Singh, Bhim; Arya, Sabha Raj

    2014-01-01

    This work discusses the experimental performance of a four-leg Distribution Static Compensator (DSTATCOM) using an adaptive-filter-based approach. The adaptive filter estimates the reference supply currents by extracting the fundamental active power components of the three-phase distorted load currents. This control algorithm is implemented on an assembled DSTATCOM for harmonics elimination, neutral current compensation, and load balancing under nonlinear loads. Experimental results are discussed, and the DSTATCOM proves an effective solution, performing satisfactorily under load dynamics.
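
    A generic sketch of the adaptive-filter idea, extracting the fundamental active component of a distorted load current with an LMS weight update. This is not the authors' exact algorithm; the template construction, step size, and signal values are illustrative.

        import numpy as np

        def lms_fundamental_weight(i_load, u_template, mu=0.01):
            # u_template: in-phase unit vector derived from the supply voltage.
            # The converged weight w scales it into the reference supply current.
            w = 0.0
            for i_k, u_k in zip(i_load, u_template):
                err = i_k - w * u_k
                w += mu * err * u_k      # drift toward the active component
            return w

        # Toy usage: 50 Hz fundamental plus a 5th harmonic; recover the weight.
        t = np.arange(0, 0.5, 1 / 5000.0)
        u = np.sin(2 * np.pi * 50 * t)                   # unit in-phase template
        i = 8.0 * u + 2.0 * np.sin(2 * np.pi * 250 * t)  # distorted load current
        print(lms_fundamental_weight(i, u))              # converges near 8.0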

  4. A fuel-based approach to estimating motor vehicle exhaust emissions

    NASA Astrophysics Data System (ADS)

    Singer, Brett Craig

    Motor vehicles contribute significantly to air pollution problems; accurate motor vehicle emission inventories are therefore essential to air quality planning. Current travel-based inventory models use emission factors measured from potentially biased vehicle samples and predict fleet-average emissions which are often inconsistent with on-road measurements. This thesis presents a fuel-based inventory approach which uses emission factors derived from remote sensing or tunnel-based measurements of on-road vehicles. Vehicle activity is quantified by statewide monthly fuel sales data resolved to the air basin level. Development of the fuel-based approach includes (1) a method for estimating cold start emission factors, (2) an analysis showing that fuel-normalized emission factors are consistent over a range of positive vehicle loads and that most fuel use occurs during loaded-mode driving, (3) scaling factors relating infrared hydrocarbon measurements to total exhaust volatile organic compound (VOC) concentrations, and (4) an analysis showing that economic factors should be considered when selecting on-road sampling sites. The fuel-based approach was applied to estimate carbon monoxide (CO) emissions from warmed-up vehicles in the Los Angeles area in 1991, and CO and VOC exhaust emissions for Los Angeles in 1997. The fuel-based CO estimate for 1991 was higher by a factor of 2.3 +/- 0.5 than emissions predicted by California's MVEI 7F model. Fuel-based inventory estimates for 1997 were higher than those of California's updated MVEI 7G model by factors of 2.4 +/- 0.2 for CO and 3.5 +/- 0.6 for VOC. Fuel-based estimates indicate a 20% decrease in the mass of CO emitted, despite an 8% increase in fuel use between 1991 and 1997; official inventory models predict a 50% decrease in CO mass emissions during the same period. Cold start CO and VOC emission factors derived from parking garage measurements were lower than those predicted by the MVEI 7G model. Current inventories in California appear to understate total exhaust CO and VOC emissions, while overstating the importance of cold start emissions. The fuel-based approach yields robust, independent, and accurate estimates of on-road vehicle emissions. Fuel-based estimates should be used to validate or adjust official vehicle emission inventories before society embarks on new, more costly air pollution control programs.

  5. FPGA-Based Fused Smart-Sensor for Tool-Wear Area Quantitative Estimation in CNC Machine Inserts

    PubMed Central

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is a constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Results from experimentation show that fusing both parameters yields three times better accuracy than is obtained from the current or vibration signals used individually. PMID:22319304

  6. Estimated Daily Average Per Capita Water Ingestion by Child and Adult Age Categories Based on USDA's 1994-96 and 1998 Continuing Survey of Food Intakes by Individuals (Journal Article)

    EPA Science Inventory

    Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The a...

  7. Capital cost estimate

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The capital cost estimate for the nuclear process heat source (NPHS) plant was made by: (1) using costs from the current commercial HTGR for electricity production as a base for items that are essentially the same and (2) development of new estimates for modified or new equipment that is specifically for the process heat application. Results are given in tabular form and cover the total investment required for each process temperature studied.

  8. Application of Real Options Theory to Software Engineering for Strategic Decision Making in Software Related Capital Investments

    DTIC Science & Technology

    2008-12-01

    between our current project and the historical projects. Therefore to refine the historical volatility estimate of the previously completed software... historical volatility estimates obtained in the form of beliefs and plausibility based on subjective probabilities that take into consideration unique

  9. SPECIFYING PHYSIOLOGICAL PARAMETERS FOR THE KINETICS OF INHALED TOLUENE IN RATS PERFORMING THE VISUAL SIGNAL DETECTION TASK (SDT).

    EPA Science Inventory

    A physiologically-based pharmacokinetic (PBPK) model is being developed to estimate the dosimetry of toluene in rats inhaling the VOC under various experimental conditions. The effects of physical activity are currently being estimated utilizing a three-step process. First, we d...

  10. PERFORMANCE AND COST OF MERCURY AND MULTIPOLLUTANT EMISSION CONTROL TECHNOLOGY APPLICATIONS ON ELECTRIC UTILITY BOILERS

    EPA Science Inventory

    The report presents estimates of the performance and cost of both powdered activated carbon (PAC) and multipollutant control technologies that may be useful in controlling mercury emissions. Based on currently available data, cost estimates for PAC injection range from 0.03 to 3.096 ...

  11. Online Cross-Validation-Based Ensemble Learning

    PubMed Central

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2017-01-01

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and as special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419
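
    A minimal sketch of the online cross-validation idea above: each incoming batch first scores every candidate on data it has not yet trained on, then updates the candidates, and predictions come from the currently best-scoring candidate. The class and learner interfaces are illustrative assumptions, not the authors' software.

        import numpy as np

        class OnlineCVSelector:
            def __init__(self, learners):
                self.learners = learners           # objects with .predict(X) and .update(X, y)
                self.cv_loss = np.zeros(len(learners))

            def process_batch(self, X, y):
                # Score each candidate on the new batch BEFORE training on it
                # (prequential / online cross-validated loss), then update.
                for j, lrn in enumerate(self.learners):
                    self.cv_loss[j] += np.mean((lrn.predict(X) - y) ** 2)
                for lrn in self.learners:
                    lrn.update(X, y)
                return int(np.argmin(self.cv_loss))   # index of the selected learner

        class RunningMean:
            # Trivial candidate: predicts the running mean of y.
            def __init__(self):
                self.n, self.mean = 0, 0.0
            def predict(self, X):
                return np.full(len(X), self.mean)
            def update(self, X, y):
                for v in y:
                    self.n += 1
                    self.mean += (v - self.mean) / self.n

        class LastValue:
            # Trivial candidate: predicts the last observed y.
            def __init__(self):
                self.last = 0.0
            def predict(self, X):
                return np.full(len(X), self.last)
            def update(self, X, y):
                self.last = float(y[-1])

        rng = np.random.default_rng(0)
        sel = OnlineCVSelector([RunningMean(), LastValue()])
        for _ in range(50):
            X, y = np.zeros((10, 1)), 3.0 + rng.normal(0, 1, 10)   # i.i.d. stream
            best = sel.process_batch(X, y)
        print("selected learner:", best)   # the running mean should win here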

  12. Improved battery parameter estimation method considering operating scenarios for HEV/EV applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jufeng; Xia, Bing; Shang, Yunlong

    This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.

  13. Improved battery parameter estimation method considering operating scenarios for HEV/EV applications

    DOE PAGES

    Yang, Jufeng; Xia, Bing; Shang, Yunlong; ...

    2016-12-22

    This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
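
    To make the rest-period fitting concrete, the following sketch fits a first-order RC relaxation to hypothetical rest-period voltage data with scipy. The data, pulse current, and single-RC structure are illustrative assumptions, not the paper's model or its improved initial-voltage expressions.

        import numpy as np
        from scipy.optimize import curve_fit

        def rest_relaxation(t, v_inf, a, tau):
            # First-order RC relaxation of terminal voltage after a current pulse.
            return v_inf - a * np.exp(-t / tau)

        # Hypothetical rest-period data (volts vs seconds); real data would come
        # from the pulse-rest test described in the abstract.
        t = np.linspace(0, 600, 61)
        v = rest_relaxation(t, 3.70, 0.05, 120.0) \
            + np.random.default_rng(2).normal(0, 1e-4, t.size)

        (v_inf, a, tau), _ = curve_fit(rest_relaxation, t, v, p0=[3.6, 0.1, 100.0])
        # With the pulse current I known, the RC branch parameters follow directly.
        I = 2.0                          # amps, hypothetical pulse magnitude
        R1 = a / I                       # branch resistance
        C1 = tau / R1                    # branch capacitance
        print(f"OCV={v_inf:.3f} V, R1={R1*1000:.1f} mOhm, C1={C1:.0f} F")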

  14. Intensity information extraction in Geiger mode detector array based three-dimensional imaging applications

    NASA Astrophysics Data System (ADS)

    Wang, Fei

    2013-09-01

    Geiger-mode detectors have single-photon sensitivity and picosecond timing resolution, which makes them good candidates for low-light-level ranging applications, especially flash three-dimensional imaging applications where the received laser power is extremely limited. Another advantage of Geiger-mode APDs is their capability for large output current, which can drive CMOS timing circuits directly, meaning that large-format focal plane arrays can be easily fabricated using mature CMOS technology. However, Geiger-mode detector based FPAs can only measure the range information of a scene, not the reflectivity. Reflectivity is a major characteristic that can help target classification and identification. Because photon arrivals follow Poisson statistics, the detection probability is tightly connected to the number of incident photons. Employing this relation, a signal intensity estimation method based on probability inversion is proposed. Instead of measuring intensity directly, several detections are conducted; the detection probability is then obtained and the intensity is estimated from it. The relation between the estimator's accuracy, the measuring range, and the number of detections is discussed based on statistical theory. Finally, Monte Carlo simulation is conducted to verify the correctness of this theory. Using 100 detections, a signal intensity of 4.6 photons per detection can be measured with this method. With slight modification of the measuring strategy, intensity information can be obtained using current Geiger-mode detector based FPAs, which can enrich the information acquired and broaden the application field of the current technology.
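
    The inversion itself is one line. A sketch under the assumption that detections follow Poisson statistics with per-trial detection probability P = 1 - exp(-ηN); the function name and efficiency parameter are illustrative.

        import numpy as np

        def estimate_mean_photons(k_fires, n_trials, eta=1.0):
            # For Poisson arrivals, P = 1 - exp(-eta * N), so the mean photon
            # number follows from the observed firing fraction:
            # N = -ln(1 - k/n) / eta, with eta the detection efficiency.
            p_hat = k_fires / n_trials
            if p_hat >= 1.0:
                raise ValueError("detector saturated; intensity not recoverable")
            return -np.log(1.0 - p_hat) / eta

        # With 100 detections, 99 fires implies about 4.6 photons per detection,
        # matching the regime quoted in the abstract.
        print(estimate_mean_photons(99, 100))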

  15. Statistical average estimates of high latitude field-aligned currents from the STARE and SABRE coherent VHF radar systems

    NASA Astrophysics Data System (ADS)

    Kosch, M. J.; Nielsen, E.

    Two bistatic VHF radar systems, STARE and SABRE, have been employed to estimate ionospheric electric fields in the geomagnetic latitude range 61.1 - 69.3° (geographic latitude range 63.8 - 72.6°) over northern Scandinavia. 173 days of good backscatter from all four radars have been analysed during the period 1982 to 1986, from which the average ionospheric divergence electric field versus latitude and time is calculated. The average magnetic field-aligned currents are computed using an AE-dependent empirical model of the ionospheric conductance. Statistical Birkeland current estimates are presented for high and low values of the Kp and AE indices as well as positive and negative orientations of the IMF B z component. The results compare very favourably to other ground-based and satellite measurements.

  16. Iterative Bayesian Estimation of Travel Times on Urban Arterials: Fusing Loop Detector and Probe Vehicle Data.

    PubMed

    Liu, Kai; Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo

    2016-01-01

    On urban arterials, travel time estimation is challenging, especially when multiple data sources are involved. Typically, fusing loop detector data and probe vehicle data to estimate travel time is troublesome because the data are uncertain, imprecise, and even conflicting. In this paper, we propose an improved data fusing methodology for link travel time estimation. Link travel times are simultaneously pre-estimated using loop detector data and probe vehicle data, after which Bayesian fusion is applied to fuse the estimated travel times. Next, iterative Bayesian estimation is proposed to improve Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimate from one sensor with the current fused travel time; and 2) specially designed convergence conditions, which restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe-vehicle-based method, the loop-detector-based method, and single Bayesian fusion, reducing the mean absolute percentage error to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when the variability of travel time is higher than in other periods.

  17. Iterative Bayesian Estimation of Travel Times on Urban Arterials: Fusing Loop Detector and Probe Vehicle Data

    PubMed Central

    Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo

    2016-01-01

    On urban arterials, travel time estimation is challenging, especially when multiple data sources are involved. Typically, fusing loop detector data and probe vehicle data to estimate travel time is troublesome because the data are uncertain, imprecise, and even conflicting. In this paper, we propose an improved data fusing methodology for link travel time estimation. Link travel times are simultaneously pre-estimated using loop detector data and probe vehicle data, after which Bayesian fusion is applied to fuse the estimated travel times. Next, iterative Bayesian estimation is proposed to improve Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimate from one sensor with the current fused travel time; and 2) specially designed convergence conditions, which restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe-vehicle-based method, the loop-detector-based method, and single Bayesian fusion, reducing the mean absolute percentage error to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when the variability of travel time is higher than in other periods. PMID:27362654
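
    A generic sketch of the fusion step and the substitution strategy under Gaussian assumptions (precision-weighted fusion), not the paper's full iterative algorithm; the variances, bounds, and iteration count are illustrative.

        import numpy as np

        def bayes_fuse(t_loop, var_loop, t_probe, var_probe):
            # Precision-weighted (Gaussian) Bayesian fusion of two estimates.
            w = (1 / var_loop) / (1 / var_loop + 1 / var_probe)
            t = w * t_loop + (1 - w) * t_probe
            var = 1 / (1 / var_loop + 1 / var_probe)
            return t, var

        def iterative_fuse(t_loop, var_loop, t_probe, var_probe,
                           t_min, t_max, n_iter=5):
            # Substitution strategy: the less accurate source is replaced by
            # the current fused estimate each round; fused values are clipped
            # to [t_min, t_max] as a simple convergence guard.
            for _ in range(n_iter):
                t, var = bayes_fuse(t_loop, var_loop, t_probe, var_probe)
                t = float(np.clip(t, t_min, t_max))
                if var_loop > var_probe:
                    t_loop, var_loop = t, var
                else:
                    t_probe, var_probe = t, var
            return t

        # Link travel time (s) from loop detectors vs sparse probe vehicles:
        print(iterative_fuse(95.0, 15.0**2, 120.0, 25.0**2, t_min=60.0, t_max=180.0))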

  18. Application of Model Based Parameter Estimation for RCS Frequency Response Calculations Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    1998-01-01

    An implementation of the Model Based Parameter Estimation (MBPE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current is expanded in a rational function, and the coefficients of the rational function are obtained using the frequency derivatives of the EFIE. Using the rational function, the electric current on the PEC body is obtained over a frequency band, from which the RCS of the PEC body is computed over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. Good agreement between MBPE and the exact solution over the bandwidth is observed.
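
    The heart of MBPE is fitting a low-order rational function to a few expensive frequency samples and then evaluating it cheaply across the band. The sketch below fits the rational model by linearized least squares on sampled values rather than by the paper's EFIE frequency derivatives; the stand-in response and model orders are assumptions:

```python
import numpy as np

def fit_rational(freqs, samples, num_order=2, den_order=2):
    """Fit samples(f) ~ P(f)/Q(f) with Q's constant term fixed to 1,
    by linearizing P(f) - samples * (Q(f) - 1) = samples."""
    fp = np.vander(freqs, num_order + 1, increasing=True)        # 1, f, f^2, ...
    fq = np.vander(freqs, den_order + 1, increasing=True)[:, 1:]  # f, f^2, ...
    A = np.hstack([fp, -samples[:, None] * fq])
    coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
    p = coeffs[:num_order + 1]
    q = np.concatenate([[1.0], coeffs[num_order + 1:]])
    return p, q

def eval_rational(p, q, freqs):
    # polyval expects highest-degree coefficient first, hence the reversal.
    return np.polyval(p[::-1], freqs) / np.polyval(q[::-1], freqs)

# Sample a smooth "response" at a few frequencies, then interpolate the band.
f_coarse = np.linspace(1.0, 3.0, 7)
resp = 1.0 / (1.0 + 0.5 * (f_coarse - 2.0) ** 2)  # stand-in for a MoM solution
p, q = fit_rational(f_coarse, resp)
f_fine = np.linspace(1.0, 3.0, 101)
print(eval_rational(p, q, f_fine)[:5])
```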

  19. Modeling cognitive reserve in healthy middle-aged and older adults: the Tasmanian Healthy Brain Project.

    PubMed

    Ward, David D; Summers, Mathew J; Saunders, Nichole L; Vickers, James C

    2015-04-01

    Cognitive reserve (CR) is a protective factor that supports cognition by increasing the resilience of an individual's cognitive function to the deleterious effects of cerebral lesions. A single environmental proxy indicator is often used to estimate CR (e.g. education), possibly resulting in a loss of the accuracy and predictive power of the investigation. Furthermore, while estimates of an individual's prior CR can be made, no operational measure exists to estimate dynamic change in CR resulting from exposure to new life experiences. We aimed to develop two latent measures of CR through factor analysis: prior and current, in a sample of 467 healthy older adults. The prior CR measure combined proxy measures traditionally associated with CR, while the current CR measure combined variables that had the potential to reflect dynamic change in CR due to new life experiences. Our main finding was that the analyses uncovered latent variables in hypothesized prior and current models of CR. The prior CR model supports multivariate estimation of pre-existing CR and may be applied to more accurately estimate CR in the absence of neuropathological data. The current CR model may be applied to evaluate and explore the potential benefits of CR-based interventions prior to dementia onset.

  20. Weight estimation techniques for composite airplanes in general aviation industry

    NASA Technical Reports Server (NTRS)

    Paramasivam, T.; Horn, W. J.; Ritter, J.

    1986-01-01

    Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for the weight estimation of aircraft components such as wing, fuselage and empennage. Regression analysis was applied to the basic equations for a data base of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.

  1. Extrapolating existing soil organic carbon data to estimate soil organic carbon stocks below 20 cm

    Treesearch

    An-Min Wu; Cinzia Fissore; Charles H. Perry; An-Min Wu; Brent Dalzell; Barry T. Wilson

    2015-01-01

    Estimates of forest soil organic carbon stocks across the US are currently developed from expert opinion in STATSGO/SSURGO and linked to forest type. The results are reported to the US EPA as the official United States submission to the UN Framework Convention on Climate Change. Beginning in 2015, however, estimates of soil organic carbon (SOC) stocks will be based on...

  2. Buried transuranic wastes at ORNL: Review of past estimates and reconciliation with current data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trabalka, J.R.

    1997-09-01

    Inventories of buried (generally meaning disposed of) transuranic (TRU) wastes at Oak Ridge National Laboratory (ORNL) have been estimated for site remediation and waste management planning over a period of about two decades. Estimates were required because of inadequate waste characterization and incomplete disposal records. For a variety of reasons, including changing definitions of TRU wastes, differing objectives for the estimates, and poor historical data, the published results have sometimes been in conflict. The purpose of this review was (1) to attempt to explain both the rationale for and differences among the various estimates, and (2) to update the estimates based on more recent information obtained from waste characterization and from evaluations of ORNL waste data bases and historical records. The latter included information obtained from an expert panel's review and reconciliation of inconsistencies in data identified during preparation of the ORNL input for the third revision of the Baseline Inventory Report for the Waste Isolation Pilot Plant. The results summarize current understanding of the relationship between past estimates of buried TRU wastes and provide the most up-to-date information on recorded burials thereafter. The limitations of available information on the latter and thus the need for improved waste characterization are highlighted.

  3. Extended active disturbance rejection controller

    NASA Technical Reports Server (NTRS)

    Tian, Gang (Inventor); Gao, Zhiqiang (Inventor)

    2012-01-01

    Multiple designs, systems, methods and processes for controlling a system or plant using an extended active disturbance rejection control (ADRC) based controller are presented. The extended ADRC controller accepts sensor information from the plant. The sensor information is used in conjunction with an extended state observer in combination with a predictor that estimates and predicts the current state of the plant and a co-joined estimate of the system disturbances and system dynamics. The extended state observer estimates and predictions are used in conjunction with a control law that generates an input to the system based in part on the extended state observer estimates and predictions as well as a desired trajectory for the plant to follow.

  4. Extended Active Disturbance Rejection Controller

    NASA Technical Reports Server (NTRS)

    Gao, Zhiqiang (Inventor); Tian, Gang (Inventor)

    2016-01-01

    Multiple designs, systems, methods and processes for controlling a system or plant using an extended active disturbance rejection control (ADRC) based controller are presented. The extended ADRC controller accepts sensor information from the plant. The sensor information is used in conjunction with an extended state observer in combination with a predictor that estimates and predicts the current state of the plant and a co-joined estimate of the system disturbances and system dynamics. The extended state observer estimates and predictions are used in conjunction with a control law that generates an input to the system based in part on the extended state observer estimates and predictions as well as a desired trajectory for the plant to follow.

  5. Extended Active Disturbance Rejection Controller

    NASA Technical Reports Server (NTRS)

    Tian, Gang (Inventor); Gao, Zhiqiang (Inventor)

    2014-01-01

    Multiple designs, systems, methods and processes for controlling a system or plant using an extended active disturbance rejection control (ADRC) based controller are presented. The extended ADRC controller accepts sensor information from the plant. The sensor information is used in conjunction with an extended state observer in combination with a predictor that estimates and predicts the current state of the plant and a co-joined estimate of the system disturbances and system dynamics. The extended state observer estimates and predictions are used in conjunction with a control law that generates an input to the system based in part on the extended state observer estimates and predictions as well as a desired trajectory for the plant to follow.

  6. Estimation of Supercapacitor Energy Storage Based on Fractional Differential Equations.

    PubMed

    Kopka, Ryszard

    2017-12-22

    In this paper, new results on using only voltage measurements on supercapacitor terminals for estimation of accumulated energy are presented. For this purpose, a study based on the application of fractional-order models of supercapacitor charging/discharging circuits is undertaken. Parameter estimates of the models are then used to assess the amount of energy accumulated in the supercapacitor. The obtained results are compared with the energy determined experimentally by measuring voltage and current on the supercapacitor terminals. All the tests are repeated for various input signal shapes and parameters. Very high consistency between estimated and experimental results fully confirms the suitability of the proposed approach and thus the applicability of fractional calculus to the modelling of supercapacitor energy storage.
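
    A basic building block of such fractional-order models is a numerical fractional derivative. The sketch below uses the Grünwald-Letnikov approximation and then integrates instantaneous power as a simplified stand-in for the paper's energy assessment; the voltage waveform and model parameters are hypothetical:

```python
import numpy as np

def gl_fractional_derivative(x, alpha, dt):
    """Grünwald-Letnikov approximation of the order-alpha derivative of x(t).

    Uses the recursive binomial weights w_0 = 1,
    w_k = w_{k-1} * (1 - (alpha + 1) / k).
    """
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    # d[k] = sum_j w_j * x[k - j], i.e. a weighted history sum.
    d = np.array([np.dot(w[:k + 1], x[k::-1]) for k in range(n)])
    return d / dt**alpha

# Fractional capacitor model: i(t) = C_alpha * d^alpha v / dt^alpha.
dt = 0.01
t = np.arange(0.0, 5.0, dt)
v = 1.0 - np.exp(-t)             # assumed measured terminal voltage
c_alpha, alpha = 10.0, 0.85      # hypothetical fitted model parameters
i_est = c_alpha * gl_fractional_derivative(v, alpha, dt)
energy = float(np.sum(v * i_est) * dt)  # accumulate energy from p = v * i
print(f"estimated stored energy: {energy:.2f} J")
```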

  7. Multistage variable probability forest volume inventory. [the Defiance Unit of the Navajo Nation

    NASA Technical Reports Server (NTRS)

    Anderson, J. E. (Principal Investigator)

    1979-01-01

    An inventory scheme based on the use of computer-processed LANDSAT MSS data was developed. Output from the inventory scheme provides an estimate of the standing net saw timber volume of a major timber species on a selected forested area of the Navajo Nation. Such estimates are based on the values of parameters currently used for scaled sawlog conversion to mill output. The multistage variable probability sampling appears capable of producing estimates which compare favorably with those produced using conventional techniques. In addition, the reduction in time, manpower, and overall costs lends it to numerous applications.

  8. Projection-based motion estimation for cardiac functional analysis with high temporal resolution: a proof-of-concept study with digital phantom experiment

    NASA Astrophysics Data System (ADS)

    Suzuki, Yuki; Fung, George S. K.; Shen, Zeyang; Otake, Yoshito; Lee, Okkyun; Ciuffo, Luisa; Ashikaga, Hiroshi; Sato, Yoshinobu; Taguchi, Katsuyuki

    2017-03-01

    Cardiac motion (or functional) analysis has shown promise not only for non-invasive diagnosis of cardiovascular diseases but also for prediction of future cardiac events. Current imaging modalities have limitations that could degrade the accuracy of the analysis indices. In this paper, we present a projection-based motion estimation method for x-ray CT that estimates cardiac motion with high spatio-temporal resolution using projection data and a reference 3D volume image. An experiment using a synthesized digital phantom showed promising results for motion analysis.

  9. An estimation of Canadian population exposure to cosmic rays from air travel.

    PubMed

    Chen, Jing; Newton, Dustin

    2013-03-01

    Based on air travel statistics in 1984, it was estimated that less than 4 % of the population dose from cosmic ray exposure would result from air travel. In the present study, cosmic ray doses were calculated for more than 3,000 flights departing from more than 200 Canadian airports using actual flight profiles. Based on currently available air travel statistics, the annual per capita effective dose from air transportation is estimated to be 32 μSv for Canadians, about 10 % of the average cosmic ray dose received at ground level (310 μSv per year).

  10. A history estimate and evolutionary analysis of rabies virus variants in China.

    PubMed

    Ming, Pinggang; Yan, Jiaxin; Rayner, Simon; Meng, Shengli; Xu, Gelin; Tang, Qing; Wu, Jie; Luo, Jing; Yang, Xiaoming

    2010-03-01

    To investigate the evolutionary dynamics of rabies virus (RABV) in China, we collected and sequenced 55 isolates sampled from 14 Chinese provinces over the last 40 years and performed a coalescent-based analysis of the G gene. This revealed that the RABV currently circulating in China is composed of three main groups. Bayesian coalescent analysis estimated the date of the most recent common ancestor for the current RABV Chinese strains to be 1412 (with a 95% confidence interval of 1006-1736). The estimated mean substitution rate for the G gene sequences (3.961 × 10^-4 substitutions per site per year) was in accordance with previous reports for RABV.

  11. Topogram-based tube current modulation of head computed tomography for optimizing image quality while protecting the eye lens with shielding.

    PubMed

    Lin, Ming-Fang; Chen, Chia-Yuen; Lee, Yuan-Hao; Li, Chia-Wei; Gerweck, Leo E; Wang, Hao; Chan, Wing P

    2018-01-01

    Background: Multiple rounds of head computed tomography (CT) scans increase the risk of radiation-induced lens opacification. Purpose: To investigate the effects of CT eye shielding and topogram-based tube current modulation (TCM) on the radiation dose received by the lens and the image quality of nasal and periorbital imaging. Material and Methods: An anthropomorphic phantom was CT-scanned using either automatic tube current modulation or a fixed tube current. The lens radiation dose was estimated using cropped Gafchromic films irradiated with or without a shield over the orbit. Image quality, assessed using regions of interest drawn on the bilateral extraorbital areas and the nasal bone with a water-based marker, was evaluated using both a signal-to-noise ratio (SNR) and a contrast-to-noise ratio (CNR). Two CT specialists independently assessed image artifacts using a three-point Likert scale. Results: The estimated radiation dose received by the lens was significantly lower when barium sulfate or bismuth-antimony shields were used in conjunction with a fixed tube current (22.0% and 35.6% reduction, respectively). Topogram-based TCM mitigated the beam hardening-associated artifacts of bismuth-antimony and barium sulfate shields. This increased the SNR by 21.6% in the extraorbital region and the CNR by 7.2% between the nasal bones and extraorbital regions. The combination of topogram-based TCM and barium sulfate or bismuth-antimony shields reduced lens doses by 12.2% and 27.2%, respectively. Conclusion: Image artifacts induced by the bismuth-antimony shield at a fixed tube current for lenticular radioprotection were significantly reduced by topogram-based TCM, which increased the SNR of the anthropomorphic nasal bones and periorbital tissues.

  12. Titanium and advanced composite structures for a supersonic cruise arrow wing configuration

    NASA Technical Reports Server (NTRS)

    Turner, M. J.; Hoy, J. M.

    1976-01-01

    Structural design studies were made, based on current technology and on an estimate of technology to be available in the mid 1980's, to assess the relative merits of structural concepts and materials for an advanced arrow wing configuration cruising at Mach 2.7. Preliminary studies were made to insure compliance of the configuration with general design criteria, integrate the propulsion system with the airframe, and define an efficient structural arrangement. Material and concept selection, detailed structural analysis, structural design and airplane mass analysis were completed based on current technology. Based on estimated future technology, structural sizing for strength and a preliminary assessment of the flutter of a strength designed composite structure were completed. An advanced computerized structural design system was used, in conjunction with a relatively complex finite element model, for detailed analysis and sizing of structural members.

  13. Maneuver Algorithm for Bearings-Only Target Tracking with Acceleration and Field of View Constraints

    NASA Astrophysics Data System (ADS)

    Roh, Heekun; Shim, Sang-Wook; Tahk, Min-Jea

    2018-05-01

    This paper proposes a maneuver algorithm for the agent performing target tracking with bearing angle information only. The goal of the agent is to estimate the target position and velocity based only on the bearing angle data. The methods of bearings-only target state estimation are outlined. The nature of bearings-only target tracking problem is then addressed. Based on the insight from above-mentioned properties, the maneuver algorithm for the agent is suggested. The proposed algorithm is composed of a nonlinear, hysteresis guidance law and the estimation accuracy assessment criteria based on the theory of Cramer-Rao bound. The proposed guidance law generates lateral acceleration command based on current field of view angle. The accuracy criteria supply the expected estimation variance, which acts as a terminal criterion for the proposed algorithm. The aforementioned algorithm is verified with a two-dimensional simulation.

  14. Prevalence and Trends in Lifetime Obesity in the U.S., 1988-2014.

    PubMed

    Stokes, Andrew; Ni, Yu; Preston, Samuel H

    2017-11-01

    Estimates of obesity prevalence based on current BMI are an important but incomplete indicator of the total effects of obesity on a population. In this study, data on current BMI and maximum BMI were used to estimate prevalence and trends in lifetime obesity status, defined using the categories never (maximum BMI ≤30 kg/m 2 ), former (maximum BMI ≥30 kg/m 2 and current BMI ≤30 kg/m 2 ), and current obesity (current BMI ≥30 kg/m 2 ). Prevalence was estimated for the period 2013-2014 and trends for the period 1988-2014 using data from the National Health and Nutrition Examination Survey. Predictors of lifetime weight status and the association between lifetime weight categories and prevalent disease status were also investigated using multivariable regression. A total of 50.8% of American males and 51.6% of American females were ever obese in 2013-2014. The prevalence of lifetime obesity exceeded the prevalence of current obesity by amounts that were greater for males and for older persons. The gap between the two prevalence values has risen over time. By 2013-2014, a total of 22.0% of individuals who were not currently obese had formerly been obese. For each of eight diseases considered, prevalence was higher among the formerly obese than among the never obese. A larger fraction of the population is affected by obesity and its health consequences than is suggested in prior studies based on current BMI alone. Weight history should be incorporated into routine health surveillance of the obesity epidemic for a full accounting of the effects of obesity on the U.S. Copyright © 2017 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
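
    The never/former/current categories reduce to a small classification rule on current and maximum BMI; a minimal sketch using the 30 kg/m^2 threshold from the abstract (the handling of a BMI exactly at the threshold is an assumption):

```python
def lifetime_obesity_status(current_bmi, max_bmi, threshold=30.0):
    """Classify lifetime obesity status from current and maximum BMI (kg/m^2),
    following the never/former/current categories used in the study."""
    if max_bmi < threshold:
        return "never obese"       # maximum BMI stayed below the threshold
    if current_bmi < threshold:
        return "former obese"      # once obese, but not currently
    return "currently obese"

print(lifetime_obesity_status(current_bmi=27.5, max_bmi=33.1))  # former obese
```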

  15. R package to estimate intracluster correlation coefficient with confidence interval for binary data.

    PubMed

    Chakraborty, Hrishikesh; Hossain, Akhtar

    2018-03-01

    The Intracluster Correlation Coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. Several types of ICC estimators and their confidence intervals (CIs) have been suggested in the literature for binary data. Studies have compared the relative weaknesses and advantages of ICC estimators and their CIs for binary data, and have suggested situations where one is advantageous in practical research. The commonly used statistical computing systems currently facilitate estimation of only a very few variants of the ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating the ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of the ICC in 16 different ways, including analysis of variance methods, moments-based estimation, direct probabilistic methods, correlation-based estimation, and a resampling method. The CI of the ICC is estimated using 5 different methods. The package also generates clustered binary data with an exchangeable correlation structure. ICCbin provides two functions for users: rcbin() generates clustered binary data, and iccbin() estimates the ICC and its CI. Users can choose the appropriate ICC and CI estimates from the wide selection in the outputs. The R package ICCbin presents very flexible and easy-to-use ways to generate clustered binary data and to estimate the ICC and its CI for binary responses using different methods. The package is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers designing cluster randomized trials with binary outcomes. Copyright © 2017 Elsevier B.V. All rights reserved.
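
    ICCbin itself is an R package; as a language-neutral illustration of one classical estimator it covers, here is the one-way ANOVA ICC for clustered binary data, with exchangeable cluster data generated via a shared beta-distributed cluster probability (simulation parameters are hypothetical):

```python
import numpy as np

def icc_anova_binary(clusters):
    """One-way ANOVA estimator of the ICC for binary responses.

    clusters: list of 0/1 arrays, one array per cluster.
    """
    k = len(clusters)
    sizes = np.array([len(c) for c in clusters])
    n_total = sizes.sum()
    grand_mean = sum(c.sum() for c in clusters) / n_total
    bss = sum(len(c) * (c.mean() - grand_mean) ** 2 for c in clusters)
    wss = sum(((c - c.mean()) ** 2).sum() for c in clusters)
    msb = bss / (k - 1)                    # between-cluster mean square
    msw = wss / (n_total - k)              # within-cluster mean square
    n0 = (n_total - (sizes**2).sum() / n_total) / (k - 1)  # adjusted size
    return (msb - msw) / (msb + (n0 - 1) * msw)

rng = np.random.default_rng(42)
# Exchangeable cluster binary data via a shared cluster-level probability;
# for Beta(2, 2) the true ICC is 1 / (2 + 2 + 1) = 0.2.
data = []
for _ in range(30):
    p_cluster = rng.beta(2.0, 2.0)
    data.append(rng.binomial(1, p_cluster, size=10).astype(float))
print(f"ICC estimate: {icc_anova_binary(data):.3f}")
```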

  16. Estimation of electric fields and current from ground-based magnetometer data

    NASA Technical Reports Server (NTRS)

    Kamide, Y.; Richmond, A. D.

    1984-01-01

    Recent advances in numerical algorithms for estimating ionospheric electric fields and currents from ground-based magnetometer data are reviewed and evaluated. Tests of the adequacy of one such algorithm in reproducing large-scale patterns of electrodynamic parameters in the high-latitude ionosphere have yielded generally positive results, at least for some simple cases. Some encouraging advances in producing realistic conductivity models, which are a critical input, are pointed out. When the algorithms are applied to extensive data sets, such as the ones from meridian chain magnetometer networks during the IMS, together with refined conductivity models, unique information on instantaneous electric field and current patterns can be obtained. Examples of electric potentials, ionospheric currents, field-aligned currents, and Joule heating distributions derived from ground magnetic data are presented. Possible directions for future improvements are also pointed out.

  17. Estimating Power System Dynamic States Using Extended Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jaroslaw

    2014-10-31

    The state estimation tools which are currently deployed in power system control rooms are based on a steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available, and their accuracy is compromised. This paper investigates the application of Extended Kalman Filtering techniques for estimating dynamic states in the state estimation process. The newly formulated “dynamic state estimation” includes true system dynamics reflected in differential equations, unlike previously proposed “dynamic state estimation” approaches that only consider time-variant snapshots based on steady-state modeling. This new dynamic state estimation using the Extended Kalman Filter has been successfully tested on a multi-machine system. Sensitivity studies with respect to noise levels, sampling rates, model errors, and parameter errors are presented as well to illustrate the robust performance of the developed dynamic state estimation process.
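
    A generic predict/update cycle of an Extended Kalman Filter, sketched on a toy single-machine swing-type model; the dynamics, measurement model, and noise covariances are illustrative assumptions, not the paper's test system:

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of an Extended Kalman Filter.

    x, P : current state estimate and covariance
    z    : new measurement
    f, h : nonlinear dynamics and measurement functions
    F_jac, H_jac : their Jacobians evaluated at the current estimate
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the state through the (discretized) dynamics.
    F = F_jac(x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy single-machine example: state = [rotor angle, speed deviation].
dt = 0.02
f = lambda x: np.array([x[0] + dt * x[1], x[1] - dt * 0.5 * np.sin(x[0])])
F_jac = lambda x: np.array([[1.0, dt], [-dt * 0.5 * np.cos(x[0]), 1.0]])
h = lambda x: np.array([x[0]])                 # PMU-like angle measurement
H_jac = lambda x: np.array([[1.0, 0.0]])
x, P = np.array([0.1, 0.0]), np.eye(2) * 0.1
x, P = ekf_step(x, P, np.array([0.12]), f, h, F_jac, H_jac,
                Q=np.eye(2) * 1e-4, R=np.eye(1) * 1e-3)
print(x)
```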

  18. Wide-Area Soil Moisture Estimation Using the Propagation of Lightning Generated Low-Frequency Electromagnetic Signals 1977

    USDA-ARS?s Scientific Manuscript database

    Land surface moisture measurements are central to our understanding of the earth’s water system, and are needed to produce accurate model-based weather/climate predictions. Currently, there exists no in-situ network capable of estimating wide-area soil moisture. In this paper, we explore an alterna...

  19. Estimating Children’s Soil/Dust Ingestion Rates through Retrospective Analyses of Blood Lead Biomonitoring from the Bunker Hill Superfund Site in Idaho

    EPA Science Inventory

    Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study du...

  20. Forests of Illinois, 2017

    Treesearch

    Susan J. Crocker

    2018-01-01

    This update provides an overview of forest resources in Illinois following an inventory by the USDA Forest Service, Forest Inventory and Analysis program, Northern Research Station. Estimates are derived from field data collected using an annualized sample design. Current variable estimates such as area and volume are based on 5,994 (1,046 forested) plots measured in...

  1. ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.

    ERIC Educational Resources Information Center

    Vale, C. David; Gialluca, Kathleen A.

    ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated this procedure using Monte Carlo Simulation Techniques. The current version of ASCAL was then compared to…

  2. Estimated Demand for Michigan's College and University Graduates of 1987.

    ERIC Educational Resources Information Center

    Shingleton, John D.; Scheetz, L. Patrick

    The current job market for 1987 Michigan college graduates was estimated by placement directors and career counselors at 50 Michigan two-year and four-year colleges and universities. The staff rated supply and demand based on information from graduate surveys, employers, and job listings. For each major, the actual ratings are provided of…

  3. Current and anticipated use of thermal-hydraulic codes for BWR transient and accident analyses in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arai, Kenji; Ebata, Shigeo

    1997-07-01

    This paper summarizes the current and anticipated use of thermal-hydraulic and neutronic codes for BWR transient and accident analyses in Japan. The codes may be categorized into licensing codes and best estimate codes for BWR transient and accident analyses. Most of the licensing codes were originally developed by General Electric. Some codes have been updated based on the technical knowledge obtained in thermal-hydraulic studies in Japan and according to BWR design changes. The best estimate codes have been used to support the licensing calculations and to obtain a phenomenological understanding of the thermal-hydraulic phenomena during a BWR transient or accident. The best estimate codes can also be applied to a design study for a next-generation BWR to which the current licensing model may not be directly applied. In order to rationalize the margin included in the current BWR design and develop a next-generation reactor with appropriate design margin, it will be required to improve the accuracy of the thermal-hydraulic and neutronic models. In addition, regarding the current best estimate codes, improvements in the user interface and the numerics will be needed.

  4. MTPA control of mechanical sensorless IPMSM based on adaptive nonlinear control.

    PubMed

    Najjar-Khodabakhsh, Abbas; Soltani, Jafar

    2016-03-01

    In this paper, an adaptive nonlinear control scheme is proposed for implementing a maximum torque per ampere (MTPA) control strategy for an interior permanent magnet synchronous motor (IPMSM) drive. This control scheme is developed in the rotor d-q axis reference frame using the adaptive input-output state feedback linearization (AIOFL) method. The drive system control stability is supported by Lyapunov theory. The motor inductances are estimated online by an estimation law obtained by AIOFL, and the estimation errors of these parameters are proved to converge asymptotically to zero. Based on minimizing the motor current amplitude, the MTPA control strategy is performed using a nonlinear optimization technique while considering the online reference torque. The motor reference torque is generated by a conventional rotor speed PI controller. The MTPA strategy generates online motor d-q reference currents, which are used in the AIOFL controller to obtain the SV-PWM reference voltages and the online estimates of the motor d-q inductances. In addition, the stator resistance is estimated online using a conventional PI controller, and the rotor position is detected using the online estimates of the stator flux and the motor q-axis inductance. Simulation and experimental results prove the effectiveness and capability of the proposed control method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
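
    The MTPA subproblem, minimizing current amplitude subject to the torque equation, can be solved numerically on its own; a sketch with hypothetical machine parameters, using a bounded scalar optimizer as a stand-in for the paper's nonlinear optimization technique:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical IPMSM parameters (SI units).
p_pairs, lam_m = 3, 0.1        # pole pairs, magnet flux linkage (Wb)
l_d, l_q = 3e-3, 6e-3          # d/q inductances (H); L_d < L_q for an IPMSM

def i_q_for_torque(t_ref, i_d):
    """q-axis current needed for torque t_ref at a given d-axis current,
    from T = 1.5 * p * (lam_m + (L_d - L_q) * i_d) * i_q."""
    return t_ref / (1.5 * p_pairs * (lam_m + (l_d - l_q) * i_d))

def mtpa_currents(t_ref):
    """Find (i_d, i_q) producing torque t_ref with minimum current amplitude."""
    cost = lambda i_d: i_d**2 + i_q_for_torque(t_ref, i_d)**2
    res = minimize_scalar(cost, bounds=(-20.0, 0.0), method="bounded")
    i_d = res.x
    return i_d, i_q_for_torque(t_ref, i_d)

i_d, i_q = mtpa_currents(t_ref=5.0)
print(f"i_d = {i_d:.2f} A, i_q = {i_q:.2f} A, |i| = {np.hypot(i_d, i_q):.2f} A")
```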

  5. Resolution of Forces and Strain Measurements from an Acoustic Ground Test

    NASA Technical Reports Server (NTRS)

    Smith, Andrew M.; LaVerde, Bruce T.; Hunt, Ronald; Waldon, James M.

    2013-01-01

    The conservatism in typical vibration tests was demonstrated: vibration testing at the component level produced conservative force reactions, higher by approximately a factor of 4 (approx. 12 dB) compared with the integrated acoustic test, in 2 out of 3 axes. Reaction forces estimated at the base of equipment using a finite element based method were validated; an FEM-based estimate of interface forces may be adequate to guide development of vibration test criteria with less conservatism. Element forces estimated in secondary-structure struts were also validated: the finite element approach provided the best estimate of axial strut forces in the frequency range below 200 Hz, where a rigid lumped-mass assumption for the entire electronics box was valid. Models with enough fidelity to represent the diminishing apparent mass of equipment are better suited for estimating force reactions across the frequency range. Forward work includes demonstrating the reduction in conservatism provided by the current force-limited approach and an FEM-guided approach, and validating the proposed CMS approach to estimate coupled response from uncoupled system characteristics for vibroacoustics.

  6. Examining the Reliability of Student Growth Percentiles Using Multidimensional IRT

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li

    2015-01-01

    Student growth percentiles (SGPs, Betebenner, 2009) are used to locate a student's current score in a conditional distribution based on the student's past scores. Currently, following Betebenner (2009), quantile regression (QR) is most often used operationally to estimate the SGPs. Alternatively, multidimensional item response theory (MIRT) may…

  7. Review of the Current Body Fat Taping Method and Its Importance in Ascertaining Fitness Levels in the United States Marine Corps

    DTIC Science & Technology

    2015-06-01

    The Department of Defense (DOD) body fat estimate was developed based on data collected in 1984 from the Naval Health Research Center, San Diego. In this thesis, multiple...

  8. Monte Carlo role in radiobiological modelling of radiotherapy outcomes

    NASA Astrophysics Data System (ADS)

    El Naqa, Issam; Pater, Piotr; Seuntjens, Jan

    2012-06-01

    Radiobiological models are essential components of modern radiotherapy. They are increasingly applied to optimize and evaluate the quality of different treatment planning modalities, and they are frequently used in designing new radiotherapy clinical trials by estimating the expected therapeutic ratio of new protocols. In radiobiology, the therapeutic ratio is estimated from the expected gain in tumour control probability (TCP) relative to the risk of normal tissue complication probability (NTCP). However, estimates of TCP/NTCP are currently based on the deterministic and simplistic linear-quadratic formalism, which has limited prediction power when applied prospectively. Given the complex and stochastic nature of the physical, chemical and biological interactions associated with spatial and temporal radiation-induced effects in living tissues, it is conjectured that methods based on Monte Carlo (MC) analysis may provide better estimates of TCP/NTCP for radiotherapy treatment planning and trial design. Indeed, over the past few decades, methods based on MC have demonstrated superior performance for accurate simulation of radiation transport, tumour growth and particle track structures; however, successful application of modelling radiobiological response and outcomes in radiotherapy is still hampered by several challenges. In this review, we provide an overview of some of the main techniques used in radiobiological modelling for radiotherapy, with a focus on the MC role as a promising computational vehicle. We highlight the current challenges, issues and future potential of the MC approach towards a comprehensive systems-based framework in radiobiological modelling for radiotherapy.

  9. Method and system for determining induction motor speed

    DOEpatents

    Parlos, Alexander G.; Bharadwaj, Raj M.

    2004-03-30

    A non-linear, semi-parametric neural network-based adaptive filter that determines the dynamic speed of a rotating rotor within an induction motor, without the explicit use of a speed sensor such as a tachometer, is disclosed. The neural network-based filter is developed using actual motor current measurements, voltage measurements, and nameplate information. The neural network-based adaptive filter is trained using an estimated speed calculator derived from the actual current and voltage measurements. The neural network-based adaptive filter uses voltage and current measurements to determine the instantaneous speed of a rotating rotor. The neural network-based adaptive filter also includes an on-line adaptation scheme that permits the filter to be readily adapted to new operating conditions during operations.

  10. Combining computer adaptive testing technology with cognitively diagnostic assessment.

    PubMed

    McGlohen, Meghan; Chang, Hua-Hua

    2008-08-01

    A major advantage of computerized adaptive testing (CAT) is that it allows the test to home in on an examinee's ability level in an interactive manner. The aim of the new area of cognitive diagnosis is to provide information about specific content areas in which an examinee needs help. The goal of this study was to combine the benefit of specific feedback from cognitively diagnostic assessment with the advantages of CAT. In this study, three approaches to combining these were investigated: (1) item selection based on the traditional ability level estimate (theta), (2) item selection based on the attribute mastery feedback provided by cognitively diagnostic assessment (alpha), and (3) item selection based on both the traditional ability level estimate (theta) and the attribute mastery feedback provided by cognitively diagnostic assessment (alpha). The results from these three approaches were compared for theta estimation accuracy, attribute mastery estimation accuracy, and item exposure control. The theta- and alpha-based condition outperformed the alpha-based condition regarding theta estimation, attribute mastery pattern estimation, and item exposure control. Both the theta-based condition and the theta- and alpha-based condition performed similarly with regard to theta estimation, attribute mastery estimation, and item exposure control, but the theta- and alpha-based condition has an additional advantage in that it uses the shadow test method, which allows the administrator to incorporate additional constraints in the item selection process, such as content balancing, item type constraints, and so forth, and also to select items on the basis of both the current theta and alpha estimates, which can be built on top of existing 3PL testing programs.
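
    Item selection based on the traditional ability estimate (approach 1) is typically maximum-information selection: pick the unadministered item with the largest Fisher information at the current theta. A minimal 3PL sketch with a randomly generated item bank:

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    q = 1.0 - p
    return a**2 * (q / p) * ((p - c) / (1.0 - c)) ** 2

def select_next_item(theta_hat, bank, administered):
    """Pick the unadministered item with maximum information at theta_hat."""
    info = np.array([item_information(theta_hat, *item) for item in bank])
    info[list(administered)] = -np.inf   # exclude already-seen items
    return int(np.argmax(info))

rng = np.random.default_rng(0)
bank = [(rng.uniform(0.8, 2.0), rng.normal(), rng.uniform(0.1, 0.25))
        for _ in range(50)]              # (a, b, c) parameters per item
print(select_next_item(theta_hat=0.3, bank=bank, administered={4, 17}))
```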

  11. Wave height estimates from pressure and velocity data at an intermediate depth in the presence of uniform currents

    NASA Astrophysics Data System (ADS)

    Basu, Biswajit

    2017-12-01

    Bounds on estimates of wave heights (valid for large amplitudes) from pressure and flow measurements at an arbitrary intermediate depth have been provided. Two-dimensional irrotational steady water waves over a flat bed with a finite depth in the presence of underlying uniform currents have been considered in the analysis. Five different upper bounds based on a combination of pressure and velocity field measurements have been derived, though only one lower bound on the wave height is available, for the cases where the current speed is greater than or less than the wave speed. This article is part of the theme issue 'Nonlinear water waves'.

  12. Estimating pathway-specific contributions to biodegradation in aquifers based on dual isotope analysis: theoretical analysis and reactive transport simulations.

    PubMed

    Centler, Florian; Heße, Falk; Thullner, Martin

    2013-09-01

    At field sites with varying redox conditions, different redox-specific microbial degradation pathways contribute to total contaminant degradation. The identification of pathway-specific contributions to total contaminant removal is of high practical relevance, yet difficult to achieve with current methods. Current stable-isotope-fractionation-based techniques focus on the identification of dominant biodegradation pathways under constant environmental conditions. We present an approach based on dual stable isotope data to estimate the individual contributions of two redox-specific pathways. We apply this approach to carbon and hydrogen isotope data obtained from reactive transport simulations of an organic contaminant plume in a two-dimensional aquifer cross section to test the applicability of the method. To take aspects typically encountered at field sites into account, additional simulations addressed the effects of transverse mixing, diffusion-induced stable-isotope fractionation, heterogeneities in the flow field, and mixing in sampling wells on isotope-based estimates for aerobic and anaerobic pathway contributions to total contaminant biodegradation. Results confirm the general applicability of the presented estimation method which is most accurate along the plume core and less accurate towards the fringe where flow paths receive contaminant mass and associated isotope signatures from the core by transverse dispersion. The presented method complements the stable-isotope-fractionation-based analysis toolbox. At field sites with varying redox conditions, it provides a means to identify the relative importance of individual, redox-specific degradation pathways. © 2013.
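
    In the simplest linearized picture, two observed isotope shifts and pathway-specific enrichment factors for two pathways give a 2x2 linear system for the pathway contributions; a sketch with hypothetical enrichment factors and observations (the paper's actual estimator is more elaborate):

```python
import numpy as np

# Hypothetical pathway-specific enrichment factors (per mil) for carbon
# and hydrogen isotopes, aerobic vs. anaerobic degradation.
eps = np.array([[-1.5, -4.0],      # carbon:   [aerobic, anaerobic]
                [-50.0, -100.0]])  # hydrogen: [aerobic, anaerobic]

# Observed isotope shifts relative to the source signature (per mil).
delta_obs = np.array([-3.0, -80.0])

# In a linearized mixing picture, delta_obs = eps @ x, where x holds the
# extent of degradation attributable to each pathway; solve the 2x2 system.
x = np.linalg.solve(eps, delta_obs)
fractions = x / x.sum()
print(f"aerobic share: {fractions[0]:.1%}, anaerobic share: {fractions[1]:.1%}")
```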

  13. Online cross-validation-based ensemble learning.

    PubMed

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
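
    A minimal sketch of the online cross-validation idea: each incoming batch first scores every candidate before any candidate trains on it, and predictions follow the current lowest-cumulative-loss candidate. The ridge learners and squared-error loss are illustrative assumptions:

```python
import numpy as np

class OnlineCVSelector:
    """Maintain several online estimators; score each on every new batch
    before training (online cross-validation), then predict with the
    lowest-cumulative-loss candidate."""

    def __init__(self, candidates):
        self.candidates = candidates            # objects with fit_batch/predict
        self.cum_loss = np.zeros(len(candidates))

    def process_batch(self, X, y):
        for i, m in enumerate(self.candidates):  # score first: honest CV
            self.cum_loss[i] += np.mean((m.predict(X) - y) ** 2)
        for m in self.candidates:                # then update each learner
            m.fit_batch(X, y)

    def predict(self, X):
        best = int(np.argmin(self.cum_loss))
        return self.candidates[best].predict(X)

class OnlineRidge:
    """Streaming ridge regression via accumulated sufficient statistics."""
    def __init__(self, dim, lam):
        self.A = lam * np.eye(dim)
        self.b = np.zeros(dim)
    def fit_batch(self, X, y):
        self.A += X.T @ X
        self.b += X.T @ y
    def predict(self, X):
        return X @ np.linalg.solve(self.A, self.b)

rng = np.random.default_rng(1)
sel = OnlineCVSelector([OnlineRidge(3, lam) for lam in (0.1, 1.0, 10.0)])
for _ in range(50):                              # stream of batches
    X = rng.normal(size=(20, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=20)
    sel.process_batch(X, y)
print("cumulative losses:", np.round(sel.cum_loss, 2))
```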

  14. A method for estimating maternal and newborn lives saved from health-related investments funded by the UK government Department for International Development using the Lives Saved Tool.

    PubMed

    Friberg, Ingrid K; Baschieri, Angela; Abbotts, Jo

    2017-11-07

    In 2010, the UK Government Department for International Development (DFID) committed through its 'Framework for results for reproductive, maternal and newborn health (RMNH)' to save 50,000 maternal lives and 250,000 newborn lives by 2015. They also committed to monitoring the performance of this portfolio of investments to demonstrate transparency and accountability. Methods currently available to directly measure lives saved are cost-, time-, and labour-intensive. The gold standard for calculating the total number of lives saved would require measuring mortality with large scale population based surveys or annual vital events surveillance. Neither is currently available in all low- and middle-income countries. Estimating the independent effect of DFID support relative to all other effects on health would also be challenging. The Lives Saved Tool (LiST) is an evidence based software for modelling the effect of changes in health intervention coverage on reproductive, maternal, newborn and child mortality. A multi-country LiST-based analysis protocol was developed to retrospectively assess the total annual number of maternal and newborn lives saved from DFID aid programming in low- and middle-income countries. Annual LiST analyses using the latest program data from DFID country offices were conducted between 2013 and 2016, estimating the annual number of maternal and neonatal lives saved across 2010-2015. For each country, independent project results were aggregated into health intervention coverage estimates, with and in the absence of DFID funding. More than 80% of reported projects were suitable for inclusion in the analysis, with 151 projects analysed in the 2016 analysis. Between 2010 and 2014, it is estimated that DFID contributed to saving the lives of 15,000 women in pregnancy and childbirth with health programming and 88,000 with family planning programming. It is estimated that DFID health programming contributed to saving 187,000 newborn lives. It is feasible to estimate the overall contribution and impact of DFID's investment in RMNH from currently available information on interventions and coverage from individual country offices. This utilization of LiST, with estimated population coverage based on DFID program inputs, can be applied to similar types of datasets to quantify programme impact. The global data were used to estimate DFID's progress against the Framework for results targets to inform future programming. The identified limitations can also be considered to inform future monitoring and evaluation program design and implementation within DFID.

  15. Effect of Binary Source Companions on the Microlensing Optical Depth Determination toward the Galactic Bulge Field

    NASA Astrophysics Data System (ADS)

    Han, Cheongho

    2005-11-01

    Currently, gravitational microlensing survey experiments toward the Galactic bulge field use two different methods of minimizing the blending effect for the accurate determination of the optical depth τ. One is measuring τ based on clump giant (CG) source stars, and the other is using "difference image analysis" (DIA) photometry to measure the unblended source flux variation. Despite the expectation that the two estimates should be the same assuming that blending is properly considered, the estimates based on CG stars systematically fall below the DIA results based on all events with source stars down to the detection limit. Prompted by the gap, we investigate the previously unconsidered effect of companion-associated events on τ determination. Although the image of a companion is blended with that of its primary star and thus not resolved, the event associated with the companion can be detected if the companion flux is highly magnified. Therefore, companions work effectively as source stars to microlensing, and thus neglecting them in the source star count could result in a wrong τ estimation. By carrying out simulations based on the assumption that companions follow the same luminosity function as primary stars, we estimate that the contribution of the companion-associated events to the total event rate is ~5f_bi% for current surveys and can reach up to ~6f_bi% for future surveys monitoring fainter stars, where f_bi is the binary frequency. Therefore, we conclude that the companion-associated events comprise a non-negligible fraction of all events. However, their contribution to the optical depth is not large enough to explain the systematic difference between the optical depth estimates based on the two different methods.

  16. Estimates of natural salinity and hydrology in a subtropical estuarine ecosystem: implications for Greater Everglades restoration

    USGS Publications Warehouse

    Marshall, Frank E.; Wingard, G. Lynn; Pitts, Patrick A.

    2014-01-01

    Disruption of the natural patterns of freshwater flow into estuarine ecosystems occurred in many locations around the world beginning in the twentieth century. To effectively restore these systems, establishing a pre-alteration perspective allows managers to develop science-based restoration targets for salinity and hydrology. This paper describes a process to develop targets based on natural hydrologic functions by coupling paleoecology and regression models using the subtropical Greater Everglades Ecosystem as an example. Paleoecological investigations characterize the circa 1900 CE (pre-alteration) salinity regime in Florida Bay based on molluscan remains in sediment cores. These paleosalinity estimates are converted into time series estimates of paleo-based salinity, stage, and flow using numeric and statistical models. Model outputs are weighted using the mean square error statistic and then combined. Results indicate that, in the absence of water management, salinity in Florida Bay would be about 3 to 9 salinity units lower than current conditions. To achieve this target, upstream freshwater levels must be about 0.25 m higher than indicated by recent observed data, with increased flow inputs to Florida Bay between 2.1 and 3.7 times existing flows. This flow deficit is comparable to the average volume of water currently being diverted from the Everglades ecosystem by water management. The products (paleo-based Florida Bay salinity and upstream hydrology) provide estimates of pre-alteration hydrology and salinity that represent target restoration conditions. This method can be applied to any estuarine ecosystem with available paleoecologic data and empirical and/or model-based hydrologic data.

  17. Optimal designs based on the maximum quasi-likelihood estimator

    PubMed Central

    Shen, Gang; Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to mis-specification in the probability distribution of the responses. PMID:28163359

  18. Outer planet probe cost estimates: First impressions

    NASA Technical Reports Server (NTRS)

    Niehoff, J.

    1974-01-01

    An examination was made of early estimates of outer planetary atmospheric probe cost by comparing the estimates with past planetary projects. Of particular interest is identification of project elements which are likely cost drivers for future probe missions. Data are divided into two parts: first, the description of a cost model developed by SAI for the Planetary Programs Office of NASA, and second, use of this model and its data base to evaluate estimates of probe costs. Several observations are offered in conclusion regarding the credibility of current estimates and specific areas of the outer planet probe concept most vulnerable to cost escalation.

  19. Lifetime Earnings Estimates for Men and Women in the United States: 1979.

    ERIC Educational Resources Information Center

    Burkhead, Dan L.

    1983-01-01

    This report presents estimates of expected lifetime earnings based on data collected in the March Current Population Survey by age, sex, and educational attainment for 1978, 1979, and 1980. The text describes the data tables and charts, methodology, and limitations of the data. The eight figures and five detailed tables present lifetime earning…

  20. Error vector magnitude based parameter estimation for digital filter back-propagation mitigating SOA distortions in 16-QAM.

    PubMed

    Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A

    2013-08-26

    We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method of parameter estimation for DFBP based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) achieved with this method shows negligible penalty compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
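
    A toy version of EVM as the figure of merit for coarse parameter search: compute the EVM against the 16-QAM grid for each candidate nonlinearity parameter and keep the minimizer. The distortion/compensation pair below is a simple power-dependent phase rotation, not an SOA model:

```python
import numpy as np

def evm_percent(rx_symbols, ref_constellation):
    """EVM: RMS error to the nearest reference point over RMS reference power."""
    ref = ref_constellation[
        np.argmin(np.abs(rx_symbols[:, None] - ref_constellation[None, :]), axis=1)]
    return 100.0 * np.sqrt(np.mean(np.abs(rx_symbols - ref) ** 2)
                           / np.mean(np.abs(ref) ** 2))

def coarse_dfbp_search(rx, ref, compensate, alphas):
    """Pick the nonlinearity parameter whose back-propagated signal
    minimizes EVM -- the figure of merit used for coarse estimation."""
    scores = [evm_percent(compensate(rx, a), ref) for a in alphas]
    return alphas[int(np.argmin(scores))]

# 16-QAM reference grid and a toy phase-distortion/compensation pair.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
qam16 = np.array([i + 1j * q for i in levels for q in levels]) / np.sqrt(10.0)
rng = np.random.default_rng(7)
tx = qam16[rng.integers(0, 16, 2000)]
rx = tx * np.exp(1j * 0.15 * np.abs(tx) ** 2)   # power-dependent phase rotation
compensate = lambda s, a: s * np.exp(-1j * a * np.abs(s) ** 2)
alpha_hat = coarse_dfbp_search(rx, qam16, compensate, np.linspace(0.0, 0.3, 31))
print(f"selected alpha: {alpha_hat:.2f}")
```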

  1. Estimated Effects of Different Alcohol Taxation and Price Policies on Health Inequalities: A Mathematical Modelling Study.

    PubMed

    Meier, Petra S; Holmes, John; Angus, Colin; Ally, Abdallah K; Meng, Yang; Brennan, Alan

    2016-02-01

    While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO "best buy" intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax increase, -3.2%; value-based tax, -2.9%; strength-based tax, -6.1%; minimum unit pricing, -7.8%) and lesser impacts among drinkers in professional/managerial occupations (for heavy drinkers: current tax increase, -1.3%; value-based tax, -1.4%; strength-based tax, +0.2%; minimum unit pricing, +0.8%). Results from the PSA give slightly greater mean effects for both the routine/manual (current tax increase, -3.6% [95% uncertainty interval (UI) -6.1%, -0.6%]; value-based tax, -3.3% [UI -5.1%, -1.7%]; strength-based tax, -7.5% [UI -13.7%, -3.9%]; minimum unit pricing, -10.3% [UI -10.3%, -7.0%]) and professional/managerial occupation groups (current tax increase, -1.8% [UI -4.7%, +1.6%]; value-based tax, -1.9% [UI -3.6%, +0.4%]; strength-based tax, -0.8% [UI -6.9%, +4.0%]; minimum unit pricing, -0.7% [UI -5.6%, +3.6%]). Impacts of price changes on moderate drinkers were small regardless of income or socioeconomic group. Analysis of uncertainty shows that the relative effectiveness of the four policies is fairly stable, although uncertainty in the absolute scale of effects exists. 
Volumetric taxation and minimum unit pricing consistently outperform increasing the current tax or adding an ad valorem tax in terms of reducing mortality among the heaviest drinkers and reducing alcohol-related health inequalities (e.g., in the routine/manual occupation group, volumetric taxation reduces deaths more than increasing the current tax in 26 out of 30 probabilistic runs, minimum unit pricing reduces deaths more than volumetric tax in 21 out of 30 runs, and minimum unit pricing reduces deaths more than increasing the current tax in 30 out of 30 runs). Study limitations include reducing model complexity by not considering a largely ineffective ban on below-tax alcohol sales, special duty rates covering only small shares of the market, and the impact of tax fraud or retailer non-compliance with minimum unit prices. Our model estimates that, compared to tax increases under the current system or introducing taxation based on product value, alcohol-content-based taxation or minimum unit pricing would lead to larger reductions in health inequalities across income groups. We also estimate that alcohol-content-based taxation and minimum unit pricing would have the largest impact on harmful drinking, with minimal effects on those drinking in moderation.
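
    At the core of such models is an elasticity calculation linking price changes to consumption changes; a deliberately minimal illustration with hypothetical numbers, ignoring the cross-price effects and risk functions the full model uses:

```python
# Minimal own-price elasticity illustration (hypothetical numbers, not the
# study's estimates): consumption response to a price change.
baseline_units = 21.0        # weekly alcohol units for a heavy drinker
own_price_elasticity = -0.5  # % consumption change per % price change
price_change_pct = 10.0      # e.g., a strength-based duty raising price 10%

consumption_change_pct = own_price_elasticity * price_change_pct
new_units = baseline_units * (1.0 + consumption_change_pct / 100.0)
print(f"consumption falls {abs(consumption_change_pct):.0f}% "
      f"to {new_units:.1f} units/week")
```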

  2. Cost-effectiveness of human papillomavirus vaccination in the United States.

    PubMed

    Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E

    2008-02-01

    We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.

  3. Reevaluation of mid-Pliocene North Atlantic sea surface temperatures

    USGS Publications Warehouse

    Robinson, Marci M.; Dowsett, Harry J.; Dwyer, Gary S.; Lawrence, Kira T.

    2008-01-01

    Multiproxy temperature estimation requires careful attention to biological, chemical, physical, temporal, and calibration differences of each proxy and paleothermometry method. We evaluated mid-Pliocene sea surface temperature (SST) estimates from multiple proxies at Deep Sea Drilling Project Holes 552A, 609B, 607, and 606, transecting the North Atlantic Drift. SST estimates derived from faunal assemblages, foraminifer Mg/Ca, and alkenone unsaturation indices showed strong agreement at Holes 552A, 607, and 606 once differences in calibration, depth, and seasonality were addressed. Abundant extinct species and/or an unrecognized productivity signal in the faunal assemblage at Hole 609B resulted in exaggerated faunal-based SST estimates but did not affect alkenone-derived or Mg/Ca–derived estimates. Multiproxy mid-Pliocene North Atlantic SST estimates corroborate previous studies documenting high-latitude mid-Pliocene warmth and refine previous faunal-based estimates affected by environmental factors other than temperature. Multiproxy investigations will aid SST estimation in high-latitude areas sensitive to climate change and currently underrepresented in SST reconstructions.

  4. Method and system for early detection of incipient faults in electric motors

    DOEpatents

    Parlos, Alexander G; Kim, Kyusung

    2003-07-08

    A method and system for early detection of incipient faults in an electric motor are disclosed. First, current and voltage values for one or more phases of the electric motor are measured during motor operations. A set of current predictions is then determined via a neural network-based current predictor based on the measured voltage values and an estimate of motor speed values of the electric motor. Next, a set of residuals is generated by combining the set of current predictions with the measured current values. A set of fault indicators is subsequently computed from the set of residuals and the measured current values. Finally, a determination is made as to whether or not there is an incipient electrical, mechanical, and/or electromechanical fault occurring based on the comparison result of the set of fault indicators and a set of predetermined baseline values.
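
    A rough sketch of the residual-generation and thresholding steps described above; the neural-network predictor is stubbed out, and the indicator and baseline are illustrative choices rather than the patent's definitions:

        import numpy as np

        # Sketch of the residual and fault-indicator computation. `predicted`
        # stands in for the neural-network current predictor's output; the
        # indicator and baseline are illustrative choices, not the patent's.

        def fault_indicator(predicted, measured):
            """Normalized residual energy: RMS(residual) / RMS(measured)."""
            res = predicted - measured
            return np.sqrt(np.mean(res**2)) / (np.sqrt(np.mean(measured**2)) + 1e-12)

        def incipient_fault(indicator, baseline, margin=3.0):
            """Flag a fault when the indicator exceeds the baseline by a margin."""
            return indicator > margin * baseline

        t = np.linspace(0, 20*np.pi, 2000)
        measured = np.sin(t) + 0.05*np.random.randn(t.size)  # healthy phase current
        predicted = np.sin(t)                                # stand-in predictor output
        print(incipient_fault(fault_indicator(predicted, measured), baseline=0.05))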

  5. Use of inequality constrained least squares estimation in small area estimation

    NASA Astrophysics Data System (ADS)

    Abeygunawardana, R. A. B.; Wickremasinghe, W. N.

    2017-05-01

Traditional surveys provide estimates that are based only on the sample observations collected for the population characteristic of interest. However, these estimates may have unacceptably large variance for certain domains. Small Area Estimation (SAE) deals with determining precise and accurate estimates for population characteristics of interest for such domains. SAE usually uses least squares or maximum likelihood procedures incorporating prior information and current survey data. Many available methods in SAE use constraints in equality form; however, there are practical situations where certain inequality restrictions on model parameters are more realistic. When the fitting method is least squares, such restrictions lead to Inequality Constrained Least Squares (ICLS) estimates. In this study the ICLS estimation procedure is applied to several proposed small area estimators.
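
    For the special case of box (bound) constraints, ICLS estimates can be computed directly with off-the-shelf solvers. A minimal sketch on simulated data, assuming the parameters are known a priori to lie in [0, 1]:

        import numpy as np
        from scipy.optimize import lsq_linear

        # Minimal ICLS example for the special case of box (bound) constraints:
        # the parameters are assumed known a priori to lie in [0, 1] (e.g.,
        # proportions). Data are simulated purely for illustration.

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 3))                  # design matrix
        beta_true = np.array([0.2, 0.7, 0.9])
        y = X @ beta_true + 0.1 * rng.normal(size=50)

        ols = np.linalg.lstsq(X, y, rcond=None)[0]    # unconstrained least squares
        icls = lsq_linear(X, y, bounds=(0.0, 1.0)).x  # inequality-constrained LS
        print("OLS: ", ols.round(3))
        print("ICLS:", icls.round(3))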

  6. Quantification of Microbial Phenotypes

    PubMed Central

    Martínez, Verónica S.; Krömer, Jens O.

    2016-01-01

Metabolite profiling technologies have improved to generate close-to-quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and discuss the current challenges in generating fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and describe the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other and (2) there is a need to take Gibbs energy errors into account. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
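
    The correction from standard to physiological conditions mentioned above reduces to dG = dG°' + RT ln Q. A minimal sketch with illustrative, not measured, values:

        import math

        # The correction from standard to physiological conditions reduces to
        # dG = dG0' + R*T*ln(Q). Values below are illustrative, not measured data.

        R = 8.314e-3       # gas constant, kJ mol^-1 K^-1
        T = 310.15         # physiological temperature, K

        def gibbs_energy(dG0_prime, reaction_quotient):
            return dG0_prime + R * T * math.log(reaction_quotient)

        # Hypothetical reaction A -> B with dG0' = +5 kJ/mol but [B]/[A] = 1e-3:
        print(f"{gibbs_energy(5.0, 1e-3):.1f} kJ/mol")   # ~ -12.8: feasible in vivo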

  7. Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.

    PubMed

    Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona

    2016-05-31

Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Such an approach allows building-independent estimation and lower computing power on the mobile side. Four robustified algorithms are presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms.
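
    One plausible reading of a robust weighted centroid for the vertical coordinate (the weighting and trimming choices below are illustrative, not the paper's exact formulation):

        import numpy as np

        # One plausible robust weighted centroid for the vertical coordinate:
        # weight APs by received power and trim the weakest (outlier-prone) ones.
        # Weighting/trimming choices here are illustrative, not the paper's.

        def robust_weighted_centroid_z(ap_heights, rss_dbm, trim=0.2):
            rss = np.asarray(rss_dbm, dtype=float)
            z = np.asarray(ap_heights, dtype=float)
            w = 10 ** (rss / 20.0)                        # stronger APs weigh more
            keep = np.argsort(rss)[int(len(rss)*trim):]   # drop weakest fraction
            return np.sum(w[keep] * z[keep]) / np.sum(w[keep])

        ap_heights = [3.0, 3.0, 6.0, 6.0, 9.0]     # AP heights (m), one per floor
        rss = [-48, -52, -61, -80, -92]            # MS hears the low APs loudest
        print(robust_weighted_centroid_z(ap_heights, rss))  # ~3.4 m -> first floor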

  8. Current status of Marek’s disease in the United States & worldwide based on a questionnaire survey

    USDA-ARS?s Scientific Manuscript database

    A questionnaire was widely distributed in 2011 to estimate the global prevalence of Marek’s disease (MD) and gain a better understanding of current control strategies and future concerns. A total of 112 questionnaires were returned representing 116 countries from sources including national branch s...

  9. 77 FR 74824 - Notice of Revision of a Currently Approved Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-18

    ... the Commodity Credit Corporation (CCC) to request a revision for a currently approved information collection in support of the CCC's Dairy Export Incentive Program (DEIP) based on re-estimates. Although the... applying for a CCC bonus (section 1494.501), and (4) documentation evidencing export to support payment of...

  10. From Models to Measurements: Comparing Downed Dead Wood Carbon Stock Estimates in the U.S. Forest Inventory

    PubMed Central

    Domke, Grant M.; Woodall, Christopher W.; Walters, Brian F.; Smith, James E.

    2013-01-01

    The inventory and monitoring of coarse woody debris (CWD) carbon (C) stocks is an essential component of any comprehensive National Greenhouse Gas Inventory (NGHGI). Due to the expense and difficulty associated with conducting field inventories of CWD pools, CWD C stocks are often modeled as a function of more commonly measured stand attributes such as live tree C density. In order to assess potential benefits of adopting a field-based inventory of CWD C stocks in lieu of the current model-based approach, a national inventory of downed dead wood C across the U.S. was compared to estimates calculated from models associated with the U.S.’s NGHGI and used in the USDA Forest Service, Forest Inventory and Analysis program. The model-based population estimate of C stocks for CWD (i.e., pieces and slash piles) in the conterminous U.S. was 9 percent (145.1 Tg) greater than the field-based estimate. The relatively small absolute difference was driven by contrasting results for each CWD component. The model-based population estimate of C stocks from CWD pieces was 17 percent (230.3 Tg) greater than the field-based estimate, while the model-based estimate of C stocks from CWD slash piles was 27 percent (85.2 Tg) smaller than the field-based estimate. In general, models overestimated the C density per-unit-area from slash piles early in stand development and underestimated the C density from CWD pieces in young stands. This resulted in significant differences in CWD C stocks by region and ownership. The disparity in estimates across spatial scales illustrates the complexity in estimating CWD C in a NGHGI. Based on the results of this study, it is suggested that the U.S. adopt field-based estimates of CWD C stocks as a component of its NGHGI to both reduce the uncertainty within the inventory and improve the sensitivity to potential management and climate change events. PMID:23544112

  11. From models to measurements: comparing downed dead wood carbon stock estimates in the U.S. forest inventory.

    PubMed

    Domke, Grant M; Woodall, Christopher W; Walters, Brian F; Smith, James E

    2013-01-01

    The inventory and monitoring of coarse woody debris (CWD) carbon (C) stocks is an essential component of any comprehensive National Greenhouse Gas Inventory (NGHGI). Due to the expense and difficulty associated with conducting field inventories of CWD pools, CWD C stocks are often modeled as a function of more commonly measured stand attributes such as live tree C density. In order to assess potential benefits of adopting a field-based inventory of CWD C stocks in lieu of the current model-based approach, a national inventory of downed dead wood C across the U.S. was compared to estimates calculated from models associated with the U.S.'s NGHGI and used in the USDA Forest Service, Forest Inventory and Analysis program. The model-based population estimate of C stocks for CWD (i.e., pieces and slash piles) in the conterminous U.S. was 9 percent (145.1 Tg) greater than the field-based estimate. The relatively small absolute difference was driven by contrasting results for each CWD component. The model-based population estimate of C stocks from CWD pieces was 17 percent (230.3 Tg) greater than the field-based estimate, while the model-based estimate of C stocks from CWD slash piles was 27 percent (85.2 Tg) smaller than the field-based estimate. In general, models overestimated the C density per-unit-area from slash piles early in stand development and underestimated the C density from CWD pieces in young stands. This resulted in significant differences in CWD C stocks by region and ownership. The disparity in estimates across spatial scales illustrates the complexity in estimating CWD C in a NGHGI. Based on the results of this study, it is suggested that the U.S. adopt field-based estimates of CWD C stocks as a component of its NGHGI to both reduce the uncertainty within the inventory and improve the sensitivity to potential management and climate change events.

  12. Enhancing the USDA Global Crop Assessment Decision Support System Using SMAP Soil Moisture Data

    NASA Astrophysics Data System (ADS)

    Bolten, J. D.; Mladenova, I. E.; Crow, W. T.; Reynolds, C. A.

    2016-12-01

The Foreign Agricultural Service (FAS) is a subdivision of the U.S. Department of Agriculture (USDA) charged with providing information on current and expected crop supply and demand estimates. Knowledge of the amount of water in the root zone is an essential source of information for crop analysts, as it governs crop development and growth, which in turn determine end-of-season yields. USDA FAS currently relies on root zone soil moisture (RZSM) estimates generated using the modified two-layer Palmer Model (PM). The PM is a simple water-balance hydrologic model driven by daily precipitation observations and minimum and maximum temperature data. These forcing data are based on ground meteorological station measurements from the World Meteorological Organization (WMO) and gridded weather data from the former U.S. Air Force Weather Agency (AFWA), currently called the U.S. Air Force 557th Weather Wing. The PM was extended by adding a data assimilation (DA) unit that provides the opportunity to routinely ingest satellite-based soil moisture observations. This allows us to adjust for precipitation-related inaccuracies and enhance the quality of the PM soil moisture estimates. The current operational DA system is based on a 1-D Ensemble Kalman Filter approach and relies on observations obtained from the Soil Moisture and Ocean Salinity (SMOS) mission. Our talk will demonstrate the value of assimilating two satellite products (i.e., a passive and an active product) and discuss work that is being done in preparation for ingesting soil moisture observations from the Soil Moisture Active Passive (SMAP) mission.
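
    A minimal sketch of a 1-D ensemble Kalman filter update of a modeled soil-moisture state with a satellite retrieval, in the spirit of the DA unit described above; all numbers are synthetic:

        import numpy as np

        # Minimal 1-D ensemble Kalman filter update of a modeled soil-moisture
        # state with a satellite retrieval, in the spirit of the DA unit above.
        # All numbers are synthetic.

        rng = np.random.default_rng(1)
        ensemble = rng.normal(0.25, 0.03, size=100)  # model RZSM spread (m3/m3)
        obs, obs_err = 0.31, 0.04                    # retrieval and its error std

        var_f = np.var(ensemble, ddof=1)             # forecast (ensemble) variance
        K = var_f / (var_f + obs_err**2)             # scalar Kalman gain

        perturbed = obs + rng.normal(0.0, obs_err, size=ensemble.size)
        analysis = ensemble + K * (perturbed - ensemble)  # perturbed-obs EnKF update
        print(ensemble.mean(), analysis.mean())      # mean pulled toward retrieval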

  13. Comparison of Remote Sensing and Fixed-Site Monitoring Approaches for Examining Air Pollution and Health in a National Study Population

    NASA Technical Reports Server (NTRS)

Prud'homme, Genevieve; Dobbin, Nina A.; Sun, Liu; Burnett, Richard T.; Martin, Randall V.; Davidson, Andrew; Cakmak, Sabit; Villeneuve, Paul J.; Lamsal, Lok N.; van Donkelaar, Aaron

    2013-01-01

Satellite remote sensing (RS) has emerged as a cutting-edge approach for estimating ground-level ambient air pollution. Previous studies have reported a high correlation between ground-level PM2.5 and NO2 estimated by RS and measurements collected at regulatory monitoring sites. The current study examined associations between air pollution and adverse respiratory and allergic health outcomes using multi-year averages of NO2 and PM2.5 from RS and from regulatory monitoring. RS estimates were derived using satellite measurements from the OMI, MODIS, and MISR instruments. Regulatory monitoring data were obtained from Canada's National Air Pollution Surveillance Network. Self-reported prevalence of doctor-diagnosed asthma, current asthma, allergies, and chronic bronchitis were obtained from the Canadian Community Health Survey (a national sample of individuals 12 years of age and older). Multi-year ambient pollutant averages were assigned to each study participant based on their six-digit postal code at the time of the health survey, and were used as a marker for long-term exposure to air pollution. RS-derived estimates of NO2 and PM2.5 were associated with 6-10% increases in respiratory and allergic health outcomes per interquartile range (3.97 µg m-3 for PM2.5 and 1.03 ppb for NO2) among adults (aged 20-64) in the national study population. Risk estimates for air pollution and respiratory/allergic health outcomes based on RS were similar to risk estimates based on regulatory monitoring for areas where regulatory monitoring data were available (within 40 km of a regulatory monitoring station). RS-derived estimates of air pollution were also associated with adverse health outcomes among participants residing outside the catchment area of the regulatory monitoring network (p < 0.05).

  14. Software for the grouped optimal aggregation technique

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix, based on historical acreages, provides the link between incomplete direct acreage estimates and the total, current acreage estimate.

  15. Sliding Mode Observer-Based Current Sensor Fault Reconstruction and Unknown Load Disturbance Estimation for PMSM Driven System.

    PubMed

    Zhao, Kaihui; Li, Peng; Zhang, Changfan; Li, Xiangfei; He, Jing; Lin, Yuliang

    2017-12-06

This paper proposes a new scheme for reconstructing current sensor faults and estimating unknown load disturbance for a permanent magnet synchronous motor (PMSM)-driven system. First, the original PMSM system is transformed into two subsystems; the first subsystem has unknown system load disturbances, which are unrelated to sensor faults, and the second subsystem has sensor faults, but is free from unknown load disturbances. By introducing a new state variable, the subsystem with sensor faults can be augmented so that its sensor faults are transformed into actuator faults. Second, two sliding mode observers (SMOs) are designed: the unknown load disturbance is estimated by the first SMO in the subsystem with unknown load disturbance, and the sensor faults are reconstructed using the second SMO in the augmented subsystem with sensor faults. The gains of the proposed SMOs and their stability analysis are developed via the solution of a linear matrix inequality (LMI). Finally, the effectiveness of the proposed scheme was verified by simulations and experiments. The results demonstrate that the proposed scheme can reconstruct current sensor faults and estimate unknown load disturbance for the PMSM-driven system.

  16. Price estimates for the production of wafers from silicon ingots

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1982-01-01

The status of the inside-diameter (ID) sawing, multiblade sawing (MBS), and fixed-abrasive slicing technique (FAST) processes is discussed with respect to the estimated price each process adds to the price of the final photovoltaic module. The expected improvements in each process, based on knowledge of the current level of technology, are projected over the next two to five years, and the expected add-on prices in 1983 and 1986 are estimated.

  17. Current Pressure Transducer Application of Model-based Prognostics Using Steady State Conditions

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher; Daigle, Matthew J.

    2014-01-01

    Prognostics is the process of predicting a system's future states, health degradation/wear, and remaining useful life (RUL). This information plays an important role in preventing failure, reducing downtime, scheduling maintenance, and improving system utility. Prognostics relies heavily on wear estimation. In some components, the sensors used to estimate wear may not be fast enough to capture brief transient states that are indicative of wear. For this reason it is beneficial to be capable of detecting and estimating the extent of component wear using steady-state measurements. This paper details a method for estimating component wear using steady-state measurements, describes how this is used to predict future states, and presents a case study of a current/pressure (I/P) Transducer. I/P Transducer nominal and off-nominal behaviors are characterized using a physics-based model, and validated against expected and observed component behavior. This model is used to map observed steady-state responses to corresponding fault parameter values in the form of a lookup table. This method was chosen because of its fast, efficient nature, and its ability to be applied to both linear and non-linear systems. Using measurements of the steady state output, and the lookup table, wear is estimated. A regression is used to estimate the wear propagation parameter and characterize the damage progression function, which are used to predict future states and the remaining useful life of the system.
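
    A sketch of the lookup-table and regression steps, with an invented table and measurements; the paper derives its table from a physics-based transducer model rather than the toy values used here:

        import numpy as np

        # Sketch of the lookup-table and wear-regression steps. The table and the
        # measurements are invented; the paper derives its table from a physics-
        # based I/P transducer model rather than these toy values.

        steady_output = np.array([9.0, 8.6, 8.2, 7.8, 7.4])      # steady output (psi)
        fault_param   = np.array([0.00, 0.05, 0.10, 0.15, 0.20]) # corresponding wear

        def wear_from_output(y):
            # np.interp needs increasing x, so reverse the (monotone) table
            return np.interp(y, steady_output[::-1], fault_param[::-1])

        t = np.array([0.0, 100.0, 200.0, 300.0])         # operating hours
        y_meas = np.array([8.95, 8.75, 8.55, 8.35])      # observed steady states
        wear = wear_from_output(y_meas)

        rate = np.polyfit(t, wear, 1)[0]                 # linear wear propagation
        rul = (0.20 - wear[-1]) / rate                   # hours to the wear limit
        print(f"wear rate {rate:.2e} per hr, RUL ~ {rul:.0f} hr")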

  18. Counting glomeruli and podocytes: rationale and methodologies

    PubMed Central

    Puelles, Victor G.; Bertram, John F.

    2015-01-01

Purpose of review There is currently much interest in the numbers of both glomeruli and podocytes. This interest stems from greater understanding of the effects of suboptimal fetal events on nephron endowment, the associations between low nephron number and chronic cardiovascular and kidney disease in adults, and the emergence of the podocyte depletion hypothesis. Recent findings Obtaining accurate and precise estimates of glomerular and podocyte number has proven surprisingly difficult. When whole kidneys or large tissue samples are available, design-based stereological methods are considered gold-standard because they are based on principles that negate systematic bias. However, these methods are often tedious and time-consuming, and oftentimes inapplicable when dealing with small samples such as biopsies. Therefore, novel methods suitable for small tissue samples, and innovative approaches to facilitate high-throughput measurements, such as magnetic resonance imaging (MRI) to estimate glomerular number and flow cytometry to estimate podocyte number, have recently been described. Summary This review describes current gold-standard methods for estimating glomerular and podocyte number, as well as methods developed in the past 3 years. We are now better placed than ever before to accurately and precisely estimate glomerular and podocyte number, and to examine relationships between these measurements and kidney health and disease. PMID:25887899

  19. [IR spectral-analysis-based range estimation for an object with small temperature difference from background].

    PubMed

    Fu, Xiao-Ning; Wang, Jie; Yang, Lin

    2013-01-01

Estimating the distance of an object from the transmission characteristics of infrared radiation is a typical passive ranging technology and a current hotspot in electro-optic countermeasures. Because no energy is transmitted during detection, this ranging technology significantly enhances the penetration capability and infrared concealment of missiles or unmanned aerial vehicles. To overcome the shortcomings of existing passive ranging systems in ranging an oncoming target with a small temperature difference from the background, an improved distance estimation scheme is proposed. This article begins by introducing the concept of the signal transfer function, makes clear the working curve of the current algorithm, and points out that the estimated distance is not unique due to the inherent nonlinearity of the working curve. A new distance calculation algorithm was obtained through a nonlinear correction technique: a ranging formula that uses sensing information at 3-5 and 8-12 µm combined with background temperature and field meteorological conditions. Our study has shown that the ranging error can be kept around the 10% level when the apparent temperature difference between target and background is within +/- 5 K and the error in estimating background temperature is no more than +/- 15 K.

  20. Colorectal Cancer Deaths Attributable to Nonuse of Screening in the United States

    PubMed Central

    Meester, Reinier G.S.; Doubeni, Chyke A.; Lansdorp-Vogelaar, Iris; Goede, S.L.; Levin, Theodore R.; Quinn, Virginia P.; van Ballegooijen, Marjolein; Corley, Douglas A.; Zauber, Ann G.

    2015-01-01

    Purpose Screening is a major contributor to colorectal cancer (CRC) mortality reductions in the U.S., but is underutilized. We estimated the fraction of CRC deaths attributable to nonuse of screening to demonstrate the potential benefits from targeted interventions. Methods The established MISCAN-colon microsimulation model was used to estimate the population attributable fraction (PAF) in people aged ≥50 years. The model incorporates long-term patterns and effects of screening by age and type of screening test. PAF for 2010 was estimated using currently available data on screening uptake; PAF was also projected assuming constant future screening rates to incorporate lagged effects from past increases in screening uptake. We also computed PAF using Levin's formula to gauge how this simpler approach differs from the model-based approach. Results There were an estimated 51,500 CRC deaths in 2010, about 63% (N∼32,200) of which were attributable to non-screening. The PAF decreases slightly to 58% in 2020. Levin's approach yielded a considerably more conservative PAF of 46% (N∼23,600) for 2010. Conclusions The majority of current U.S. CRC deaths are attributable to non-screening. This underscores the potential benefits of increasing screening uptake in the population. Traditional methods of estimating PAF underestimated screening effects compared with model-based approaches. PMID:25721748
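
    Levin's formula, referenced above as the simpler alternative to the model-based PAF, is a one-liner; the prevalence and relative risk below are placeholders, not the study's inputs:

        # Levin's classical formula for the population attributable fraction,
        # the simpler alternative contrasted with the model-based estimate above.
        # p: prevalence of exposure (non-use of screening); rr: relative risk of
        # CRC death among the unscreened. Values are placeholders, not the study's.

        def levin_paf(p, rr):
            return p * (rr - 1.0) / (1.0 + p * (rr - 1.0))

        print(levin_paf(p=0.40, rr=3.0))   # ~0.44 -> 44% of deaths attributable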

  1. Establishment of design space for high current gain in III-N hot electron transistors

    NASA Astrophysics Data System (ADS)

    Gupta, Geetak; Ahmadi, Elaheh; Suntrup, Donald J., III; Mishra, Umesh K.

    2018-01-01

This paper establishes the design space of III-N hot electron transistors (HETs) for high current gain by designing and fabricating HETs with scaled base thickness. The device structure consists of GaN-based emitter, base and collector regions where emitter and collector barriers are implemented using AlN and InGaN layers, respectively, as polarization-dipoles. Electrons tunnel through the AlN layer to be injected into the base at a high energy where they travel in a quasi-ballistic manner before being collected. Current gain increases from 1 to 3.5 when base thickness is reduced from 7 to 4 nm. The extracted mean free path (λ_mfp) is 5.8 nm at an estimated injection energy of 1.5 eV.
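
    A back-of-envelope check of the mean-free-path extraction is possible from the two quoted (thickness, gain) points, assuming the base transport factor decays exponentially with thickness; this two-point estimate only approximates the paper's fitted 5.8 nm:

        import math

        # Back-of-envelope extraction of the base mean free path from the two
        # quoted (thickness, gain) points, assuming the base transport factor
        # alpha = beta/(1 + beta) decays as exp(-t/lambda). This two-point
        # estimate only approximates the paper's fitted value of 5.8 nm.

        def alpha(beta):                 # common-base transport factor from gain
            return beta / (1.0 + beta)

        t1, beta1 = 7e-9, 1.0            # 7 nm base -> gain 1
        t2, beta2 = 4e-9, 3.5            # 4 nm base -> gain 3.5
        lam = (t1 - t2) / math.log(alpha(beta2) / alpha(beta1))
        print(f"lambda_mfp ~ {lam*1e9:.1f} nm")  # ~6.8 nm vs. 5.8 nm from full fit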

  2. Refinement of current monitoring methodology for electroosmotic flow assessment under low ionic strength conditions

    PubMed Central

    Saucedo-Espinosa, Mario A.; Lapizco-Encinas, Blanca H.

    2016-01-01

Current monitoring is a well-established technique for the characterization of electroosmotic (EO) flow in microfluidic devices. This method relies on monitoring the time response of the electric current when a test buffer solution is displaced by an auxiliary solution using EO flow. In this scheme, each solution has a different ionic concentration (and electric conductivity). The difference in the ionic concentration of the two solutions defines the dynamic time response of the electric current and, hence, the current signal to be measured: larger concentration differences result in larger measurable signals. A small concentration difference is needed, however, to avoid dispersion at the interface between the two solutions, which can result in undesired pressure-driven flow that conflicts with the EO flow. Additional challenges arise as the conductivity of the test solution decreases, leading to a reduced electric current signal that may be masked by noise during the measuring process, making accurate estimation of the EO mobility difficult. This contribution presents a new scheme for current monitoring that employs multiple channels arranged in parallel, producing an increase in the signal-to-noise ratio of the electric current to be measured and increasing the estimation accuracy. The use of this parallel approach is particularly useful in the estimation of the EO mobility in systems where low-conductivity media are required, such as insulator-based dielectrophoresis devices. PMID:27375813

  3. Preliminary Upper Estimate of Peak Currents in Transcranial Magnetic Stimulation at Distant Locations from a TMS Coil

    PubMed Central

    Makarov, Sergey N.; Yanamadala, Janakinadh; Piazza, Matthew W.; Helderman, Alex M.; Thang, Niang S.; Burnham, Edward H.; Pascual-Leone, Alvaro

    2016-01-01

    Goals Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of the present study is to test and justify a simple analytical model known previously, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. Methods We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100,000 observation points, and two distinct pulse rise times, thus providing a representative number of different data sets for comparison, while also using other numerical data. Results Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. Conclusion The simple analytical model tested in the present study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. Significance At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women. PMID:26685221

  4. Trajectory-Based Takeoff Time Predictions Applied to Tactical Departure Scheduling: Concept Description, System Design, and Initial Observations

    NASA Technical Reports Server (NTRS)

    Engelland, Shawn A.; Capps, Alan

    2011-01-01

Current aircraft departure release times are based on manual estimates of aircraft takeoff times. Uncertainty in takeoff time estimates may result in missed opportunities to merge into constrained en route streams and lead to lost throughput. However, technology exists to improve takeoff time estimates by using the aircraft surface trajectory predictions that enable air traffic control tower (ATCT) decision support tools. NASA's Precision Departure Release Capability (PDRC) is designed to use automated surface trajectory-based takeoff time estimates to improve en route tactical departure scheduling. This is accomplished by integrating an ATCT decision support tool with an en route tactical departure scheduling decision support tool. The PDRC concept and prototype software have been developed, and an initial test was completed at air traffic control facilities in Dallas/Fort Worth. This paper describes the PDRC operational concept, system design, and initial observations.

  5. A feature-based inference model of numerical estimation: the split-seed effect.

    PubMed

    Murray, Kyle B; Brown, Norman R

    2009-07-01

    Prior research has identified two modes of quantitative estimation: numerical retrieval and ordinal conversion. In this paper we introduce a third mode, which operates by a feature-based inference process. In contrast to prior research, the results of three experiments demonstrate that people estimate automobile prices by combining metric information associated with two critical features: product class and brand status. In addition, Experiments 2 and 3 demonstrated that when participants are seeded with the actual current base price of one of the to-be-estimated vehicles, they respond by revising the general metric and splitting the information carried by the seed between the two critical features. As a result, the degree of post-seeding revision is directly related to the number of these features that the seed and the transfer items have in common. The paper concludes with a general discussion of the practical and theoretical implications of our findings.

  6. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    NASA Astrophysics Data System (ADS)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an extended Kalman filter (EKF). Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.

  7. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target is changing with time and that the received target image has each of its pixels disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum likelihood based image tracking technique described in this paper is a closed-loop structure capable of providing iterative updates of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.

  8. Entropy-based adaptive attitude estimation

    NASA Astrophysics Data System (ADS)

    Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.

    2018-03-01

    Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteed assurance of positive definiteness for the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address the abovementioned drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms of the extended Kalman filter and cubature Kalman filter for attitude estimation of a low earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by means of comprehensive sensitivity analysis on the system and environmental parameters by using extensive independent Monte Carlo simulations.

  9. Driving range estimation for electric vehicles based on driving condition identification and forecast

    NASA Astrophysics Data System (ADS)

    Pan, Chaofeng; Dai, Wei; Chen, Liao; Chen, Long; Wang, Limei

    2017-10-01

With serious environmental pollution in our cities combined with the ongoing depletion of oil resources, electric vehicles are becoming highly favored as a means of transport, not only for their low noise but also for their high energy efficiency and zero emissions. The power battery serves as the energy source of electric vehicles, but it still has shortcomings, notably low energy density, high cost, and short cycle life, which result in limited mileage compared with conventional passenger vehicles. Vehicle energy consumption rates differ greatly under different environments and driving conditions, and the estimation error of current driving range methods is relatively large because the effects of environmental temperature and driving conditions are not considered. The development of an accurate driving range estimation method will therefore have a great impact on electric vehicles. A new driving range estimation model based on the combination of driving cycle identification and prediction is proposed and investigated. This model can effectively reduce mileage estimation errors and has good convergence with added robustness. First, driving cycles are identified using kernel principal component feature parameters and the fuzzy C-means clustering algorithm. Second, a fuzzy rule relating the characteristic parameters to energy consumption is established in the MATLAB/Simulink environment. Furthermore, a Markov algorithm and a BP (back propagation) neural network are used to predict future driving conditions and improve the accuracy of the remaining range estimation. Finally, the driving range estimation method is evaluated under the ECE 15 cycle on a rotary drum test bench, and the experimental results are compared with the estimates. The results show that the proposed method can not only estimate the remaining mileage but also suppress fluctuation of the residual range under different driving conditions.
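
    Stripped of the identification and forecasting machinery, the remaining-range bookkeeping is a ratio of remaining usable energy to a condition-weighted consumption rate. A minimal sketch with placeholder rates standing in for the fuzzy-rule outputs:

        # Core bookkeeping behind remaining-range estimation: remaining usable
        # battery energy divided by a consumption rate weighted over the
        # identified/predicted driving conditions. The rates are placeholders
        # standing in for the model's fuzzy-rule outputs.

        consumption_wh_per_km = {"urban": 180.0, "suburban": 145.0, "highway": 165.0}

        def remaining_range_km(soc, capacity_wh, predicted_cycles):
            """soc in [0, 1]; predicted_cycles: list of (condition, share) pairs."""
            rate = sum(consumption_wh_per_km[c] * s for c, s in predicted_cycles)
            return soc * capacity_wh / rate

        cycles = [("urban", 0.5), ("highway", 0.5)]  # forecast mix of conditions
        print(remaining_range_km(soc=0.6, capacity_wh=30_000, predicted_cycles=cycles))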

  10. Near Real-time GNSS-based Ionospheric Model using Expanded Kriging in the East Asia Region

    NASA Astrophysics Data System (ADS)

    Choi, P. H.; Bang, E.; Lee, J.

    2016-12-01

Many applications which utilize radio waves (e.g., navigation, communications, and radio sciences) are influenced by the ionosphere. The technology to provide global ionospheric maps (GIM) which show ionospheric Total Electron Content (TEC) has progressed by processing GNSS data. However, the GIMs have limited spatial resolution (e.g., 2.5° in latitude and 5° in longitude), because they are generated using globally distributed and thus relatively sparse GNSS reference station networks. This study presents a near real-time, high-spatial-resolution TEC model over East Asia that uses ionospheric observables from both International GNSS Service (IGS) and local GNSS networks together with an expanded kriging method. New signals from multiple constellations (e.g., GPS L5, Galileo E5) were also used to generate high-precision TEC estimates. The newly proposed estimation method is based on the universal kriging interpolation technique, but integrates TEC data from previous epochs with those from the current epoch to improve estimation performance by increasing ionospheric observability. To propagate previous measurements to the current epoch, we implemented a Kalman filter whose dynamic model was derived from a first-order Gauss-Markov process that characterizes temporal ionospheric changes under nominal ionospheric conditions. Along with the TEC estimates at grid points, the method generates confidence bounds on the estimates from the resulting estimation covariance. We also suggest classifying the confidence bounds into several categories to allow users to recognize the quality levels of TEC estimates according to the requirements of their applications. This paper examines the performance of the proposed method by obtaining estimation results for both nominal and disturbed ionospheric conditions and comparing these results to those provided by the GIM of the NASA Jet Propulsion Laboratory. In addition, the estimation results based on the expanded kriging method are compared to results from the universal kriging method for both nominal and disturbed ionospheric conditions.
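
    A minimal sketch of the first-order Gauss-Markov time update inside such a Kalman filter; the correlation time and steady-state sigma are illustrative, not fitted values:

        import math

        # Sketch of the first-order Gauss-Markov time update inside such a filter:
        # x_k = phi * x_{k-1} + w, phi = exp(-dt/tau), with process noise chosen so
        # the steady-state variance is sigma^2. In practice the state would be the
        # deviation from a background TEC field; tau and sigma are illustrative.

        def gm_predict(x, P, dt, tau=1800.0, sigma=3.0):
            phi = math.exp(-dt / tau)
            Q = sigma**2 * (1.0 - phi**2)     # keeps variance bounded at sigma^2
            return phi * x, phi**2 * P + Q

        def kf_update(x, P, z, R):
            K = P / (P + R)                   # scalar Kalman gain
            return x + K * (z - x), (1.0 - K) * P

        x, P = 20.0, 9.0                      # TEC state (TECU) and variance
        x, P = gm_predict(x, P, dt=30.0)      # propagate 30 s to the current epoch
        x, P = kf_update(x, P, z=22.0, R=4.0) # fuse a new TEC observation
        print(x, P)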

  11. A Web-Based System for Bayesian Benchmark Dose Estimation.

    PubMed

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.

  12. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    PubMed Central

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

Meta-analyses are typically used to estimate the overall (mean) effect for an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
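
    Two of the estimators discussed above are easy to state concretely: the DerSimonian-Laird default and the recommended Paule-Mandel estimator. A sketch on toy data:

        import numpy as np
        from scipy.optimize import brentq

        # Sketches of two estimators discussed above: the DerSimonian-Laird (DL)
        # default and the Paule-Mandel (PM) estimator. y holds study effect
        # estimates, v their within-study variances; the data are toys.

        y = np.array([0.10, 0.30, 0.35, 0.65, 0.45])
        v = np.array([0.030, 0.020, 0.050, 0.010, 0.040])

        def tau2_dl(y, v):
            w = 1.0 / v
            yb = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - yb) ** 2)
            c = np.sum(w) - np.sum(w**2) / np.sum(w)
            return max(0.0, (Q - (len(y) - 1)) / c)

        def tau2_pm(y, v):
            def f(t2):                 # generalised Q equals its expectation k-1
                w = 1.0 / (v + t2)
                yb = np.sum(w * y) / np.sum(w)
                return np.sum(w * (y - yb) ** 2) - (len(y) - 1)
            return 0.0 if f(0.0) <= 0.0 else brentq(f, 0.0, 10.0)

        print(tau2_dl(y, v), tau2_pm(y, v))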

  13. Mean of Microaccelerations Estimate in the Small Spacecraft Internal Environment with the Use of Fuzzy Sets

    NASA Astrophysics Data System (ADS)

    Sedelnikov, A. V.

    2018-05-01

Parameters of the small spacecraft's rotary motion around its center of mass, and the resulting microaccelerations, are assessed using measurements of current from silicon photocells. A problem of interpreting ambiguous telemetry data arises because the current from two opposite sides of the small spacecraft is significant. A means of removing this uncertainty, based on fuzzy sets, is considered; the normality condition of the direction cosines is proposed as the membership function. An example of uncertainty removal for a prototype of the Aist small spacecraft is given. The proposed approach can significantly increase the accuracy of microacceleration estimates when measurements of current from silicon photocells are used.

  14. Estimates of ground-water recharge, base flow, and stream reach gains and losses in the Willamette River basin, Oregon

    USGS Publications Warehouse

    Lee, Karl K.; Risley, John C.

    2002-03-19

    Precipitation-runoff models, base-flow-separation techniques, and stream gain-loss measurements were used to study recharge and ground-water surface-water interaction as part of a study of the ground-water resources of the Willamette River Basin. The study was a cooperative effort between the U.S. Geological Survey and the State of Oregon Water Resources Department. Precipitation-runoff models were used to estimate the water budget of 216 subbasins in the Willamette River Basin. The models were also used to compute long-term average recharge and base flow. Recharge and base-flow estimates will be used as input to a regional ground-water flow model, within the same study. Recharge and base-flow estimates were made using daily streamflow records. Recharge estimates were made at 16 streamflow-gaging-station locations and were compared to recharge estimates from the precipitation-runoff models. Base-flow separation methods were used to identify the base-flow component of streamflow at 52 currently operated and discontinued streamflow-gaging-station locations. Stream gain-loss measurements were made on the Middle Fork Willamette, Willamette, South Yamhill, Pudding, and South Santiam Rivers, and were used to identify and quantify gaining and losing stream reaches both spatially and temporally. These measurements provide further understanding of ground-water/surface-water interactions.

  15. Resource Assessment of Tidal Current Energy in Hangzhou Bay Based on Long Term Measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Dai, Chun-Ni; Xu, Xue-Feng; Wang, Chuan-Kun; Ye, Qin

    2017-05-01

Compared with other marine renewable energy sources, tidal current energy benefits from high energy density and good predictability. Based on the tidal current data measured in Hangzhou Bay from Nov 2012 to Oct 2012, this paper analysed temporal and spatial changes of tidal current energy at the site. This is the first time that measured data spanning such a long period have been used in a tidal current energy analysis. The occurrence frequency and duration of currents of different speeds are given in the paper. According to the analysis results, the monthly average power density changed considerably from month to month, and the installation orientation of the tidal current turbine significantly affected energy capture. Finally, the annual average power density of tidal current energy at the site was calculated with power coefficient Cp, and the final output of a tidal current plant was also estimated.
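
    The power density being reported is kinetic power per unit swept area, 0.5·rho·v^3, scaled by a turbine power coefficient Cp. A minimal sketch on a synthetic speed series:

        import numpy as np

        # The reported "power density" is kinetic power per unit swept area,
        # 0.5 * rho * v^3, and a turbine captures a fraction Cp of it. The speed
        # series below is synthetic, standing in for the measured record.

        rho, Cp = 1025.0, 0.35                  # seawater density; illustrative Cp
        v = np.abs(1.2 * np.sin(np.linspace(0.0, 4.0*np.pi, 1000)))  # speed, m/s

        power_density = 0.5 * rho * v**3        # W/m^2, instantaneous
        print(power_density.mean())             # time-average power density
        print(Cp * power_density.mean())        # what a turbine could extract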

  16. Improvement in Visual Target Tracking for a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Madison, Richard

    2006-01-01

In an improvement of the visual-target-tracking software used aboard a mobile robot (rover) of the type used to explore the Martian surface, an affine-matching algorithm has been replaced by a combination of a normalized-cross-correlation (NCC) algorithm and a template-image-magnification algorithm. Although neither NCC nor template-image magnification is new, the use of both of them to increase the degree of reliability with which features can be matched is new. In operation, a template image of a target is obtained from a previous rover position, then the magnification of the template image is based on the estimated change in the target distance from the previous rover position to the current rover position. For this purpose, the target distance at the previous rover position is determined by stereoscopy, while the target distance at the current rover position is calculated from an estimate of the current pose of the rover. The template image is then magnified by an amount corresponding to the estimated target distance to obtain a best template image to match with the image acquired at the current rover position.
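
    A numpy-only sketch of the two steps: magnify the template by the ratio of estimated distances, then score the match with normalized cross-correlation; the nearest-neighbor zoom and single-offset NCC are simplifications for illustration:

        import numpy as np

        # numpy-only sketch: magnify the template by the ratio of estimated target
        # distances, then score the match with normalized cross-correlation.
        # Nearest-neighbor zoom and a single-offset NCC are simplifications.

        def magnify(template, prev_dist, curr_dist):
            scale = prev_dist / curr_dist          # closer target -> larger image
            h, w = template.shape
            rows = (np.arange(int(h * scale)) / scale).astype(int)
            cols = (np.arange(int(w * scale)) / scale).astype(int)
            return template[np.ix_(rows, cols)]

        def ncc(a, b):
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float(np.mean(a * b))

        x = np.linspace(0.0, np.pi, 24)
        scene_patch = np.outer(np.sin(x), np.cos(2.0 * x))  # view at current position
        template = scene_patch[::2, ::2]                    # as seen from twice as far
        zoomed = magnify(template, prev_dist=10.0, curr_dist=5.0)
        print(ncc(zoomed, scene_patch))                     # near 1: confident match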

  17. Physics-of-Failure Approach to Prognostics

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.

    2017-01-01

As more and more electric vehicles enter daily operation, a critical challenge lies in accurately predicting the behavior of the electrical components present in the system. In the case of electric vehicles, computing the remaining battery charge is safety-critical. To solve the prediction problem, it is essential to be aware of the current state and health of the system, especially since predictions must be condition-based. Predicting the future state of the system also requires knowledge of the current and future operation of the vehicle. In this presentation, our approach to developing a system-level health monitoring safety indicator for different electronic components is presented; it runs estimation and prediction algorithms to determine state of charge and to estimate the remaining useful life of the respective components. Given models of the current and future system behavior, the general approach of model-based prognostics can be employed as a solution to the prediction problem and, further, for decision making.

  18. Inertial sensor-based smoother for gait analysis.

    PubMed

    Suh, Young Soo

    2014-12-17

    An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimation than filter-based algorithms by using all of the sensor data instead of using the current sensor data. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain initial foot motion estimation. In the second part, the error in the initial estimation is compensated using a smoother, where the problem is formulated in the quadratic optimization problem. An efficient solution of the quadratic optimization problem is given using the sparse structure. Through experiments, it is shown that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is moving off the floor: the z-axis position error squared sum (total time: 3.47 s) when the foot is in the air is 0.0807 m2 (Kalman filter) and 0.0020 m2 (the proposed smoother).

  19. Assessing the vertical structure of baroclinic tidal currents in a global model

    NASA Astrophysics Data System (ADS)

    Timko, Patrick; Arbic, Brian; Scott, Robert

    2010-05-01

Tidal forcing plays an important role in many aspects of oceanography. Mixing, transport of particulates and internal wave generation are just three examples of local phenomena that may depend on the strength of local tidal currents. Advances in satellite altimetry have made an assessment of the global barotropic tide possible. However, the vertical structure of the tide may only be observed by deployment of instruments throughout the water column. Typically these observations are conducted at pre-determined depths based upon the interest of the observer. The high cost of such observations often limits both the number and the length of the observations, resulting in a limit to our knowledge of the vertical structure of tidal currents. One way to expand our insight into the baroclinic structure of the ocean is through the use of numerical models. We compare the vertical structure of the global baroclinic tidal velocities in 1/12 degree HYCOM (HYbrid Coordinate Ocean Model) to a global database of current meter records. The model output is a subset of a 5 year global simulation that resolves the eddying general circulation, barotropic tides and baroclinic tides using 32 vertical layers. The density structure within the simulation is both vertically and horizontally non-uniform. In addition to buoyancy forcing, the model is forced by astronomical tides and winds. We estimate the dominant semi-diurnal (M2) and diurnal (K1) tidal constituents of the model data using classical harmonic analysis. In regions where current meter record coverage is adequate, the model skill in replicating the vertical structure of the dominant diurnal and semi-diurnal tidal currents is assessed based upon the strength, orientation and phase of the tidal ellipses. We also present a global estimate of the baroclinic tidal energy at fixed depths estimated from the model output.
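
    Classical harmonic analysis of a current record reduces to a least-squares fit of cosine/sine pairs at the constituent frequencies. A sketch on a synthetic series with the M2 and K1 periods:

        import numpy as np

        # Classical harmonic analysis as least squares: project a current record
        # onto cos/sin pairs at the constituent frequencies. The synthetic record
        # below stands in for a model or current-meter series.

        hours = np.arange(0.0, 30*24, 1.0)               # 30 days, hourly
        omega = {"M2": 2*np.pi/12.4206012, "K1": 2*np.pi/23.9344697}  # rad/hour

        rng = np.random.default_rng(3)
        u = (0.40*np.cos(omega["M2"]*hours - 1.0)        # "observed" current (m/s)
             + 0.15*np.cos(omega["K1"]*hours + 0.5)
             + 0.05*rng.normal(size=hours.size))

        cols = [np.ones_like(hours)]
        for w in omega.values():
            cols += [np.cos(w*hours), np.sin(w*hours)]
        coef = np.linalg.lstsq(np.column_stack(cols), u, rcond=None)[0]

        for i, name in enumerate(omega):                 # u ~ A*cos(w*t - phi)
            a, b = coef[1 + 2*i], coef[2 + 2*i]
            print(name, "A =", round(np.hypot(a, b), 3),
                  "phi =", round(float(np.arctan2(b, a)), 3))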

  20. Impact of acid precipitation on recreation and tourism in Ontario: an overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The impacts of acid precipitation on fishing opportunities, waterfowl and moose hunting, water contact activities, and the perception of the environment in Ontario are analyzed. Economic effects and future research needs are also estimated and discussed. These questions have been examined by identifying the likely links between acidic precipitation and recreation and tourism, by developing estimates of the importance of aquatic-based recreation and tourism, by describing the current and estimated future effects of acid precipitation. 101 references, 9 figures, 19 tables.

  1. New methodology for estimating biofuel consumption for cooking: Atmospheric emissions of black carbon and sulfur dioxide from India

    NASA Astrophysics Data System (ADS)

    Habib, Gazala; Venkataraman, Chandra; Shrivastava, Manish; Banerjee, Rangan; Stehr, J. W.; Dickerson, Russell R.

    2004-09-01

    The dominance of biofuel combustion emissions in the Indian region, and the inherently large uncertainty in biofuel use estimates based on cooking energy surveys, prompted the current work, which develops a new methodology for estimating biofuel consumption for cooking. This is based on food consumption statistics, and the specific energy for food cooking. Estimated biofuel consumption in India was 379 (247-584) Tg yr-1. New information on the user population of different biofuels was compiled at a state level, to derive the biofuel mix, which varied regionally and was 74:16:10%, respectively, of fuelwood, dung cake and crop waste, at a national level. Importantly, the uncertainty in biofuel use from quantitative error assessment using the new methodology is around 50%, giving a narrower bound than in previous works. From this new activity data and currently used black carbon emission factors, the black carbon (BC) emissions from biofuel combustion were estimated as 220 (65-760) Gg yr-1. The largest BC emissions were from fuelwood (75%), with lower contributions from dung cake (16%) and crop waste (9%). The uncertainty of 245% in the BC emissions estimate is now governed by the large spread in BC emission factors from biofuel combustion (122%), implying the need for reducing this uncertainty through measurements. Emission factors of SO2 from combustion of biofuels widely used in India were measured, and ranged 0.03-0.08 g kg-1 from combustion of two wood species, 0.05-0.20 g kg-1 from 10 crop waste types, and 0.88 g kg-1 from dung cake, significantly lower than currently used emission factors for wood and crop waste. Estimated SO2 emissions from biofuels of 75 (36-160) Gg yr-1 were about a factor of 3 lower than that in recent studies, with a large contribution from dung cake (73%), followed by fuelwood (21%) and crop waste (6%).

  2. Model-based cartilage thickness measurement in the submillimeter range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Streekstra, G. J.; Strackee, S. D.; Maas, M.

    2007-09-15

Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point spread function (PSF) limits the accuracy of this measurement procedure. We propose a model-based method that strongly reduces PSF-induced bias by incorporating the PSF into the thickness estimation method. We estimated the bias in thickness measurements in simulated thin sheet images as obtained from second derivative zero crossings. To gain insight into the range of sheet thickness where our method is expected to yield improved results, sheet thickness was varied between 0.15 and 1.2 mm with an assumed PSF as present in the high-resolution modes of current computed tomography (CT) scanners [full width at half maximum (FWHM) 0.5-0.8 mm]. Our model-based method was evaluated in practice by measuring layer thickness from CT images of a phantom mimicking two parallel cartilage layers in an arthrography procedure. CT arthrography images of cadaver wrists were also evaluated, and thickness estimates were compared to those obtained from high-resolution anatomical sections that served as a reference. The thickness estimates from the simulated images reveal that the method based on second derivative zero crossings shows considerable bias for layers in the submillimeter range. This bias is negligible for sheet thickness larger than 1 mm, where the size of the sheet is more than twice the FWHM of the PSF, but can be as large as 0.2 mm for a 0.5 mm sheet. The results of the phantom experiments show that the bias is effectively reduced by our method. The deviations from the true thickness, due to random fluctuations induced by quantum noise in the CT images, are of the order of 3% for a standard wrist imaging protocol. In the wrist the submillimeter thickness estimates from the CT arthrography images correspond within 10% to those estimated from the anatomical sections. We present a method that yields virtually unbiased thickness estimates of cartilage layers in the submillimeter range. The good agreement of thickness estimates from CT images with estimates from anatomical sections is promising for clinical application of the method in cartilage integrity staging of the wrist and the ankle.

  3. Comprehensive seismic monitoring of the Cascadia megathrust with real-time GPS

    NASA Astrophysics Data System (ADS)

    Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C. W.; Webb, F.

    2013-12-01

    We have developed a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone based on 1- and 5-second point position estimates computed within the ITRF08 reference frame. A Kalman filter stream editor pre-cleans the raw satellite measurements, using a geometry-free combination of phase and range observables to speed convergence while independently estimating carrier phase biases and ionosphere delay. These are then analyzed with GIPSY-OASIS using satellite clock and orbit corrections streamed continuously from the International GNSS Service (IGS) and the German Aerospace Center (DLR). The resulting RMS position scatter is less than 3 cm, and typical latencies are under 2 seconds. Currently 31 coastal Washington, Oregon, and northern California stations from the combined PANGA and PBO networks are analyzed. We are now ramping up to include all of the remaining 400+ stations currently operating throughout the Cascadia subduction zone, all of which are high-rate and telemetered in real-time to CWU. These receivers span the M9 megathrust, M7 crustal faults beneath population centers, several active Cascades volcanoes, and a host of other hazard sources. To use the point position streams for seismic monitoring, we have developed an inter-process communication package that captures, buffers and re-broadcasts real-time positions and covariances to a variety of seismic estimation routines running on distributed hardware. An aggregator ingests, re-streams and can rebroadcast up to 24 hours of point positions and the resultant seismic estimates derived from them to application clients distributed across the web. A suite of seismic monitoring applications has also been written, which includes position time series analysis, instantaneous displacement vectors, and peak ground displacement contouring and mapping. We have also implemented continuous estimation of finite-fault slip along the Cascadia megathrust using a NIF-type approach. This currently operates on the terrestrial GPS data streams, but could readily be expanded to use real-time offshore geodetic measurements as well. The continuous slip distributions are used in turn to compute tsunami excitation and, when convolved with pre-computed hydrodynamic Green's functions calculated using the COMCOT tsunami modeling software, run-up estimates for the entire Cascadia coastal margin. Finally, a suite of data visualization tools has been written to allow interaction with the real-time position streams and the seismic estimates based on them, including time series plotting, instantaneous offset vectors, peak ground deformation contouring, finite-fault inversions, and tsunami run-up. This suite is currently bundled within a single JAVA client, 'GPS Cockpit,' which is available for download.

  4. Value-based decision-making battery: A Bayesian adaptive approach to assess impulsive and risky behavior.

    PubMed

    Pooseh, Shakoor; Bernhardt, Nadine; Guevara, Alvaro; Huys, Quentin J M; Smolka, Michael N

    2018-02-01

    Using simple mathematical models of choice behavior, we present a Bayesian adaptive algorithm to assess measures of impulsive and risky decision making. Practically, these measures are characterized by discounting rates and are used to classify individuals or population groups, to distinguish unhealthy behavior, and to predict developmental courses. However, a constant demand for improved tools to assess these constructs remains unanswered. The algorithm is based on trial-by-trial observations. At each step, a choice is made between immediate (certain) and delayed (risky) options. The current parameter estimates are then updated by the likelihood of observing the choice, and the next offers are placed at the indifference point, so that they acquire the most informative data given the current parameter estimates. The procedure continues for a set number of trials in order to reach a stable estimation. The algorithm is discussed in detail for the delay discounting case, and results from decision making under risk for gains, losses, and mixed prospects are also provided. Simulated experiments using prescribed parameter values were performed to justify the algorithm in terms of the reproducibility of its parameters for individual assessments and to test the reliability of the estimation procedure in a group-level analysis. The algorithm was implemented as an experimental battery to measure temporal and probability discounting rates together with loss aversion, and was tested on a healthy participant sample.
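
    For the delay-discounting case, the update step can be written as a grid posterior over the discount rate k, with each new offer placed at the indifference point implied by the current estimate. A minimal sketch, assuming a hyperbolic value function and a fixed softmax choice sensitivity (standard modeling choices, not necessarily the battery's exact settings):

        import numpy as np

        ks = np.logspace(-3, 0, 200)        # candidate hyperbolic discount rates
        post = np.ones_like(ks) / ks.size   # uniform prior over k
        beta = 5.0                          # softmax sensitivity (assumed fixed)

        def update(post, amt_now, amt_later, delay, chose_later):
            v_later = amt_later / (1.0 + ks * delay)   # hyperbolic discounting
            p_later = 1.0 / (1.0 + np.exp(-beta * (v_later - amt_now)))
            lik = p_later if chose_later else 1.0 - p_later
            post = post * lik
            return post / post.sum()

        # Place the next immediate offer at the current indifference point,
        # then update the posterior with the observed choice.
        k_hat = np.sum(post * ks)
        offer_now = 100.0 / (1.0 + k_hat * 30)  # vs. 100 units delayed 30 days
        post = update(post, offer_now, 100.0, 30, chose_later=True)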

  5. Incorporating diverse data and realistic complexity into demographic estimation procedures for sea otters

    USGS Publications Warehouse

    Tinker, M. Timothy; Doak, Daniel F.; Estes, James A.; Hatfield, Brian B.; Staedler, Michelle M.; Gross, Arthur

    2006-01-01

    Reliable information on historical and current population dynamics is central to understanding patterns of growth and decline in animal populations. We developed a maximum likelihood-based analysis to estimate spatial and temporal trends in age/sex-specific survival rates for the threatened southern sea otter (Enhydra lutris nereis), using annual population censuses and the age structure of salvaged carcass collections. We evaluated a wide range of possible spatial and temporal effects and used model averaging to incorporate model uncertainty into the resulting estimates of key vital rates and their variances. We compared these results to current demographic parameters estimated in a telemetry-based study conducted between 2001 and 2004. These results show that survival has decreased substantially from the early 1990s to the present and is generally lowest in the north-central portion of the population's range. The greatest temporal decrease in survival was for adult females, and variation in the survival of this age/sex class is primarily responsible for regulating population growth and driving population trends. Our results can be used to focus future research on southern sea otters by highlighting the life history stages and mortality factors most relevant to conservation. More broadly, we have illustrated how the powerful and relatively straightforward tools of information-theoretic-based model fitting can be used to sort through and parameterize quite complex demographic modeling frameworks. © 2006 by the Ecological Society of America.
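
    The model-averaging step rests on Akaike weights. A short sketch of the standard information-theoretic formulas (not the authors' code), where estimates and ses are per-model estimates of a vital rate and their standard errors:

        import numpy as np

        def akaike_weights(aic):
            """Akaike weights: w_i proportional to exp(-0.5 * delta_AIC_i)."""
            d = np.asarray(aic, dtype=float) - np.min(aic)
            w = np.exp(-0.5 * d)
            return w / w.sum()

        def model_average(estimates, ses, aic):
            """Model-averaged estimate with an unconditional SE that folds
            model-selection uncertainty into the variance."""
            w = akaike_weights(aic)
            est = np.asarray(estimates, dtype=float)
            avg = np.sum(w * est)
            se = np.sum(w * np.sqrt(np.asarray(ses) ** 2 + (est - avg) ** 2))
            return avg, se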

  6. Investigation of IRT-Based Equating Methods in the Presence of Outlier Common Items

    ERIC Educational Resources Information Center

    Hu, Huiqin; Rogers, W. Todd; Vukmirovic, Zarko

    2008-01-01

    Common items with inconsistent b-parameter estimates may have a serious impact on item response theory (IRT)-based equating results. To find a better way to deal with outlier common items with inconsistent b-parameters, the current study investigated the comparability of 10 variations of four IRT-based equating methods (i.e., concurrent…

  7. 78 FR 36117 - Fisheries Off West Coast States; Coastal Pelagic Species Fisheries; Annual Specifications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-17

    ... regulations require NMFS to set these annual catch levels for the Pacific sardine fishery based on the annual... HG, the primary management target for the fishery, for the current fishing season. The HG is based... Fisheries Science Center and the resulting Pacific sardine biomass estimate of 659,539 mt. Based on the...

  8. Studies of reaction geometry in oxidation and reduction of the alkaline silver electrode

    NASA Technical Reports Server (NTRS)

    Butler, E. A.; Blackham, A. U.

    1971-01-01

    Two methods of surface area estimation for sintered silver electrodes have given roughness factors of 58 and 81. One method is based on constant-current oxidation, the other on potentiostatic oxidation. Examination of both wire and sintered silver electrodes via scanning electron microscopy at various stages of oxidation has shown that the important structural features are mounds of oxide. In potentiostatic oxidations these appear to form on sites nucleated instantaneously, while in constant-current oxidations progressive nucleation is indicated.

  9. Modified wind chill temperatures determined by a whole body thermoregulation model and human-based facial convective coefficients.

    PubMed

    Shabat, Yael Ben; Shitzer, Avraham; Fiala, Dusan

    2014-08-01

    Wind chill equivalent temperatures (WCETs) were estimated by a modified Fiala whole body thermoregulation model of a clothed person. Facial convective heat exchange coefficients applied in the computations, concurrently with environmental radiation effects, were taken from a recently derived human-based correlation. Apart from these, the analysis followed the methodology used in the derivation of the currently used wind chill charts. WCET values are summarized by a fitted equation (not reproduced in this abstract). Results indicate consistently lower estimated facial skin temperatures, and consequently higher WCETs, than those listed in the literature and used by the North American weather services. Calculated dynamic facial skin temperatures were additionally applied in estimating the probabilities of frostbite risk. Predicted weather combinations for probabilities of "Practically no risk of frostbite for most people," i.e., less than 5 % risk at wind speeds above 40 km h(-1), were shown to occur at air temperatures above -10 °C, compared to the currently published air temperature of -15 °C. At air temperatures of -35 °C and below, the presently calculated weather combination marking the transition to a risk of frostbite in less than 2 min, 40 km h(-1)/-35 °C, is less conservative than the published combination of 60 km h(-1)/-40 °C. The present results introduce a fundamentally improved scientific basis for estimating facial skin temperatures, wind chill temperatures and frostbite risk probabilities over those currently practiced.

  10. Modified wind chill temperatures determined by a whole body thermoregulation model and human-based facial convective coefficients

    NASA Astrophysics Data System (ADS)

    Shabat, Yael Ben; Shitzer, Avraham; Fiala, Dusan

    2014-08-01

    Wind chill equivalent temperatures (WCETs) were estimated by a modified Fiala whole body thermoregulation model of a clothed person. Facial convective heat exchange coefficients applied in the computations, concurrently with environmental radiation effects, were taken from a recently derived human-based correlation. Apart from these, the analysis followed the methodology used in the derivation of the currently used wind chill charts. WCET values are summarized by a fitted equation (not reproduced in this abstract). Results indicate consistently lower estimated facial skin temperatures, and consequently higher WCETs, than those listed in the literature and used by the North American weather services. Calculated dynamic facial skin temperatures were additionally applied in estimating the probabilities of frostbite risk. Predicted weather combinations for probabilities of "Practically no risk of frostbite for most people," i.e., less than 5 % risk at wind speeds above 40 km h-1, were shown to occur at air temperatures above -10 °C, compared to the currently published air temperature of -15 °C. At air temperatures of -35 °C and below, the presently calculated weather combination marking the transition to a risk of frostbite in less than 2 min, 40 km h-1/-35 °C, is less conservative than the published combination of 60 km h-1/-40 °C. The present results introduce a fundamentally improved scientific basis for estimating facial skin temperatures, wind chill temperatures and frostbite risk probabilities over those currently practiced.

  11. Determining prescription durations based on the parametric waiting time distribution.

    PubMed

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2016-12-01

    The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
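
    Once the waiting time distribution has been fitted and inverted to the inter-arrival density, the assigned duration is just a percentile of that density. A minimal sketch with hypothetical log-normal IAD parameters (in the paper these come from inverting the fitted forward recurrence density):

        from scipy import stats

        # Hypothetical log-normal IAD: median redemption gap and shape sigma
        median_days, sigma = 80.0, 0.6
        iad = stats.lognorm(s=sigma, scale=median_days)

        # Duration = time within which 80% of current users redeem again
        duration = iad.ppf(0.80)
        print(f"assigned prescription duration: {duration:.0f} days")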

  12. Updating estimates of low streamflow statistics to account for possible trends

    NASA Astrophysics Data System (ADS)

    Blum, A. G.; Archfield, S. A.; Hirsch, R. M.; Vogel, R. M.; Kiang, J. E.; Dudley, R. W.

    2017-12-01

    Given evidence of both increasing and decreasing trends in low flows in many streams, methods are needed to update estimators of the low-flow statistics used in water resources management. One such metric is the 10-year annual low-flow statistic (7Q10), calculated as the annual minimum seven-day streamflow that is exceeded in nine out of ten years on average. Historical streamflow records may not be representative of current conditions at a site if environmental conditions are changing. We present a new approach to frequency estimation under nonstationary conditions that applies a stationary nonparametric quantile estimator to a subset of the annual minimum flow record. Monte Carlo simulation experiments were used to evaluate this approach across a range of trend and no-trend scenarios. Relative to the standard practice of using the entire available streamflow record, use of a nonparametric quantile estimator combined with selection of the most recent 30 or 50 years for 7Q10 estimation was found to improve accuracy and reduce bias. Benefits of the data subset selection approaches were greater for higher-magnitude trends and for annual minimum flow records with lower coefficients of variation. A nonparametric trend test approach for subset selection did not significantly improve upon always selecting the last 30 years of record. At 174 stream gages in the Chesapeake Bay region, 7Q10 estimators based on the most recent 30 years of flow record were compared to estimators based on the entire period of record. Given the availability of long records of low streamflow, a subset of the flow record (the most recent ~30 years) can be used to update 7Q10 estimators to better reflect current streamflow conditions.
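
    Operationally, the estimator reduces to a 7-day moving-average minimum per year, a subset of the most recent years, and a nonparametric 0.1 quantile. A sketch (the interpolation rule for the sample quantile is an implementation choice):

        import numpy as np
        import pandas as pd

        def seven_q_ten(daily_flow: pd.Series, years: int = 30) -> float:
            """Nonparametric 7Q10 from the most recent `years` of record.
            `daily_flow` is daily discharge indexed by a DatetimeIndex."""
            q7 = daily_flow.rolling(7).mean()                  # 7-day mean flow
            annual_min = q7.groupby(q7.index.year).min().dropna()
            recent = annual_min.iloc[-years:]                  # most recent subset
            # Exceeded in 9 of 10 years on average -> 0.1 nonexceedance quantile
            return float(np.quantile(recent.values, 0.1))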

  13. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments to measure a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. The methods used to calibrate (rate) the index velocity to the channel velocity measured with the acoustic Doppler current profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocities and concurrent acoustic Doppler discharge measurements were collected during three time periods: two sets of data during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second, or less than 0.5% of a typical peak tidal discharge rate of 750 cubic meters per second.
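
    The computational chain is short: rate the index velocity against concurrent ADCP discharge, convert to instantaneous discharge, then low-pass filter out the tides. A sketch with a simple linear rating and a crude moving-average filter standing in for the Godin or Butterworth filters typically used (all coefficients illustrative):

        import numpy as np

        def net_discharge(v_index, v_index_cal, q_adcp_cal, area, dt_hours=0.25):
            """Index-velocity method sketch: rating, discharge, tidal filter."""
            # 1) Rate mean channel velocity against the index velocity
            slope, intercept = np.polyfit(v_index_cal, np.asarray(q_adcp_cal) / area, 1)
            q = (intercept + slope * np.asarray(v_index)) * area  # instantaneous Q
            # 2) Low-pass window slightly longer than the ~24.8 h lunar day
            win = int(round(25.0 / dt_hours)) | 1                 # odd window length
            return np.convolve(q, np.ones(win) / win, mode="same")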

  14. Thoracic and respirable particle definitions for human health risk assessment.

    PubMed

    Brown, James S; Gordon, Terry; Price, Owen; Asgharian, Bahman

    2013-04-10

    Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fraction of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation, that may be used in the design of experimental studies and interpretation of health effects evidence. We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, adult female, and a 10 yr old child during typical daily activities and breathing patterns. Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. By design, current size-selective sample criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract to provide protection for individuals who may breathe orally. We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects.
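
    For comparison, the current size-selective criteria referred to above can be written as an inhalable term multiplied by a cumulative-lognormal penetration curve. The sketch below uses the ISO/ACGIH convention parameters as recalled here (thoracic median 11.64 μm, respirable median 4.25 μm, GSD 1.5); treat the constants as assumptions to verify against the standards:

        import numpy as np
        from scipy.stats import norm

        def inhalable(d_um):
            """Inhalable convention, d in micrometers (aerodynamic)."""
            return 0.5 * (1.0 + np.exp(-0.06 * d_um))

        def convention(d_um, median_um, gsd=1.5):
            """Thoracic/respirable convention: inhalable times lognormal penetration."""
            return inhalable(d_um) * (1.0 - norm.cdf(np.log(d_um / median_um) / np.log(gsd)))

        d = np.array([1.0, 3.0, 5.0, 10.0])
        thoracic = convention(d, 11.64)    # ~0.5 at d = 10 um, the 50% cut-size
        respirable = convention(d, 4.25)   # half the inhalable fraction at 4.25 um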

  15. Thoracic and respirable particle definitions for human health risk assessment

    PubMed Central

    2013-01-01

    Background Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fraction of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities with both nasal and oral inhalation, that may be used in the design of experimental studies and interpretation of health effects evidence. Methods We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and ciliated airways (based on a mathematical model) for an adult male, adult female, and a 10 yr old child during typical daily activities and breathing patterns. Results Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. Conclusions By design, current size-selective sample criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract to provide protection for individuals who may breathe orally. We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects. PMID:23575443

  16. Postapplication Fipronil Exposure Following Use on Pets.

    PubMed

    Cochran, R C; Yu, Liu; Krieger, R I; Ross, J H

    2015-01-01

    Fipronil is a pyrazole acaricide and insecticide that may be used for insect, tick, lice, and mite control on pets. Residents' short-term and long-term postapplication exposures to fipronil, including secondary environmental exposures, were estimated using data from chemical-specific studies. Estimations of acute (24-h) absorbed doses for residents were based on U.S. Environmental Protection Agency (U.S. EPA) 2012 standard operating procedures (SOPs) for postapplication exposure. Chronic exposures were not estimated for residential use, as continuous, long-term application activities were unlikely to occur. Estimated acute postapplication absorbed doses were as high as 0.56 μg/kg-d for toddlers (1-2 yr) in households with treated pets based on current U.S. EPA SOPs. Acute toddler exposures estimated here were fivefold larger in comparison to adults. Secondary exposure from the household environment in which a treated pet lives that is not from contacting the pet, but from contacting the house interior to which pet residues were transferred, was estimated based on monitoring socks worn by pet owners. These secondary exposures were more than an order of magnitude lower than those estimated from contacting the pet and thus may be considered negligible.

  17. Atmospheric Sulfur Hexafluoride: Sources, Sinks and Greenhouse Warming

    NASA Technical Reports Server (NTRS)

    Sze, Nien Dak; Wang, Wei-Chyung; Shia, George; Goldman, Aaron; Murcray, Frank J.; Murcray, David G.; Rinsland, Curtis P.

    1993-01-01

    Model calculations using estimated reaction rates of sulfur hexafluoride (SF6) with OH and O(1D) indicate that the atmospheric lifetime due to these processes may be very long (25,000 years). An upper limit for the UV cross section would suggest a photolysis lifetime much longer than 1000 years. The possibility of other removal mechanisms is discussed. The estimated lifetimes are consistent with other estimates based on recent laboratory measurements. There appears to be no known natural source of SF6. An estimate of the current production rate of SF6 is about 5 kt/yr. Based on historical emission rates, we calculated a present-day atmospheric concentration for SF6 of about 2.5 parts per trillion by volume (pptv) and compared the results with available atmospheric measurements. It is difficult to estimate the atmospheric lifetime of SF6 based on a mass balance of the emission rate and observed abundance. There are large uncertainties concerning what portion of the SF6 produced is released to the atmosphere. Even if the emission rate were precisely known, it would be difficult to distinguish among lifetimes longer than 100 years, since the current abundance of SF6 is due to emissions over the past three decades. More information on the measured trends over the past decade and on the observed vertical and latitudinal distributions of SF6 in the lower stratosphere will help to narrow the uncertainty in the lifetime. Based on the laboratory-measured IR absorption cross section for SF6, we showed that SF6 is about 3 times more effective as a greenhouse gas than CFC-11 on a per-molecule basis. However, its effect on atmospheric warming will be minimal because of its very small concentration. We estimated the future concentration of SF6 in 2010 to be 8 and 10 pptv based on two projected emission scenarios. The corresponding equilibrium warming of 0.0035 °C and 0.0043 °C is to be compared with the estimated warming due to the CO2 increase of about 0.8 °C over the same period.
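
    The mass-balance reasoning above amounts to a one-box budget, dB/dt = E - B/tau, for the atmospheric burden B. A sketch under assumed values (the kt-per-pptv conversion and the lifetime are illustrative, not the paper's numbers):

        import numpy as np

        def sf6_mixing_ratio(emissions_kt, lifetime_yr=3200.0, kt_per_pptv=25.0):
            """One-box budget for SF6: annual steps of dB/dt = E - B/tau.
            ~25 kt of SF6 per 1 pptv global-mean mixing ratio is assumed."""
            burden = 0.0
            series = []
            for e in emissions_kt:                 # annual time steps
                burden += e - burden / lifetime_yr
                series.append(burden / kt_per_pptv)
            return np.array(series)

        # Emissions ramping up to ~5 kt/yr over three decades gives a few pptv,
        # of the order of the present-day abundance cited above.
        print(sf6_mixing_ratio(np.linspace(0.0, 5.0, 30))[-1])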

  18. Worldwide variance in the potential utilization of Gamma Knife radiosurgery.

    PubMed

    Hamilton, Travis; Dade Lunsford, L

    2016-12-01

    OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimate of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use in many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance in GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.

  19. Evaporation estimates from the Dead Sea and their implications on its water balance

    NASA Astrophysics Data System (ADS)

    Oroud, Ibrahim M.

    2011-12-01

    The Dead Sea (DS) is a terminal hypersaline water body situated in the deepest part of the Jordan Valley. There is growing interest in linking the DS to the open seas due to severe water shortages in the area and the serious geological and environmental hazards in its vicinity caused by the rapid drop in the DS level. A key issue in linking the DS with the open seas would be an accurate determination of evaporation rates. Large uncertainties exist in evaporation estimates from the DS due to the complex feedback mechanisms between meteorological forcings and the thermophysical properties of hypersaline solutions. Numerous methods have been used to estimate current and historical (pre-1960) evaporation rates, with estimates differing by ~100%. Evaporation from the DS is usually deduced indirectly using energy balance, water balance, or pan methods, with uncertainty in many parameters. Accumulated errors resulting from these uncertainties are usually pooled into the estimates of evaporation rates. In this paper, a physically based method with a minimum of empirical parameters is used to evaluate historical and current evaporation estimates from the DS. The more likely figures for historical and current evaporation rates from the DS were 1,500-1,600 and 1,200-1,250 mm per annum, respectively. The results obtained are congruent with field observations and with more elaborate procedures.
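
    The salinity feedback enters bulk evaporation formulas through the brine's reduced saturation vapor pressure (water activity well below 1). A minimal mass-transfer sketch; the transfer coefficient and the activity value (~0.67 is often quoted for Dead Sea brine) are assumptions, not the paper's calibration:

        import numpy as np

        def evaporation_mm_day(u10, t_surf_c, e_air_hpa, activity=0.67, c=0.13):
            """Bulk estimate E ~ c * u * (a_w * e_sat(Ts) - e_a) for a
            hypersaline surface; c and a_w are illustrative values."""
            e_sat = 6.112 * np.exp(17.62 * t_surf_c / (243.12 + t_surf_c))  # hPa
            return max(c * u10 * (activity * e_sat - e_air_hpa), 0.0)

        # e.g., 4 m/s wind, 30 degC surface, 20 hPa air vapor pressure
        print(evaporation_mm_day(4.0, 30.0, 20.0))  # ~4 mm/day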

  20. Identifying Seizure Onset Zone From the Causal Connectivity Inferred Using Directed Information

    NASA Astrophysics Data System (ADS)

    Malladi, Rakesh; Kalamangalam, Giridhar; Tandon, Nitin; Aazhang, Behnaam

    2016-10-01

    In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from the brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information-theoretic quantity, is a general metric for inferring causal connectivity between time series and is not restricted to a particular class of models, unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then propose a model-based and a data-driven SOZ identification algorithm to identify the SOZ from the causal connectivity inferred using the model-based and data-driven DI estimators, respectively. The data-driven SOZ identification outperforms the model-based algorithm when benchmarked against visual analysis by a neurologist, the current clinical gold standard. The causal connectivity analysis presented here is a first step towards developing novel non-surgical treatments for epilepsy.

  1. Parameter estimation for a cohesive sediment transport model by assimilating satellite observations in the Hangzhou Bay: Temporal variations and spatial distributions

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu

    2018-01-01

    Model parameters in suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from the Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China: settling velocity, resuspension rate, inflow open boundary conditions and initial conditions. After data assimilation, the model performance is significantly improved. Through several sensitivity experiments, the spatial and temporal variation tendencies of the estimated model parameters are verified to be robust and not affected by model settings. The pattern of variation of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained by the combination of the flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which is related to the grain size of the seabed sediments under different current velocities. In addition, the estimated inflow open boundary conditions reach their local maximum values near low water slack conditions, and the estimated initial conditions are negatively correlated with water depth, which is consistent with general understanding. The relationships between the estimated parameters and the hydrodynamic fields can be suggestive for improving parameterization in cohesive sediment transport models.

  2. Estimating Agricultural Water Use using the Operational Simplified Surface Energy Balance Evapotranspiration Estimation Method

    NASA Astrophysics Data System (ADS)

    Forbes, B. T.

    2015-12-01

    Due to the predominantly arid climate in Arizona, access to an adequate water supply is vital to the economic development and livelihood of the State. Water supply has become increasingly important during periods of prolonged drought, which has strained reservoir water levels in the Desert Southwest in recent years. Arizona's water use is dominated by agriculture, which consumes about seventy-five percent of the total annual water demand. Tracking current agricultural water use is important for managers and policy makers so that present water demand can be assessed and used to forecast future demands. However, many croplands in Arizona are irrigated outside of areas where water use reporting is mandatory. To estimate irrigation withdrawals on these lands, we use a combination of field verification, evapotranspiration (ET) estimation, and irrigation system qualification. ET is typically estimated in Arizona using the Modified Blaney-Criddle method, which uses meteorological data to estimate annual crop water requirements. The Modified Blaney-Criddle method assumes crops are irrigated to their full potential over the entire growing season, which may or may not be realistic. We now use the Operational Simplified Surface Energy Balance (SSEBop) ET data in a remote-sensing and energy-balance framework to estimate cropland ET. SSEBop data are of sufficient resolution (30 m by 30 m) for estimation of field-scale cropland water use. We evaluate our SSEBop-based estimates using ground-truth information and irrigation system qualification obtained in the field. Our approach gives the end user an estimate of crop consumptive use as well as of inefficiencies in irrigation system performance, both of which are needed by water managers for tracking irrigated water use in Arizona.
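
    The SSEBop formulation itself is compact: a per-pixel ET fraction scales reference ET between a cold (wet) and a hot (dry) boundary separated by a predefined temperature difference dT. A sketch of that published formulation (variable names and the clipping are choices made here; this is not USGS code):

        import numpy as np

        def ssebop_et(ts_k, tc_k, dt_k, eto_mm, k=1.0):
            """ET (mm) = ETf * k * ETo, with ETf = (Th - Ts) / dT and Th = Tc + dT.
            ts_k: observed land surface temperature; tc_k: cold reference (K)."""
            th_k = tc_k + dt_k                               # hot/dry boundary
            etf = np.clip((th_k - ts_k) / dt_k, 0.0, 1.05)   # ET fraction
            return etf * k * eto_mm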

  3. Estimating the Velocity and Transport of the East Australian Current using Argo, XBT, and Altimetry

    NASA Astrophysics Data System (ADS)

    Zilberman, N. V.; Roemmich, D. H.; Gille, S. T.

    2016-02-01

    Western boundary currents (WBCs) are the strongest ocean currents in the subtropics and constitute the main pathway through which warm water masses transit from low to mid-latitudes in the subtropical gyres of the Atlantic, Pacific, and Indian Oceans. Heat advection by WBCs has a significant impact on heat storage in subtropical mode water formation regions and at high latitudes. The possibility that the magnitude of WBCs might change under greenhouse gas forcing has raised significant concerns. Improving our knowledge of WBC circulation is essential to accurately monitor the oceanic heat budget. Because of the narrowness and strong mesoscale variability of WBCs, estimation of WBC velocity and transport places heavy demands on any potential sampling scheme. One strategy for studying WBCs is to combine complementary data sources. High-resolution expendable bathythermograph (HRX) profiles to 800 m have been collected along transects crossing the East Australian Current (EAC) system at 3-month nominal sampling intervals since 1991. EAC transects, with spatial sampling as fine as 10-15 km, are obtained off Brisbane (27°S) and Sydney (34°S), and across the related East Auckland Current north of Auckland. Here, HRX profiles collected since 2004 off Brisbane are merged with Argo float profiles and 1000 m trajectory-based velocities to extend HRX shear estimates to 2000 m and to estimate absolute geostrophic velocity and transport. A method for combining altimetric data with HRX and Argo profiles to mitigate temporal aliasing by the HRX transects and to reduce sampling errors in the HRX/Argo datasets is described. The HRX/Argo/altimetry-based estimate of the time-mean poleward alongshore transport of the EAC off Brisbane is 18.3 Sv, with a width of about 180 km, of which 3.7 Sv recirculates equatorward on a similar spatial scale farther offshore. Geostrophic transport anomalies in the EAC at 27°S show variability of ±1.3 Sv on interannual time scales related to ENSO. The present calculation is a case study that will be extended to other subtropical WBCs.

  4. Factors predicting high estimated 10-year stroke risk: Thai Epidemiologic Stroke Study.

    PubMed

    Hanchaiphiboolkul, Suchat; Puthkhao, Pimchanok; Towanabut, Somchai; Tantirittisak, Tasanee; Wangphonphatthanasiri, Khwanrat; Termglinchan, Thanes; Nidhinandana, Samart; Suwanwela, Nijasri Charnnarong; Poungvarin, Niphon

    2014-08-01

    The purpose of the study was to determine the factors predicting high estimated 10-year stroke risk based on a risk score and, among the risk factors comprising the risk score, which factors had the greater impact on the estimated risk. The Thai Epidemiologic Stroke study was a community-based cohort study that recruited participants from the general population in 5 regions of Thailand. Cross-sectional baseline data from 16,611 participants aged 45-69 years who had no history of stroke were included in this analysis. Multiple logistic regression analysis was used to identify the predictors of high estimated 10-year stroke risk based on the risk score of the Japan Public Health Center Study, which estimates the projected 10-year risk of incident stroke. Educational level, low personal income, occupation, geographic area, alcohol consumption, and hypercholesterolemia were significantly associated with high estimated 10-year stroke risk. Among these factors, the unemployed/housework class had the highest odds ratio (OR, 3.75; 95% confidence interval [CI], 2.47-5.69), followed by the illiterate class (OR, 2.30; 95% CI, 1.44-3.66). Among the risk factors comprising the risk score, the greatest impact as a stroke risk factor corresponded to age, followed by male sex, diabetes mellitus, systolic blood pressure, and current smoking. Socioeconomic status, in particular the unemployed/housework and illiterate classes, might be a good proxy to identify individuals at higher risk of stroke. The most powerful risk factors were older age, male sex, diabetes mellitus, systolic blood pressure, and current smoking. Copyright © 2014 National Stroke Association. Published by Elsevier Inc. All rights reserved.
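
    The odds ratios above are the exponentiated coefficients of the logistic model. A self-contained sketch with hypothetical variables standing in for the survey covariates (column names and data are made up for illustration):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "high_risk": rng.integers(0, 2, 500),    # hypothetical binary outcome
            "unemployed": rng.integers(0, 2, 500),
            "low_income": rng.integers(0, 2, 500),
        })
        X = sm.add_constant(df[["unemployed", "low_income"]])
        fit = sm.Logit(df["high_risk"], X).fit(disp=0)
        odds_ratios = np.exp(fit.params)   # OR per covariate
        ci_95 = np.exp(fit.conf_int())     # 95% CI on the OR scale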

  5. Antiretroviral therapy needs: the effect of changing global guidelines.

    PubMed

    Stanecki, Karen; Daher, Juliana; Stover, John; Beusenberg, Michel; Souteyrand, Yves; García Calleja, Jesus M

    2010-12-01

    In 2010 the WHO issued a revision of the guidelines on antiretroviral therapy (ART) for HIV infection in adults and adolescents. The recommendations included earlier diagnosis and treatment of HIV in the interest of a longer and healthier life. The current analysis explores the impact of the new criteria for initiating ART on estimates of treatment needs, compared with the previous guidelines. The analyses are based on the national models of HIV estimates for the years 1990-2009. These models produce time series estimates of ART treatment need and HIV-related mortality. The ART need estimates based on the ART eligibility criteria promoted by the 2010 WHO guidelines were compared with the need estimates based on the 2006 WHO guidelines. With the 2010 eligibility criteria, the proportion of people living with HIV currently in need of ART is estimated to increase from 34% to 49%. Globally, the need increases from 11.4 million (10.2-12.5 million) to 16.2 million (14.8-17.1 million). Regional differences include 7.4 million (6.4-8.4 million) to 10.6 million (9.7-11.5 million) in sub-Saharan Africa, 1.6 million (1.3-1.7 million) to 2.4 million (2.1-2.5 million) in Asia and 710 000 (610 000-780 000) to 950 000 (810 000-1.0 million) in Latin America and the Caribbean. When adopting the new recommendations, countries have to adapt their planning processes in order to accelerate access to life-saving drugs for those in need. These recommendations have a significant impact on resource needs. In addition to improving and prolonging the lives of infected individuals, wider treatment is expected to reduce HIV transmission and the future HIV/AIDS burden.

  6. An automated multi-model based evapotranspiration estimation framework for understanding crop-climate interactions in India

    NASA Astrophysics Data System (ADS)

    Bhattarai, N.; Jain, M.; Mallick, K.

    2017-12-01

    A remote sensing based multi-model evapotranspiration (ET) estimation framework is developed using MODIS and NASA MERRA-2 reanalysis data for data-poor regions, and we apply this framework to the Indian subcontinent. The framework eliminates the need for in-situ calibration data, hence estimates ET entirely from space, and is replicable across all regions of the world. Currently, six surface energy balance models, ranging from the widely used SEBAL, METRIC, and SEBS to the moderately used S-SEBI and SSEBop and a relatively new model, STIC1.2, are being integrated and validated. Preliminary analysis suggests good predictability of the models for estimating near-real-time ET under clear-sky conditions for various crop types in India, with coefficients of determination of 0.32-0.55 and percent bias of -15% to 28% when compared against Bowen ratio based ET estimates. The results are particularly encouraging given that no direct ground input data were used in the analysis. The framework is currently being extended to estimate seasonal ET across the Indian subcontinent using a model-ensemble approach that uses all available MODIS 8-day datasets since 2000. These ET products are being used to monitor inter-seasonal and inter-annual dynamics of ET and crop water use across different crop and irrigation practices in India. In particular, the potential impacts of changes in precipitation patterns and extreme heat (e.g., extreme degree days) on seasonal crop water consumption are being studied. Our ET products are able to locate the water stress hotspots that need to be targeted with water-saving interventions to maintain agricultural production in the face of climate variability and change.

  7. A log-linear model approach to estimation of population size using the line-transect sampling method

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1978-01-01

    The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
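
    A concrete special case helps fix ideas: with a half-normal detection function, the classic line-transect estimator D = n * f(0) / (2L) has a closed-form MLE. This textbook case (not the log-linear model of the article) looks like:

        import numpy as np

        def halfnormal_density(perp_distances, n_detected, transect_length):
            """Line-transect density with a half-normal detection function.
            MLE: sigma^2 = mean of squared perpendicular distances."""
            x = np.asarray(perp_distances, dtype=float)
            sigma2 = np.mean(x ** 2)
            f0 = np.sqrt(2.0 / (np.pi * sigma2))    # detection pdf at distance 0
            return n_detected * f0 / (2.0 * transect_length)

        # distances and transect length in km -> animals per square km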

  8. Current risk estimates based on the A-bomb survivors data - a discussion in terms of the ICRP recommendations on the neutron weighting factor.

    PubMed

    Rühm, W; Walsh, L

    2007-01-01

    Currently, most analyses of the A-bomb survivors' solid tumour and leukaemia data are based on a constant neutron relative biological effectiveness (RBE) value of 10 that is applied to all survivors, independent of their distance to the hypocentre at the time of bombing. The results of these analyses are then used as a major basis for current risk estimates suggested by the International Commission on Radiological Protection (ICRP) for use in international safety guidelines. It is shown here that (i) a constant value of 10 is not consistent with weighting factors recommended by the ICRP for neutrons and (ii) it does not account for the hardening of the neutron spectra in Hiroshima and Nagasaki, which takes place with increasing distance from the hypocentres. The purpose of this paper is to present new RBE values for the neutrons, calculated as a function of distance from the hypocentres for both cities that are consistent with the ICRP60 neutron weighting factor. If based on neutron spectra from the DS86 dosimetry system, these calculations suggest values of about 31 at 1000 m and 23 at 2000 m ground range in Hiroshima, while the corresponding values for Nagasaki are 24 and 22. If the neutron weighting factor that is consistent with ICRP92 is used, the corresponding values are about 23 and 21 for Hiroshima and 21 and 20 for Nagasaki, respectively. It is concluded that the current risk estimates will be subject to some changes in view of the changed RBE values. This conclusion does not change significantly if the new doses from the Dosimetry System DS02 are used.
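
    The distance dependence can be reproduced by averaging an energy-dependent weighting factor over the local neutron spectrum. The sketch below uses the smooth ICRP60 curve w_R(E) = 5 + 17 exp(-[ln(2E)]^2 / 6) (E in MeV), quoted from memory and worth checking against the report:

        import numpy as np

        def wr_icrp60(e_mev):
            """Continuous ICRP60 approximation to the neutron weighting factor."""
            return 5.0 + 17.0 * np.exp(-(np.log(2.0 * e_mev) ** 2) / 6.0)

        def spectrum_averaged_wr(e_mev, dose_per_bin):
            """Dose-weighted mean w_R over a neutron spectrum; as the spectrum
            hardens with ground range, this average changes with distance."""
            w = wr_icrp60(np.asarray(e_mev))
            d = np.asarray(dose_per_bin)
            return np.sum(w * d) / np.sum(d)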

  9. Epidemiology and Impact of Health Care Provider-Diagnosed Anxiety and Depression Among US Children.

    PubMed

    Bitsko, Rebecca H; Holbrook, Joseph R; Ghandour, Reem M; Blumberg, Stephen J; Visser, Susanna N; Perou, Ruth; Walkup, John T

    2018-06-01

    This study documents the prevalence and impact of anxiety and depression in US children based on the parent report of health care provider diagnosis. National Survey of Children's Health data from 2003, 2007, and 2011-2012 were analyzed to estimate the prevalence of anxiety or depression among children aged 6 to 17 years. Estimates were based on the parent report of being told by a health care provider that their child had the specified condition. Sociodemographic characteristics, co-occurrence of other conditions, health care use, school measures, and parenting aggravation were estimated using 2011-2012 data. Based on the parent report, lifetime diagnosis of anxiety or depression among children aged 6 to 17 years increased from 5.4% in 2003 to 8.4% in 2011-2012. Current anxiety or depression increased from 4.7% in 2007 to 5.3% in 2011-2012; current anxiety increased significantly, whereas current depression did not change. Anxiety and depression were associated with increased risk of co-occurring conditions, health care use, school problems, and having parents with high parenting aggravation. Children with anxiety or depression with effective care coordination or a medical home were less likely to have unmet health care needs or parents with high parenting aggravation. By parent report, more than 1 in 20 US children had current anxiety or depression in 2011-2012. Both were associated with significant comorbidity and impact on children and families. These findings may inform efforts to improve the health and well-being of children with internalizing disorders. Future research is needed to determine why child anxiety diagnoses seem to have increased from 2007 to 2012.

  10. Composite Particle Swarm Optimizer With Historical Memory for Function Optimization.

    PubMed

    Li, Jie; Zhang, JunQi; Jiang, ChangJun; Zhou, MengChu

    2015-10-01

    The particle swarm optimization (PSO) algorithm is a population-based stochastic optimization technique. It is characterized by a collaborative search in which each particle is attracted toward the global best position (gbest) in the swarm and its own best position (pbest). However, all of the particles' historically promising pbests are lost except their current pbests. To address this problem, this paper proposes a novel composite PSO algorithm, called historical memory-based PSO (HMPSO), which uses an estimation of distribution algorithm to estimate and preserve the distribution information of particles' historically promising pbests. Each particle has three candidate positions, which are generated from the historical memory, the particle's current pbest, and the swarm's gbest. The best candidate position is then adopted. Experiments on 28 CEC2013 benchmark functions demonstrate the superiority of HMPSO over other algorithms.
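
    A compact way to see the idea is a PSO loop that keeps an archive of past pbests and samples an extra candidate from a distribution fitted to it. The sketch below uses a simple Gaussian fit in place of the paper's estimation of distribution algorithm and common PSO constants; it is an illustration, not the published HMPSO:

        import numpy as np

        def hmpso(f, dim, n=30, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
            pbest = x.copy(); pval = np.apply_along_axis(f, 1, x)
            gbest = pbest[np.argmin(pval)].copy()
            archive = [pbest.copy()]                    # historical memory
            w, c1, c2 = 0.72, 1.49, 1.49                # common PSO constants
            for _ in range(iters):
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                hist = np.vstack(archive)
                for i in range(n):
                    cand_pso = x[i] + v[i]              # standard PSO move
                    cand_mem = rng.normal(hist.mean(0), hist.std(0) + 1e-12)
                    x[i] = min((cand_pso, cand_mem), key=f)  # keep better candidate
                    fx = f(x[i])
                    if fx < pval[i]:
                        pval[i], pbest[i] = fx, x[i].copy()
                gbest = pbest[np.argmin(pval)].copy()
                archive.append(pbest.copy())
            return gbest, pval.min()

        # e.g., hmpso(lambda z: float(np.sum(z ** 2)), dim=10)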

  11. Evaluating Childhood Vaccination Coverage of NIP Vaccines: Coverage Survey versus Zhejiang Provincial Immunization Information System.

    PubMed

    Hu, Yu; Chen, Yaping

    2017-07-11

    Vaccination coverage in Zhejiang province, east China, is evaluated through repeated coverage surveys. The Zhejiang provincial immunization information system (ZJIIS) was established in 2004 with links to all immunization clinics, and has become an alternative means of quickly assessing vaccination coverage. To assess the current completeness and accuracy of the vaccination coverage derived from ZJIIS, we compared the estimates from ZJIIS with those from the most recent provincial coverage survey in 2014, which combined interview data with verified data from ZJIIS. Of the 2772 children enrolled in the 2014 provincial survey, the proportions of children with vaccination cards and registered in ZJIIS were 94.0% and 87.4%, respectively. Coverage estimates from ZJIIS were systematically higher than the corresponding survey estimates, with a mean difference of 4.5%. Of the vaccination doses registered in ZJIIS, 16.7% differed from the dates recorded in the corresponding vaccination cards. Under-registration in ZJIIS significantly influenced the coverage estimates derived from it. Therefore, periodic coverage surveys currently provide more complete and reliable results than estimates based on ZJIIS alone. However, further improvement of the completeness and accuracy of ZJIIS will likely allow more reliable and timely estimates in the future.

  12. Data assimilation of ground GPS total electron content into a physics-based ionospheric model by use of the Kalman filter

    NASA Technical Reports Server (NTRS)

    Hajj, G. A.; Wilson, B. D.; Wang, C.; Pi, X.; Rosen, I. G.

    2004-01-01

    A three-dimensional (3-D) Global Assimilative Ionospheric Model (GAIM) is currently being developed by a joint University of Southern California and Jet Propulsion Laboratory (JPL) team. To estimate the electron density on a global grid, GAIM uses a first-principles ionospheric physics model and the Kalman filter as one of its possible estimation techniques.
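
    The estimation step described is the standard Kalman measurement update, blending the physics-model forecast of gridded electron density with incoming observations (e.g., slant TEC mapped through an observation operator). A textbook sketch, not the GAIM code:

        import numpy as np

        def kalman_update(x, P, z, H, R):
            """One measurement update for state x (gridded densities) with
            covariance P, observations z, operator H, and noise covariance R."""
            S = H @ P @ H.T + R                     # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
            x_new = x + K @ (z - H @ x)             # corrected state
            P_new = (np.eye(len(x)) - K @ H) @ P    # corrected covariance
            return x_new, P_new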

  13. Wood material use in the U.S. cabinet industry 1999 to 2001

    Treesearch

    David Olah; Robert Smith; Bruce. Hansen

    2003-01-01

    Fax and mail questionnaires were used to estimate consumption of wood-based products by the U.S. cabinet industry and evaluate current management issues affecting the cabinet industry. Results indicated that the cabinet industry used an estimated 484 million board feet (MMBF) of hardwood lumber. Nearly 95 percent of the hardwood lumber purchases were grade No. 1 Common...

  14. Site productivity - current estimates, change, and possible enhancements for the Northern Research Station

    Treesearch

    Scott A. Pugh

    2012-01-01

    Site productivity (SP) is the inherent capacity to grow crops of industrial wood. SP identifies the potential growth in cubic feet/acre/year and is based on the culmination of mean annual increment of fully stocked natural stands. Changes in SP were summarized for timberland and the associated effects on net growth and removal estimates were investigated using data...

  15. Estimating turbidity current conditions from channel morphology: A Froude number approach

    NASA Astrophysics Data System (ADS)

    Sequeiros, Octavio E.

    2012-04-01

    There is a growing need across different disciplines to develop better predictive tools for flow conditions of density and turbidity currents. Apart from resorting to complex numerical modeling or expensive field measurements, little is known about how to estimate gravity flow parameters from scarce available data and how they relate to each other. This study presents a new method to estimate normal flow conditions of gravity flows from channel morphology based on an extensive data set of laboratory and field measurements. The compilation consists of 78 published works containing 1092 combined measurements of velocity and concentration of gravity flows dating as far back as the early 1950s. Because the available data do not span all ranges of the critical parameters, such as bottom slope, a validated Reynolds-averaged Navier-Stokes (RANS) κ-ε numerical model is used to cover the gaps. It is shown that gravity flows fall within a range of Froude numbers spanning 1 order of magnitude centered on unity, as opposed to rivers and open-channel flows which extend to a much wider range. It is also observed that the transition from subcritical to supercritical flow regime occurs around a slope of 1%, with a spread caused by parameters other than the bed slope, like friction and suspended sediment settling velocity. The method is based on a set of equations relating Froude number to bed slope, combined friction, suspended material, and other flow parameters. The applications range from quick estimations of gravity flow conditions to improved numerical modeling and back calculation of missing parameters. A real case scenario of turbidity current estimation from a submarine canyon off the Nigerian coast is provided as an example.
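
    The Froude number in question is the densimetric one, built on the current's excess density. A one-line calculation under assumed layer-averaged values:

        import numpy as np

        def densimetric_froude(u, conc, h, rho_sed=2650.0, rho_w=1000.0):
            """Fr = U / sqrt(R * g * C * h), with R the submerged specific
            gravity (~1.65 for quartz), C the volumetric sediment
            concentration, and h the layer-averaged current thickness."""
            g = 9.81
            R = (rho_sed - rho_w) / rho_w
            return u / np.sqrt(R * g * conc * h)

        # e.g., U = 1.5 m/s, C = 1%, h = 20 m -> Fr ~ 0.83 (subcritical)
        print(densimetric_froude(1.5, 0.01, 20.0))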

  16. Acute health impacts of airborne particles estimated from satellite remote sensing.

    PubMed

    Wang, Zhaoxi; Liu, Yang; Hu, Mu; Pan, Xiaochuan; Shi, Jing; Chen, Feng; He, Kebin; Koutrakis, Petros; Christiani, David C

    2013-01-01

    Satellite-based remote sensing provides a unique opportunity to monitor air quality from space at global, continental, national and regional scales. Most current research has focused on developing empirical models using ground measurements of ambient particulate matter. However, the application of satellite-based exposure assessment in environmental health is still limited, especially for acute effects, because the development of satellite PM(2.5) models depends on the availability of ground measurements. We tested the hypothesis that MODIS AOD (aerosol optical depth) exposure estimates, obtained from NASA satellites, are directly associated with daily health outcomes. Three independent healthcare databases were used: unscheduled outpatient visits, hospital admissions, and mortality, collected in the Beijing metropolitan area, China during 2006. We used generalized linear models to compare the short-term effects of air pollution assessed by ground monitoring (PM(10), with adjustment for absolute humidity (AH)) and by AH-calibrated AOD. Across all databases we found that both AH-calibrated AOD and PM(10) (adjusted for AH) were consistently associated with elevated daily events on the current day and/or lag days for cardiovascular diseases, ischemic heart diseases, and COPD. The relative risks estimated by AH-calibrated AOD and PM(10) (adjusted for AH) were similar. Additionally, compared to ground PM(10), we found that AH-calibrated AOD had narrower confidence intervals for all models and was more robust in estimating current-day and lag-day effects. Our preliminary findings suggest that, with proper adjustment for meteorological factors, satellite AOD can be used directly to estimate the acute health impacts of ambient particles without prior calibration against sparse ground monitoring networks. Copyright © 2012 Elsevier Ltd. All rights reserved.
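
    The time-series regression behind such relative risks is typically a Poisson GLM of daily counts on the exposure plus meteorological terms. A self-contained sketch with simulated data (column names and the minimal confounder set are assumptions; the study's models were richer):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "visits": rng.poisson(50, 365),          # daily event counts
            "aod": rng.gamma(2.0, 0.25, 365),        # AH-calibrated AOD proxy
            "abs_humidity": rng.normal(8, 3, 365),
            "temperature": rng.normal(12, 10, 365),
        })
        X = sm.add_constant(df[["aod", "abs_humidity", "temperature"]])
        fit = sm.GLM(df["visits"], X, family=sm.families.Poisson()).fit()
        rr_per_unit_aod = np.exp(fit.params["aod"])  # relative risk per unit AOD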

  17. Electric field effects on ion currents in satellite wakes

    NASA Technical Reports Server (NTRS)

    Parks, D. E.; Katz, I.

    1985-01-01

    Small currents associated with satellite spin, dielectric conduction, or trace concentrations of H+ can have a substantial effect on the potential of a satellite and the particle currents reaching its surface. The importance of such small currents at altitudes below about 300 km stems from the extremely small O+ currents impinging on the wake side of the spacecraft. The particle current on the downstream side of the AE-C satellite is considered. Theoretical estimates based on a newly described constant of the motion of a particle indicate that accounting for small concentrations of H+ removes a major discrepancy between calculated and measured currents.

  18. Estimation of Operating Condition of Appliances Using Circuit Current Data on Electric Distribution Boards

    NASA Astrophysics Data System (ADS)

    Iwafune, Yumiko; Ogimoto, Kazuhiko; Yagita, Yoshie

    Demand-side energy management systems (EMS) are expected to enhance the supply-demand balancing capability of a power system under the anticipated penetration of renewable energy generation such as photovoltaics (PV). Elucidating the energy consumption structure of a building is an important element in realizing EMS and contributes to identifying potential energy savings. In this paper, we propose a method for estimating the operating condition of household appliances using circuit current data from an electric distribution board. Circuit current waveforms are classified by shape using a self-organizing map and aggregated by appliance based on customer-supplied information about the appliances owned. The proposed method is verified using data from a residential energy consumption measurement survey.
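
    As an illustration of the clustering step, a self-organizing map can group circuit-current waveforms by shape; a minimal sketch assuming the third-party MiniSom library and synthetic data (the paper's SOM configuration and appliance-matching rules are not shown):

    ```python
    # Group circuit-current waveforms by shape with a self-organizing map.
    # Data are synthetic placeholders; map size and training length are assumed.
    import numpy as np
    from minisom import MiniSom

    waveforms = np.random.rand(500, 100)   # 500 waveforms, 100 samples each

    som = MiniSom(6, 6, input_len=100, sigma=1.0, learning_rate=0.5)
    som.train_random(waveforms, num_iteration=5000)

    # Waveforms mapped to the same best-matching unit share a load shape; these
    # clusters would then be matched to the customer's reported appliance list.
    clusters = [som.winner(w) for w in waveforms]
    ```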

  19. dropEst: pipeline for accurate estimation of molecular counts in droplet-based single-cell RNA-seq experiments.

    PubMed

    Petukhov, Viktor; Guo, Jimin; Baryawno, Ninib; Severe, Nicolas; Scadden, David T; Samsonova, Maria G; Kharchenko, Peter V

    2018-06-19

    Recent single-cell RNA-seq protocols based on droplet microfluidics use massively multiplexed barcoding to enable simultaneous measurements of transcriptomes for thousands of individual cells. The increasing complexity of such data creates challenges for subsequent computational processing and troubleshooting of these experiments, with few software options currently available. Here, we describe a flexible pipeline for processing droplet-based transcriptome data that implements barcode corrections, classification of cell quality, and diagnostic information about the droplet libraries. We introduce advanced methods for correcting composition bias and sequencing errors affecting cellular and molecular barcodes to provide more accurate estimates of molecular counts in individual cells.
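
    To make the barcode-correction idea concrete, the sketch below shows the simplest ingredient, single-mismatch correction against a whitelist; dropEst itself uses more sophisticated, error-model-driven corrections, so this is illustrative only:

    ```python
    # Correct a cell barcode to a whitelist entry within Hamming distance 1,
    # dropping ambiguous cases. Illustrative only; not the dropEst algorithm.
    from itertools import product

    def correct_barcode(bc, whitelist):
        """Return the unique whitelist barcode within Hamming distance 1, else None."""
        if bc in whitelist:
            return bc
        hits = set()
        for i, base in product(range(len(bc)), "ACGT"):
            cand = bc[:i] + base + bc[i + 1:]
            if cand != bc and cand in whitelist:
                hits.add(cand)
        return hits.pop() if len(hits) == 1 else None

    print(correct_barcode("ACGA", {"ACGT", "TTTT"}))  # -> "ACGT"
    ```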

  20. Gravitational wave searches using the DSN (Deep Space Network)

    NASA Technical Reports Server (NTRS)

    Nelson, S. J.; Armstrong, J. W.

    1988-01-01

    The Deep Space Network Doppler spacecraft link is currently the only method available for broadband gravitational wave searches in the 0.01 to 0.001 Hz frequency range. The DSN's role in the worldwide search for gravitational waves is described by first summarizing, from the literature, current theoretical estimates of gravitational wave strengths and time scales from various astrophysical sources. Current and future detection schemes for ground-based and space-based detectors are then discussed. Past, present, and future planned or proposed gravitational wave experiments using DSN Doppler tracking are described. Lastly, some major technical challenges to improving gravitational wave sensitivities using the DSN are discussed.

  1. 10 CFR Appendix D to Part 30 - Criteria Relating to Use of Financial Tests and Self-Guarantee for Providing Reasonable Assurance...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on furnishing its own guarantee that funds will be available for decommissioning costs and on a... or at least 10 times the total current decommissioning cost estimate (or the current amount required... materially adversely affect the company's ability to pay for decommissioning costs. In connection with the...

  2. 10 CFR Appendix D to Part 30 - Criteria Relating to Use of Financial Tests and Self-Guarantee for Providing Reasonable Assurance...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... based on furnishing its own guarantee that funds will be available for decommissioning costs and on a... or at least 10 times the total current decommissioning cost estimate (or the current amount required... materially adversely affect the company's ability to pay for decommissioning costs. In connection with the...

  3. Income, Poverty, and Health Insurance Coverage in the United States: 2012. Current Population Reports P60-245

    ERIC Educational Resources Information Center

    DeNavas-Walt, Carmen; Proctor, Bernadette D.; Smith, Jessica C.

    2013-01-01

    This report presents data on income, poverty, and health insurance coverage in the United States based on information collected in the 2013 and earlier Current Population Survey Annual Social and Economic Supplements (CPS ASEC) conducted by the U.S. Census Bureau. For most groups, the 2012 income, poverty, and health insurance estimates were not…

  4. Assessment of NHTSA’s Report “Relationships Between Fatality Risk, Mass, and Footprint in Model Year 2004-2011 Passenger Cars and LTVs” (LBNL Phase 1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, Tom P.

    In its 2012 report, NHTSA simulated the effect four fleetwide mass reduction scenarios would have on annual fatalities. NHTSA estimated that the most aggressive of these scenarios (reducing mass 5.2% in heavier light trucks and 2.6% in all other vehicle types except lighter cars) would result in a small reduction in societal fatalities. LBNL replicated the methodology NHTSA used to simulate six mass reduction scenarios, including the mass reductions recommended in the 2015 NRC committee report and those estimated for 2021 and 2025 by EPA in the TAR, using the updated data through 2012. The analysis indicates that the estimated change in fatalities under each scenario based on the updated analysis is comparable to that in the 2012 analysis, but less beneficial or more detrimental than that in the 2016 analysis. For example, an across-the-board 100-lb reduction in mass would result in an estimated 157 additional annual fatalities based on the 2012 analysis, but only an estimated 91 additional annual fatalities based on the 2016 analysis, and an additional 87 fatalities based on the current analysis. The mass reductions recommended by the 2015 NRC committee report would result in an increase of 224 annual fatalities in the 2012 analysis, a decrease of 344 annual fatalities in the 2016 analysis, and an increase of 141 fatalities in the current analysis. The mass reductions EPA estimated for 2025 in the TAR would result in a decrease of 203 fatalities based on the 2016 analysis, but an increase of 39 fatalities based on the current analysis. These results support NHTSA's conclusion from its 2012 study that, when footprint is held fixed, “no judicious combination of mass reductions in the various classes of vehicles results in a statistically significant fatality increase and many potential combinations are safety-neutral as point estimates.” Like the previous NHTSA studies, this updated report concludes that the estimated effect of mass reduction while maintaining footprint on societal U.S. fatality risk is small, and not statistically significant at the 95% or 90% confidence level for all vehicle types based on the jack-knife method NHTSA used. This report also finds that the estimated effects of other control variables, such as vehicle type, specific safety technologies, and crash conditions such as whether the crash occurred at night, in a rural county, or on a high-speed road, are much larger, in some cases two orders of magnitude larger, than the estimated effect of mass or footprint reduction on risk. Finally, this report shows that after accounting for the many vehicle, driver, and crash variables NHTSA used in its regression analyses, there remains a wide variation in risk by vehicle make and model, and this variation is unrelated to vehicle mass. Although the purpose of the NHTSA and LBNL reports is to estimate the effect of vehicle mass reduction on societal risk, this is not exactly what the regression models are estimating. Rather, they are estimating the recent historical relationship between mass and risk, after accounting for most measurable differences between vehicles, drivers, and crash times and locations. In essence, the regression models are comparing the risk of a 2600-lb Dodge Neon with that of a 2500-lb Honda Civic, after attempting to account for all other differences between the two vehicles. The models are not estimating the effect of literally removing 100 pounds from the Neon, leaving everything else unchanged.
    In addition, the analyses are based on the relationship of vehicle mass and footprint to risk for recent vehicle designs (model years 2004 to 2011). These relationships may or may not continue into the future as manufacturers adopt new vehicle designs and incorporate new technologies, such as more extensive use of strong lightweight materials and specific safety technologies. Therefore, throughout this report we use the phrase “the estimated effect of mass (or footprint) reduction on risk” as shorthand for “the estimated change in risk as a function of its relationship to mass (or footprint) for vehicle models of recent design.”

  5. Probability based remaining capacity estimation using data-driven and neural network model

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai

    2016-05-01

    Since lithium-ion batteries are assembled in large numbers into packs and are complex electrochemical devices, their monitoring and safety are key issues for applications of battery technology. An accurate estimation of battery remaining capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an n-order RC equivalent circuit model combined with an electrochemical model is employed to obtain more accurate voltage predictions. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operating current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimates can be obtained by the proposed method.
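
    For readers unfamiliar with RC equivalent-circuit models, a first-order (single-RC) discrete-time version conveys the idea; all parameter values and the OCV curve below are assumptions for illustration, not the authors' identified model:

    ```python
    # First-order Thevenin battery model: terminal voltage from SOC, ohmic drop,
    # and one polarization branch. Parameters and OCV map are illustrative.
    import numpy as np

    R0, R1, C1 = 0.05, 0.02, 1000.0        # ohm, ohm, farad (assumed)
    dt, capacity_As = 1.0, 2.0 * 3600      # 1 s step; 2 Ah cell

    def ocv(soc):
        return 3.0 + 1.2 * soc             # crude linear OCV-SOC map (assumed)

    def step(soc, v_rc, current):
        """Advance one step for a discharge current (A); return terminal voltage."""
        soc -= current * dt / capacity_As                  # coulomb counting
        a = np.exp(-dt / (R1 * C1))
        v_rc = a * v_rc + (1 - a) * R1 * current           # polarization voltage
        return soc, v_rc, ocv(soc) - current * R0 - v_rc

    soc, v_rc = 0.9, 0.0
    for i_amp in (1.0, 1.0, 0.5):                          # simple current profile
        soc, v_rc, v_term = step(soc, v_rc, i_amp)
    ```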

  6. An investigation of using an RQP based method to calculate parameter sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many modern-day algorithms for engineering optimization. The most common application of problem sensitivities has been the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al. 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second-order information about the Lagrangian at the current point, and (2) the assumption that the active set of constraints does not change. The first of these two problems is addressed here, and a new algorithm is proposed that does not require explicit calculation of second-order information.
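
    For context, the second-order information referred to enters through the classical sensitivity system obtained by differentiating the Karush-Kuhn-Tucker conditions (a Fiacco-type result); a schematic statement, assuming a fixed active set g(x, p) = 0:

    \[
    \begin{bmatrix} \nabla^2_{xx} L & \nabla g \\ \nabla g^{\mathsf{T}} & 0 \end{bmatrix}
    \begin{bmatrix} dx^*/dp \\ d\lambda^*/dp \end{bmatrix}
    = -\begin{bmatrix} \nabla^2_{xp} L \\ \partial g / \partial p \end{bmatrix},
    \]

    where L(x, λ, p) is the Lagrangian. Forming the Hessian of L explicitly is the expensive step the proposed RQP-based algorithm is designed to avoid.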

  7. Spatial-temporal variability in groundwater abstraction across Uganda: Implications to sustainable water resources management

    NASA Astrophysics Data System (ADS)

    Nanteza, J.; Thomas, B. F.; Mukwaya, P. I.

    2017-12-01

    The general lack of knowledge about current rates of water abstraction/use is a challenge to sustainable water resources management in many countries, including Uganda. Estimates of water abstraction/use rates over Uganda currently available from the FAO are not disaggregated by source, making it difficult to understand how much is taken from individual water stores and limiting effective management. Modelling efforts have disaggregated water use rates by source (i.e., groundwater and surface water). However, in Sub-Saharan African countries these model-based estimates are highly uncertain given the scale limitations in applying water use data (i.e., point versus regional), which affects model calibration/validation. In this study, we utilize data from the water supply atlas project over Uganda to estimate current rates of groundwater abstraction across the country based on location, well type and other relevant information. GIS techniques are employed to demarcate areas served by each water source. These areas are combined with past population distributions and the average daily water requirement per person to estimate water abstraction/use through time. The results indicate an increase in groundwater use and isolate regions prone to groundwater depletion, where improved management is required to sustainably manage groundwater use.

  8. Sliding Mode Observer-Based Current Sensor Fault Reconstruction and Unknown Load Disturbance Estimation for PMSM Driven System

    PubMed Central

    Li, Xiangfei; Lin, Yuliang

    2017-01-01

    This paper proposes a new scheme for reconstructing current sensor faults and estimating unknown load disturbance for a permanent magnet synchronous motor (PMSM)-driven system. First, the original PMSM system is transformed into two subsystems: the first subsystem has unknown system load disturbances, which are unrelated to sensor faults, and the second subsystem has sensor faults but is free from unknown load disturbances. By introducing a new state variable, the subsystem with sensor faults can be augmented so that the sensor faults appear as actuator faults. Second, two sliding mode observers (SMOs) are designed: the unknown load disturbance is estimated by the first SMO in the subsystem with unknown load disturbance, and the sensor faults are reconstructed using the second SMO in the augmented subsystem. The gains of the proposed SMOs and their stability analysis are developed via the solution of linear matrix inequalities (LMIs). Finally, the effectiveness of the proposed scheme is verified by simulations and experiments. The results demonstrate that the proposed scheme can reconstruct current sensor faults and estimate unknown load disturbance for the PMSM-driven system. PMID:29211017

  9. Parameter estimation of anisotropic Manning's n coefficient for advanced circulation (ADCIRC) modeling of estuarine river currents (lower St. Johns River)

    NASA Astrophysics Data System (ADS)

    Demissie, Henok K.; Bacopoulos, Peter

    2017-05-01

    A rich dataset of time- and space-varying velocity measurements for a macrotidal estuary was used in the development of a vector-based formulation of bottom roughness in the Advanced Circulation (ADCIRC) model. The updates to the parallel code of ADCIRC to include directionally based drag coefficient are briefly discussed in the paper, followed by an application of the data assimilation (nudging analysis) to the lower St. Johns River (northeastern Florida) for parameter estimation of anisotropic Manning's n coefficient. The method produced converging estimates of Manning's n values for ebb (0.0290) and flood (0.0219) when initialized with uniform and isotropic setting of 0.0200. Modeled currents, water levels and flows were improved at observation locations where data were assimilated as well as at monitoring locations where data were not assimilated, such that the method increases model skill locally and non-locally with regard to the data locations. The methodology is readily transferrable to other circulation/estuary models, given pre-developed quality mesh/grid and adequate data available for assimilation.

  10. Current and projected water demand and water availability estimates under climate change scenarios in the Weyib River basin in Bale mountainous area of Southeastern Ethiopia

    NASA Astrophysics Data System (ADS)

    Serur, Abdulkerim Bedewi; Sarma, Arup Kumar

    2017-07-01

    This study estimates the spatial and temporal variation of current and projected water demand and water availability under climate change scenarios in the Weyib River basin, Bale mountainous area of Southeastern Ethiopia. Future downscaled climate variables from three Earth System Models under three RCP emission scenarios were input into the ArcSWAT hydrological model to simulate different components of the basin's water resources, whereas current and projected human and livestock populations of the basin were used to estimate the total annual water demand for various purposes. Results revealed that the current total annual water demand of the basin is about 289 Mm3 and is projected to increase by 83.47% after 15 years, 200.67% after 45 years, and 328.78% after 75 years (the 2020s, 2050s, and 2080s, respectively) from the base-period water demand, mainly due to a very rapidly increasing population (40.81, 130.80, and 229.12% by the 2020s, 2050s, and 2080s, respectively) and climatic variability. The future average annual total water availability in the basin is projected to increase by 15.04-21.61, 20.08-23.34, and 16.21-39.53% in the 2020s, 2050s, and 2080s time slices, respectively, from the base-period available water resources (2333.39 Mm3). The current annual per capita water availability of the basin is about 3112.23 m3 and tends to decline by 11.78-17.49, 46.02-47.45, and 57.18-64.34% by the 2020s, 2050s, and 2080s, respectively, from the base-period value. This indicates that the basin may fall under water-stress conditions in the long term.

  11. Agent-Based Simulations for Project Management

    NASA Technical Reports Server (NTRS)

    White, J. Chris; Sholtes, Robert M.

    2011-01-01

    Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.

  12. Junction-to-Case Thermal Resistance of a Silicon Carbide Bipolar Junction Transistor Measured

    NASA Technical Reports Server (NTRS)

    Niedra, Janis M.

    2006-01-01

    The junction temperature of a prototype SiC-based bipolar junction transistor (BJT) was estimated by using the base-emitter voltage (V(sub BE)) characteristic for thermometry. The V(sub BE) was measured as a function of the base current (I(sub B)) at selected temperatures (T), all at a fixed collector current (I(sub C)) and under very low duty cycle pulse conditions. Under such conditions, the average temperature of the chip was taken to be the same as that of the temperature-controlled case. At an increased duty cycle sufficient to substantially heat the chip, but with the same I(sub C) pulse height, the chip temperature was identified by matching the V(sub BE) to the thermometry curves. From the measured average power, the chip-to-case thermal resistance could be estimated, giving a reasonable value. A tentative explanation for the observed bunching of the calibration curves with increasing temperature may be increasing dopant atom ionization. A first-cut analysis, however, does not support this.
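
    The final step reduces to a simple ratio; a worked illustration with placeholder numbers (the article's measured values are not quoted here):

    ```python
    # Junction-to-case thermal resistance from V_BE thermometry: once the chip
    # temperature is read off the calibration curves, R_jc = (T_j - T_case) / P.
    # All numbers below are assumed for illustration, not the measured BJT data.
    t_junction = 85.0   # degC, inferred by matching V_BE to the thermometry curves
    t_case = 25.0       # degC, temperature-controlled case
    p_avg = 12.0        # W, measured average dissipated power

    r_jc = (t_junction - t_case) / p_avg
    print(f"R_jc ~ {r_jc:.1f} K/W")   # 5.0 K/W for these assumed values
    ```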

  13. Influence of current climate, historical climate stability and topography on species richness and endemism in Mesoamerican geophyte plants

    PubMed Central

    2017-01-01

    Background A number of biotic and abiotic factors have been proposed as drivers of geographic variation in species richness. As biotic elements, inter-specific interactions are the most widely recognized. Among abiotic factors, in particular for plants, climate and topographic variables as well as their historical variation have been correlated with species richness and endemism. In this study, we determine the extent to which the species richness and endemism of monocot geophyte species in Mesoamerica are predicted by current climate, historical climate stability and topography. Methods Using approximately 2,650 occurrence points representing 507 geophyte taxa, species richness (SR) and weighted endemism (WE) were estimated at a geographic scale using grids of 0.5 × 0.5 decimal degree resolution, with Mexico as the geographic extent. SR and WE were also estimated using species distributions inferred from ecological niche modeling for species with at least five spatially unique occurrence points. Current climate, current-to-Last Glacial Maximum temperature and precipitation stability, and topographic features were used as predictor variables in multiple spatial regression analyses (i.e., spatial autoregressive models, SAR) using the estimates of SR and WE as response variables. The standardized coefficients of the predictor variables that were significant in the regression models were utilized to understand the observed patterns of species richness and endemism. Results Our estimates of SR and WE based on direct occurrence data and distribution modeling generally yielded similar results, though estimates based on ecological niche modeling indicated broader distribution areas for SR and WE than when species richness was directly estimated using georeferenced coordinates. The SR and WE of monocot geophytes were highest along the Trans-Mexican Volcanic Belt, in both cases with higher levels in the central area of this mountain chain. Richness and endemism were also elevated in the southern regions of the Sierra Madre Oriental and Occidental mountain ranges, and in the Tehuacán Valley. Some areas of the Sierra Madre del Sur and Sierra Madre Oriental had high levels of WE, though they are not the areas with the highest SR. The spatial regressions suggest that SR is mostly influenced by current climate, whereas endemism is mainly affected by topography and precipitation stability. Conclusions Both methods (direct occurrence data and ecological niche modeling) used to estimate SR and WE in this study yielded similar results and detected a key area that should be considered in plant conservation strategies: the central region of the Trans-Mexican Volcanic Belt. Our results also corroborated that species richness is more closely correlated with current climate factors while endemism is related to differences in topography and to changes in precipitation levels compared to the LGM climatic conditions. PMID:29062605

  14. Estimate of potential benefit for Europe of fitting Autonomous Emergency Braking (AEB) systems for pedestrian protection to passenger cars.

    PubMed

    Edwards, Mervyn; Nathanson, Andrew; Wisch, Marcus

    2014-01-01

    The objective of the current study was to estimate the benefit for Europe of fitting precrash braking systems to cars that detect pedestrians and autonomously brake the car to prevent or lower the speed of the impact with the pedestrian. The analysis was divided into 2 main parts: (1) Develop and apply methodology to estimate benefit for Great Britain and Germany; (2) scale Great Britain and German results to give an indicative estimate for Europe (EU27). The calculation methodology developed to estimate the benefit was based on 2 main steps: 1. Calculate the change in the impact speed distribution curve for pedestrian casualties hit by the fronts of cars assuming pedestrian autonomous emergency braking (AEB) system fitment. 2. From this, calculate the change in the number of fatally, seriously, and slightly injured casualties by using the relationship between risk of injury and the casualty impact speed distribution to sum the resulting risks for each individual casualty. The methodology was applied to Great Britain and German data for 3 types of pedestrian AEB systems representative of (1) currently available systems; (2) future systems with improved performance, which are expected to be available in the next 2-3 years; and (3) reference limit system, which has the best performance currently thought to be technically feasible. Nominal benefits estimated for Great Britain ranged from £119 million to £385 million annually and for Germany from €63 million to €216 million annually depending on the type of AEB system assumed fitted. Sensitivity calculations showed that the benefit estimated could vary from about half to twice the nominal estimate, depending on factors such as whether or not the system would function at night and the road friction assumed. Based on scaling of estimates made for Great Britain and Germany, the nominal benefit of implementing pedestrian AEB systems on all cars in Europe was estimated to range from about €1 billion per year for current generation AEB systems to about €3.5 billion for a reference limit system (i.e., best performance thought technically feasible at present). Dividing these values by the number of new passenger cars registered in Europe per year gives an indication that the cost of a system per car should be less than ∼€80 to ∼€280 for it to be cost effective. The potential benefit of fitting AEB systems to cars in Europe for pedestrian protection has been estimated and the results interpreted to indicate the upper limit of cost for a system to allow it to be cost effective.
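
    The two calculation steps can be sketched compactly; the speed reduction and the injury-risk curve below are invented placeholders, not the study's AEB performance maps or risk functions:

    ```python
    # Step 1: shift the casualty impact-speed distribution for assumed AEB braking.
    # Step 2: sum per-casualty injury risks before and after to estimate the benefit.
    import numpy as np

    rng = np.random.default_rng(0)
    impact_speeds = rng.uniform(10, 60, size=10_000)   # km/h, baseline casualties

    def fatality_risk(v_kmh):
        """Toy logistic risk-vs-impact-speed curve (assumption)."""
        return 1.0 / (1.0 + np.exp(-(v_kmh - 45.0) / 6.0))

    with_aeb = np.clip(impact_speeds - 15.0, 0.0, None)   # assumed 15 km/h shed
    remaining = with_aeb[with_aeb > 0]                    # zero speed = avoided crash

    saved = fatality_risk(impact_speeds).sum() - fatality_risk(remaining).sum()
    print(f"estimated fatalities prevented per 10,000 casualties: {saved:.0f}")
    ```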

  15. Multivariate Granger causality: an estimation framework based on factorization of the spectral density matrix

    PubMed Central

    Wen, Xiaotong; Rangarajan, Govindan; Ding, Mingzhou

    2013-01-01

    Granger causality is increasingly being applied to multi-electrode neurophysiological and functional imaging data to characterize directional interactions between neurons and brain regions. For a multivariate dataset, one might be interested in different subsets of the recorded neurons or brain regions. According to the current estimation framework, for each subset, one conducts a separate autoregressive model fitting process, introducing the potential for unwanted variability and uncertainty. In this paper, we propose a multivariate framework for estimating Granger causality. It is based on spectral density matrix factorization and offers the advantage that the estimation of such a matrix needs to be done only once for the entire multivariate dataset. For any subset of recorded data, Granger causality can be calculated through factorizing the appropriate submatrix of the overall spectral density matrix. PMID:23858479
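
    For reference, once the spectral matrix is factorized as S(ω) = H(ω)ΣH*(ω), pairwise Granger causality follows in closed form; a standard Geweke-type statement for the bivariate case, assuming the usual normalization that removes instantaneous correlation:

    \[
    f_{y \to x}(\omega) \;=\; \ln \frac{S_{xx}(\omega)}{\tilde{H}_{xx}(\omega)\,\Sigma_{xx}\,\tilde{H}_{xx}^{*}(\omega)},
    \]

    where H̃ is the transfer function after that normalization. Because any subset's spectral matrix is a submatrix of the full S(ω), the factorize-and-evaluate step can be repeated per subset without refitting autoregressive models, which is the economy the abstract highlights.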

  16. Infant and under-five mortality in Afghanistan: current estimates and limitations

    PubMed Central

    Becker, Stan; Hansen, Peter M; Kumar, Dhirendra; Kumar, Binay; Niayesh, Haseebullah; Peters, David H; Burnham, Gilbert

    2010-01-01

    Abstract Objective To examine historical estimates of infant and under-five mortality in Afghanistan, provide estimates for rural areas from current population-based data, and discuss the methodological challenges that undermine data quality and hinder retrospective estimations of mortality. Methods Indirect methods of estimation were used to calculate infant and under-five mortality from a household survey conducted in 2006. Sex-specific differences in underreporting of births and deaths were examined and sensitivity analyses were conducted to assess the effect of underreporting on infant and under-five mortality. Findings For 2004, rural unadjusted infant and under-five mortality rates were estimated to be 129 and 191 deaths per 1000 live births, respectively, with some evidence indicating underreporting of female deaths. If adjustment for underreporting is made (i.e. by assuming 50% of the unreported girls are dead), mortality estimates go up to 140 and 209, respectively. Conclusion Commonly used estimates of infant and under-five mortality in Afghanistan are outdated; they do not reflect changes that have occurred in the past 15 years or recent intensive investments in health services development, such as the implementation of the Basic Package of Health Services. The sociocultural aspects of mortality and their effect on the reporting of births and deaths in Afghanistan need to be investigated further. PMID:20680122

  17. Deriving Continuous Fields of Tree Cover at 1-m over the Continental United States From the National Agriculture Imagery Program (NAIP) Imagery to Reduce Uncertainties in Forest Carbon Stock Estimation

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Milesi, C.; Votava, P.; Nemani, R. R.

    2013-12-01

    An unresolved issue with coarse-to-medium resolution satellite-based forest carbon mapping over regional to continental scales is the high level of uncertainty in above ground biomass (AGB) estimates caused by the absence of forest cover information at a sufficiently high spatial resolution (the current spatial resolution is limited to 30-m). To put confidence in existing satellite-derived AGB density estimates, it is imperative to create continuous fields of tree cover at a sufficiently high resolution (e.g. 1-m) such that large uncertainties in forested area are reduced. The proposed work will provide means to reduce uncertainty in present satellite-derived AGB maps and Forest Inventory and Analysis (FIA) based regional estimates. Our primary objective will be to create Very High Resolution (VHR) estimates of tree cover at a spatial resolution of 1-m for the Continental United States using all available National Agriculture Imagery Program (NAIP) color-infrared imagery from 2010 to 2012. We will leverage the existing capabilities of the NASA Earth Exchange (NEX) high performance computing and storage facilities. The proposed 1-m tree cover map can be further aggregated to provide percent tree cover at any medium-to-coarse resolution spatial grid, which will aid in reducing uncertainties in AGB density estimation at the respective grid and overcome current limitations imposed by medium-to-coarse resolution land cover maps. We have implemented a scalable and computationally efficient parallelized framework for tree-cover delineation; the core components of the algorithm include a feature extraction process, a Statistical Region Merging image segmentation algorithm, and a classification algorithm based on a Deep Belief Network and a feedforward backpropagation neural network. An initial pilot exercise has been performed over the state of California (~11,000 scenes) to create a wall-to-wall 1-m tree cover map, and the classification accuracy has been assessed. Results show an improvement in the accuracy of tree-cover delineation compared to existing forest cover maps from NLCD, especially over fragmented, heterogeneous and urban landscapes. Estimates of VHR tree cover will complement and enhance the accuracy of present remote-sensing-based AGB modeling approaches and forest-inventory-based estimates at both national and local scales. A requisite step will be to characterize the inherent uncertainties in tree cover estimates and propagate them into estimates of AGB.

  18. Estimating micro area behavioural risk factor prevalence from large population-based surveys: a full Bayesian approach.

    PubMed

    Seliske, L; Norwood, T A; McLaughlin, J R; Wang, S; Palleschi, C; Holowaty, E

    2016-06-07

    An important public health goal is to decrease the prevalence of key behavioural risk factors, such as tobacco use and obesity. Survey information is often available at the regional level, but heterogeneity within large geographic regions cannot be assessed. Advanced spatial analysis techniques are demonstrated to produce sensible micro area estimates of behavioural risk factors that enable identification of areas with high prevalence. A spatial Bayesian hierarchical model was used to estimate the micro area prevalence of current smoking and excess bodyweight for the Erie-St. Clair region in southwestern Ontario. Estimates were mapped for male and female respondents of five cycles of the Canadian Community Health Survey (CCHS). The micro areas were 2006 Census Dissemination Areas, with an average population of 400-700 people. Two individual-level models were specified: one controlled for survey cycle and age group (model 1), and one controlled for survey cycle, age group and micro area median household income (model 2). Post-stratification was used to derive micro area behavioural risk factor estimates weighted to the population structure. SaTScan analyses were conducted on the granular, postal-code level CCHS data to corroborate findings of elevated prevalence. Current smoking was elevated in two urban areas for both sexes (Sarnia and Windsor), and an additional small community (Chatham) for males only. Areas of excess bodyweight were prevalent in an urban core (Windsor) among males, but not females. Precision of the posterior post-stratified current smoking estimates was improved in model 2, as indicated by narrower credible intervals and a lower coefficient of variation. For excess bodyweight, both models had similar precision. Aggregation of the micro area estimates to CCHS design-based estimates validated the findings. This is among the first studies to apply a full Bayesian model to complex sample survey data to identify micro areas with variation in risk factor prevalence, accounting for spatial correlation and other covariates. Application of micro area analysis techniques helps define areas for public health planning, and may be informative to surveillance and research modeling of relevant chronic disease outcomes.

  19. The estimated future disease burden of hepatitis C virus in the Netherlands with different treatment paradigms.

    PubMed

    Willemse, S B; Razavi-Shearer, D; Zuure, F R; Veldhuijzen, I K; Croes, E A; van der Meer, A J; van Santen, D K; de Vree, J M; de Knegt, R J; Zaaijer, H L; Reesink, H W; Prins, M; Razavi, H

    2015-11-01

    Prevalence of hepatitis C virus (HCV) infection in the Netherlands is low (anti-HCV prevalence 0.22%). All-oral treatment with direct-acting antivirals (DAAs) is tolerable and effective but expensive. Our analysis projected the future HCV-related disease burden in the Netherlands by applying different treatment scenarios. Using a modelling approach, the size of the HCV-viraemic population in the Netherlands in 2014 was estimated using available data and expert consensus. The base scenario (based on the current Dutch situation) and different treatment scenarios (with increased efficacy, treatment uptake, and diagnoses) were modelled and the future HCV disease burden was predicted for each scenario. The estimated number of individuals with viraemic HCV infection in the Netherlands in 2014 was 19,200 (prevalence 0.12%). By 2030, this number is projected to decrease by 45% in the base scenario and by 85% if the number of treated patients increases. Furthermore, the number of individuals with hepatocellular carcinoma and liver-related deaths is estimated to decrease by 19% and 27%, respectively, in the base scenario, but may both be further decreased by 68% when focusing on treatment of HCV patients with a fibrosis stage of ≥ F2. A substantial reduction in HCV-related disease burden is possible with increases in treatment uptake as the efficacy of current therapies is high. Further reduction of HCV-related disease burden may be achieved through increases in diagnosis and preventative measures. These results might inform the further development of effective disease management strategies in the Netherlands.

  20. Cover estimation and payload location using Markov random fields

    NASA Astrophysics Data System (ADS)

    Quach, Tu-Thach

    2014-02-01

    Payload location is an approach to finding the message bits hidden in steganographic images, though not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random fields to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.
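
    As a point of reference, a pairwise-constraint formulation of this kind is conventionally written as an energy over cover-pixel labels; a generic form (the paper's specific potentials are not reproduced):

    \[
    E(\mathbf{c} \mid \mathbf{s}) \;=\; \sum_{i} \phi_i(c_i, s_i) \;+\; \sum_{(i,j)\in\mathcal{N}} \psi_{ij}(c_i, c_j),
    \]

    where s is the observed stego image, c the cover estimate, φ ties each cover pixel to its observation, and ψ encodes the two-dimensional statistics of natural covers over neighboring pairs N; the residual between the minimizing c and s is what localizes the payload.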

  1. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    PubMed

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming

    2014-01-01

    The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated, a common transformation being the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports a negative binomial ICC estimate that includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r, p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were log- or square-root transformed. A second comparison targeting a wider range of ICC values showed that the mean estimated ICC closely approximated the true ICC.

  2. Estimated Effects of Different Alcohol Taxation and Price Policies on Health Inequalities: A Mathematical Modelling Study

    PubMed Central

    Meier, Petra S.; Holmes, John; Angus, Colin; Ally, Abdallah K.; Meng, Yang; Brennan, Alan

    2016-01-01

    Introduction While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO “best buy” intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. Methods and Findings An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax increase, −3.2%; value-based tax, −2.9%; strength-based tax, −6.1%; minimum unit pricing, −7.8%) and lesser impacts among drinkers in professional/managerial occupations (for heavy drinkers: current tax increase, −1.3%; value-based tax, −1.4%; strength-based tax, +0.2%; minimum unit pricing, +0.8%). Results from the PSA give slightly greater mean effects for both the routine/manual (current tax increase, −3.6% [95% uncertainty interval (UI) −6.1%, −0.6%]; value-based tax, −3.3% [UI −5.1%, −1.7%]; strength-based tax, −7.5% [UI −13.7%, −3.9%]; minimum unit pricing, −10.3% [UI −10.3%, −7.0%]) and professional/managerial occupation groups (current tax increase, −1.8% [UI −4.7%, +1.6%]; value-based tax, −1.9% [UI −3.6%, +0.4%]; strength-based tax, −0.8% [UI −6.9%, +4.0%]; minimum unit pricing, −0.7% [UI −5.6%, +3.6%]). Impacts of price changes on moderate drinkers were small regardless of income or socioeconomic group. Analysis of uncertainty shows that the relative effectiveness of the four policies is fairly stable, although uncertainty in the absolute scale of effects exists. 
Volumetric taxation and minimum unit pricing consistently outperform increasing the current tax or adding an ad valorem tax in terms of reducing mortality among the heaviest drinkers and reducing alcohol-related health inequalities (e.g., in the routine/manual occupation group, volumetric taxation reduces deaths more than increasing the current tax in 26 out of 30 probabilistic runs, minimum unit pricing reduces deaths more than volumetric tax in 21 out of 30 runs, and minimum unit pricing reduces deaths more than increasing the current tax in 30 out of 30 runs). Study limitations include reducing model complexity by not considering a largely ineffective ban on below-tax alcohol sales, special duty rates covering only small shares of the market, and the impact of tax fraud or retailer non-compliance with minimum unit prices. Conclusions Our model estimates that, compared to tax increases under the current system or introducing taxation based on product value, alcohol-content-based taxation or minimum unit pricing would lead to larger reductions in health inequalities across income groups. We also estimate that alcohol-content-based taxation and minimum unit pricing would have the largest impact on harmful drinking, with minimal effects on those drinking in moderation. PMID:26905063

  3. Estimating infertility prevalence in low-to-middle-income countries: an application of a current duration approach to Demographic and Health Survey data.

    PubMed

    Polis, Chelsea B; Cox, Carie M; Tunçalp, Özge; McLain, Alexander C; Thoma, Marie E

    2017-05-01

    Can infertility prevalence be estimated using a current duration (CD) approach when applied to nationally representative Demographic and Health Survey (DHS) data collected routinely in low- or middle-income countries? Our analysis suggests that a CD approach applied to DHS data from Nigeria provides infertility prevalence estimates comparable to other smaller studies in the same region. Despite associations with serious negative health, social and economic outcomes, infertility in developing countries is a marginalized issue in sexual and reproductive health. Obtaining reliable, nationally representative prevalence estimates is critical to address the issue, but methodological and resource challenges have impeded this goal. This cross-sectional study was based on standard information available in the DHS core questionnaire and data sets, which are collected routinely among participating low-to-middle-income countries. Our research question was examined among women participating in the 2013 Nigeria DHS (n = 38 948). Among women eligible for the study, 98% were interviewed. We applied a CD approach (i.e. current length of time-at-risk of pregnancy) to estimate time-to-pregnancy (TTP) and 12-month infertility prevalence among women 'at risk' of pregnancy at the time of interview (n = 7063). Women who were 18-44 years old, married or cohabitating, sexually active within the past 4 weeks and not currently using contraception (and had not been sterilized) were included in the analysis. Estimates were based on parametric survival methods using bootstrap methods (500 bootstrap replicates) to obtain 95% CIs. The estimated median TTP among couples at risk of pregnancy was 5.1 months (95% CI: 4.2-6.3). The estimated percentage of infertile couples was 31.1% (95% CI: 27.9-34.7%)-consistent with other smaller studies from Nigeria. Primary infertility (17.4%, 95% CI: 12.9-23.8%) was substantially lower than secondary infertility (34.1%, 95% CI: 30.3-39.3%) in this population. Overall estimates for TTP >24 or >36 months dropped to 17.7% (95% CI: 15.7-20%) and 11.5% (95% CI: 10.2-13%), respectively. Subgroup analyses showed that estimates varied by age, coital frequency and fertility intentions, while being in a polygynous relationship showed minimal impact. The CD approach may be limited by assumptions on when exposure to risk of pregnancy began and methodologic assumptions required for estimation, which may be less accurate for particular subgroups or populations. Unrecognized pregnancies may have also biased our findings; however, we attempted to address this in our exclusion criteria. Limiting to married/cohabiting couples may have excluded women who are no longer in a relationship after being blamed for infertility. Although probably rare in this setting, we lack information on couples undergoing infertility treatment. Like other TTP measurement approaches, pregnancies resulting from contraceptive failure are not included, which may bias estimates. Nationally representative estimates of TTP and infertility based on a clinical definition of 12 months have been limited within developing countries. This approach represents a pragmatic advance in our ability to measure and monitor infertility in the developing world, with potentially far-reaching implications for policies and programs intended to address reproductive health. There are no competing interests and no financial support was provided for this study. 
Financial support for Open Access publication was provided by the World Health Organization. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology.
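
    For readers unfamiliar with the CD idea, the key identity is the renewal-theory relation between the density of observed current durations and the survivor function of time-to-pregnancy; a schematic statement under the usual stationarity assumptions:

    \[
    g(t) \;=\; \frac{S(t)}{\mathbb{E}[T]} \;=\; \frac{\Pr(T > t)}{\int_0^{\infty} \Pr(T > u)\,du},
    \]

    so fitting a parametric model to the observed durations g recovers S(t), from which the median TTP and the proportion with T exceeding 12 months (the infertility prevalence) follow.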

  4. Estimating the confidence bounds for projected ozone design values under different emissions control options

    EPA Science Inventory

    In current regulatory applications, regional air quality model is applied for a base year and a future year with reduced emissions using the same meteorological conditions. The base year design value is multiplied by the ratio of the average of the top 10 ozone concentrations fo...

  5. DNA-based approach to aging martens (Martes americana and M. caurina)

    Treesearch

    Jonathan N. Pauli; John P. Whiteman; Bruce G. Marcot; Terry M. McClean; Merav Ben-David

    2011-01-01

    Demographic structure is central to understanding the dynamics of animal populations. However, determining the age of free-ranging mammals is difficult, and currently impossible when sampling with noninvasive, genetic-based approaches. We present a method to estimate age class by combining measures of telomere lengths with other biologically meaningful covariates in a...

  6. Research-Based Educational Practices for Students with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Ryan, Joseph B.; Hughes, Elizabeth M.; Katsiyannis, Antonis; McDaniel, Melanie; Sprinkle, Cynthia

    2011-01-01

    Autism spectrum disorder (ASD) has become the fastest growing disability in the United States, with current prevalence rates estimated at as many as 1 in 110 children (CDC, 2010). This increase in the number of students identified with ASD has significant implications for public schools. The most popular research-based educational practices for…

  7. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependence among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of wave height, wind speed, and current velocity data for the Bohai Sea are hindcast and sampled for a case study. Four distributions, namely the Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make full use of the marginal information and the dependence among the three variables. The design return values of the three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
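
    The construction at the core of the paper is compact enough to state; a generic trivariate Archimedean form with generator φ, using the Clayton generator as one example among the four fitted families:

    \[
    C(u_1, u_2, u_3) = \varphi^{-1}\bigl(\varphi(u_1) + \varphi(u_2) + \varphi(u_3)\bigr),
    \qquad \varphi_{\text{Clayton}}(u) = \frac{u^{-\theta} - 1}{\theta},
    \]

    which for Clayton gives C(u1, u2, u3) = (u1^{-θ} + u2^{-θ} + u3^{-θ} − 2)^{−1/θ}; here the u_i are the Pearson Type III marginal CDF values of wave height, wind speed, and current velocity.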

  8. Estimation of vulnerability functions based on a global earthquake damage database

    NASA Astrophysics Data System (ADS)

    Spence, R. J. S.; Coburn, A. W.; Ruffle, S. J.

    2009-04-01

    Developing a better approach to the estimation of future earthquake losses, and in particular to the understanding of the inherent uncertainties in loss models, is vital to confidence in modelling potential losses for insurance or mitigation purposes. For most areas of the world there is currently insufficient knowledge of the building stock for vulnerability estimates to be based on calculations of structural performance. In such areas, the most reliable basis for estimating vulnerability is the performance of the building stock in past earthquakes, using damage databases and comparison with consistent estimates of ground motion. This paper presents a new approach to the estimation of vulnerabilities using the recently launched Cambridge University Earthquake Damage Database (CUEDD). CUEDD is based on data assembled by the Martin Centre at Cambridge University since 1980, complemented by other more recently published and some unpublished data. It assembles, in a single, organised, expandable and web-accessible database, summary information on worldwide post-earthquake building damage surveys carried out since the 1960s. Currently it contains data on the performance of more than 750,000 individual buildings, in 200 surveys following 40 separate earthquakes. The database includes building typologies, damage levels and the location of each survey. It is mounted on a GIS mapping system and links to the USGS Shakemaps of each earthquake, which enables the macroseismic intensity and other ground motion parameters to be defined for each survey location. Fields of data for each building damage survey include: basic earthquake data and its sources; details of the survey location, intensity and other ground motion observations or assignments at that location; building and damage level classification and tabulated damage survey results; and photos showing typical examples of damage. In future planned extensions of the database, information on human casualties will also be assembled. The database also contains analytical tools enabling data from similar locations, building classes or ground motion levels to be assembled, and thus vulnerability relationships to be derived for any chosen ground motion parameter, for a given class of building, and for particular countries or regions. The paper presents examples of vulnerability relationships for particular classes of buildings and regions of the world, together with estimated uncertainty ranges. It discusses the applicability of such vulnerability functions in earthquake loss assessment for insurance purposes and for earthquake risk mitigation.

  9. Bias of health estimates obtained from chronic disease and risk factor surveillance systems using telephone population surveys in Australia: results from a representative face-to-face survey in Australia from 2010 to 2013.

    PubMed

    Dal Grande, Eleonora; Chittleborough, Catherine R; Campostrini, Stefano; Taylor, Anne W

    2016-04-18

    Emerging communication technologies have had an impact on population-based telephone surveys worldwide. Our objective was to examine the potential biases in health estimates for South Australia, a state of Australia, obtained via current landline telephone survey methodologies, and to report on the impact of mobile-only households on household surveys. Data from an annual multi-stage, systematic, clustered-area, face-to-face population survey, the Health Omnibus Survey (approximately 3000 interviews annually), included questions about telephone ownership to assess the population non-contactable by current telephone sampling methods (2006 to 2013). Univariable analyses (2010 to 2013) and trend analyses were conducted for sociodemographic and health indicator variables in relation to telephone status. Relative coverage biases (RCB) of two hypothetical telephone samples were examined for prevalence estimates of health status and health risk behaviours (2010 to 2013): directory-listed numbers, consisting mainly of landline telephone numbers and a small proportion of mobile telephone numbers; and a random digit dialling (RDD) sample of landline telephone numbers, which excludes mobile-only households. Telephone (landline and mobile) coverage in South Australia is very high (97%). Mobile telephone ownership increased slightly (7.4%), rising from 89.7% in 2006 to 96.3% in 2013; mobile-only households increased by 431% over the eight-year period, from 5.2% in 2006 to 27.6% in 2013. Only half of households have either a mobile or landline number listed in the telephone directory. There were small differences in the prevalence estimates for current asthma, arthritis, diabetes and obesity between the hypothetical telephone samples and the overall sample. However, the prevalence estimate for diabetes was slightly underestimated (RCB value of -0.077) in 2013. Mixed RCB results were found for having a mental health condition in both telephone samples. Current smoking prevalence was lower for both hypothetical telephone samples, in absolute differences and RCB values: -0.136 to -0.191 for RDD landline samples and -0.129 to -0.313 for directory-listed samples. These findings suggest that landline-based sampling frames used in Australia, when appropriately weighted, produce reliable representative estimates for some health indicators, but not for all. Researchers need to be aware of their limitations and potentially biased estimates.
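
    A definition of relative coverage bias consistent with the quoted figures (sign and scale match the reported smoking values) is:

    \[
    \mathrm{RCB} \;=\; \frac{\hat{p}_{\text{covered}} - \hat{p}_{\text{total}}}{\hat{p}_{\text{total}}},
    \]

    where p̂_covered is the prevalence estimated from the hypothetical telephone-reachable subsample and p̂_total from the full face-to-face sample; for example, an RCB of −0.136 for current smoking means the landline frame understates smoking prevalence by about 14%.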

  10. Methodology of automated ionosphere front velocity estimation for ground-based augmentation of GNSS

    NASA Astrophysics Data System (ADS)

    Bang, Eugene; Lee, Jiyun

    2013-11-01

    Ionospheric anomalies occurring during severe ionospheric storms can pose integrity threats to Global Navigation Satellite System (GNSS) Ground-Based Augmentation Systems (GBAS). Ionospheric anomaly threat models for each region of operation need to be developed to analyze the potential impact of these anomalies on GBAS users and to develop mitigation strategies. Along with the magnitude of ionospheric gradients, the speed of the ionosphere "fronts" in which these gradients are embedded is an important parameter for simulation-based GBAS integrity analysis. This paper presents a methodology for automated ionosphere front velocity estimation, which will be used to analyze a vast amount of ionospheric data, build ionospheric anomaly threat models for different regions, and monitor ionospheric anomalies continuously going forward. The procedure automatically selects stations that show a similar trend of ionospheric delays, computes the orientation of detected fronts using a three-station-based trigonometric method, and estimates front speeds using a two-station-based method. It also includes fine-tuning methods that make the estimation robust against faulty measurements and modeling errors. The paper demonstrates the performance of the algorithm by comparing the results of automated speed estimation to those computed manually in previous work. All speed estimates from the automated algorithm fall within error bars of ±30% of the manually computed speeds. In addition, the algorithm is used to populate the current threat space with newly generated threat points. A larger number of velocity estimates helps us to better understand the behavior of ionospheric gradients under geomagnetic storm conditions.
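
    The geometry behind such station-based speed estimation can be illustrated with a planar-front model: each station baseline constrains the front's slowness vector through its arrival-time delay. A minimal sketch (my own illustration of the generic time-delay method, not the paper's implementation; the station layout and times are hypothetical):

    ```python
    import numpy as np

    # Planar-front time-delay estimation: for slowness vector s = n / v (n the
    # unit normal, v the speed), each station baseline gives
    # t_j - t_i = (x_j - x_i) . s, so two independent baselines determine s.

    def front_velocity(p0, p1, p2, t0, t1, t2):
        """Stations p* are 2-D positions (km); t* are front arrival times (s).
        Returns (speed in km/s, unit vector of the propagation direction)."""
        baselines = np.array([np.subtract(p1, p0), np.subtract(p2, p0)])
        delays = np.array([t1 - t0, t2 - t0])
        s = np.linalg.solve(baselines, delays)   # slowness vector (s/km)
        speed = 1.0 / np.linalg.norm(s)
        return speed, s * speed                  # direction = s / |s|

    # Hypothetical layout: a front sweeping east at 0.2 km/s reaches the station
    # 50 km to the east 250 s later and the station to the north at the same time.
    print(front_velocity((0, 0), (50, 0), (0, 50), 0.0, 250.0, 0.0))
    ```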

  11. Pseudo and conditional score approach to joint analysis of current count and current status data.

    PubMed

    Wen, Chi-Chung; Chen, Yi-Hau

    2018-04-17

    We develop a joint analysis approach for recurrent and nonrecurrent event processes subject to case I interval censorship, which are also known in the literature as current count and current status data, respectively. We use a shared frailty to link the recurrent and nonrecurrent event processes, while leaving the distribution of the frailty fully unspecified. Conditional on the frailty, the recurrent event is assumed to follow a nonhomogeneous Poisson process, and the mean function of the recurrent event and the survival function of the nonrecurrent event are assumed to follow some general form of semiparametric transformation models. Estimation of the models is based on the pseudo-likelihood and the conditional score techniques. The resulting estimators for the regression parameters and the unspecified baseline functions are shown to be consistent, with convergence rates of the square root and the cube root of the sample size, respectively. Asymptotic normality with closed-form asymptotic variance is derived for the estimator of the regression parameters. We apply the proposed method to a fracture-osteoporosis survey dataset to identify risk factors jointly for fracture and osteoporosis in the elderly, while accounting for the association between the two events within a subject. © 2018, The International Biometric Society.

  12. Quantifying the water storage volume of major aquifers in the US

    NASA Astrophysics Data System (ADS)

    Jame, S. A.; Bowling, L. C.

    2017-12-01

    Groundwater is one of our most valuable natural resources, affecting not only the food and energy nexus but also ecosystem and human health through the availability of drinking water. Quantification of current groundwater storage is required not only to better understand groundwater flow and its role in the hydrologic cycle, but also to support sustainable use. In this study, a new high-resolution (5 arc-minute) map of groundwater properties is created for major US aquifers to provide an estimate of total groundwater storage. The estimation was done using information on the spatial extent of the principal aquifers of the US from the USGS Groundwater Atlas, the average porosity of different hydrolithologic groups, and the current saturated thickness of each aquifer. Saturated thickness varies within aquifers and has been calculated by superimposing current water-table contour maps over the base aquifer altitude provided by the USGS. The average saturated thickness has been computed by interpolating the available saturated-thickness data for an aquifer using the kriging method. The total storage of each aquifer cell was then calculated by multiplying the spatial extent, porosity, and thickness of the saturated layer. The resulting aquifer storage estimates were compared with current groundwater withdrawal rates to produce an estimate of how many years' worth of water are stored in the aquifers. The resulting storage map will serve as a national dataset for stakeholders to make decisions for sustainable use of groundwater.
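
    The bookkeeping is simple enough to sketch. A toy per-cell calculation under the stated assumptions (all values hypothetical, not the study's data):

    ```python
    # Per-cell storage: volume = extent x porosity x saturated thickness,
    # and years of supply = storage / withdrawal.

    def cell_storage_km3(area_km2, porosity, saturated_thickness_m):
        """Groundwater volume stored in one grid cell, in cubic kilometres."""
        return area_km2 * porosity * (saturated_thickness_m / 1000.0)

    def years_of_supply(storage_km3, withdrawal_km3_per_yr):
        """How many years' worth of water the stored volume represents."""
        return storage_km3 / withdrawal_km3_per_yr

    storage = cell_storage_km3(area_km2=80.0, porosity=0.15, saturated_thickness_m=120.0)
    print(storage, years_of_supply(storage, withdrawal_km3_per_yr=0.05))  # 1.44 km3, ~29 yr
    ```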

  13. Effective number of breeding adults in Bufo bufo estimated from age-specific variation at minisatellite loci

    USGS Publications Warehouse

    Scribner, K.T.; Arntzen, J.W.; Burke, T.

    1997-01-01

    Estimates of the effective number of breeding adults were derived for three semi-isolated populations of the common toad Bufo bufo based on temporal (i.e. adult-progeny) variance in allele frequency for three highly polymorphic minisatellite loci. Estimates of spatial variance in allele frequency among populations and of age-specific measures of genetic variability are also described. Each population was characterized by a low effective adult breeding number (N(b)) based on a large age-specific variance in minisatellite allele frequency. Estimates of N(b) (range 21-46 for population means across three loci) were approximately 55-230-fold lower than estimates of total adult census size. The implications of low effective breeding numbers for long-term maintenance of genetic variability and population viability are discussed relative to the species' reproductive ecology, current land-use practices, and present and historical habitat modification and loss. The utility of indirect measures of population parameters such as N(b) and N(e) based on time-series data of minisatellite allele frequencies is discussed relative to similar measures estimated from commonly used genetic markers such as protein allozymes.

  14. Validation of an aggregate exposure model for substances in consumer products: a case study of diethyl phthalate in personal care products

    PubMed Central

    Delmaar, Christiaan; Bokkers, Bas; ter Burg, Wouter; Schuur, Gerlienke

    2015-01-01

    As personal care products (PCPs) are used in close contact with a person, they are a major source of consumer exposure to chemical substances contained in these products. The estimation of realistic consumer exposure to substances in PCPs is currently hampered by the lack of appropriate data and methods. To estimate aggregate exposure of consumers to substances contained in PCPs, a person-oriented consumer exposure model has been developed (the Probabilistic Aggregate Consumer Exposure Model, PACEM). The model simulates daily exposure in a population based on product use data collected from a survey among the Dutch population. The model is validated by comparing diethyl phthalate (DEP) dose estimates to dose estimates based on biomonitoring data. It was found that the model's estimates compared well with the estimates based on biomonitoring data. This suggests that the person-oriented PACEM model is a practical tool for assessing realistic aggregate exposures to substances in PCPs. In the future, PACEM will be extended with use pattern data on other product groups. This will allow for assessing aggregate exposure to substances in consumer products across different product groups. PMID:25352161

  15. Exemplar-based human action pose correction.

    PubMed

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of the Xbox Kinect has delivered a very successful computer vision product and made a big impact on the gaming industry. It also sheds light on a wide variety of potential applications related to action recognition, in which accurate estimation of human poses from depth images is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method that learns to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements in both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.

  16. Dose Estimating Application Software Modification: Additional Function of a Size-Specific Effective Dose Calculator and Auto Exposure Control.

    PubMed

    Kobayashi, Masanao; Asada, Yasuki; Matsubara, Kosuke; Suzuki, Shouichi; Matsunaga, Yuta; Haba, Tomonobu; Kawaguchi, Ai; Daioku, Tomihiko; Toyama, Hiroshi; Kato, Ryoichi

    2017-05-01

    Adequate dose management during computed tomography is important. In the present study, the dosimetric application software ImPACT was extended with a calculator function for the size-specific dose estimate (SSDE) and with scan settings for the auto exposure control (AEC) technique. This study aimed to assess the practicality and accuracy of the modified ImPACT software for dose estimation. We compared the conversion factors identified by the software with the values reported by the American Association of Physicists in Medicine Task Group 204 and noted similar results. Moreover, doses were calculated with the AEC technique and with a fixed tube current of 200 mA for the chest-pelvis region. The modified ImPACT software could estimate each organ dose based on the modulated tube current. The ability to perform such beneficial modifications indicates the flexibility of the ImPACT software, which can be further modified for estimation of other doses. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
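
    The size-specific dose estimate itself is a simple rescaling of the scanner-reported CTDIvol by a size-dependent conversion factor. A hedged sketch of that calculation (my illustration, not the modified ImPACT code; the exponential coefficients are the values commonly quoted for the 32 cm reference phantom and should be treated as assumptions rather than the report's tabulated data):

    ```python
    import math

    # SSDE = f(d) * CTDIvol, with the conversion factor f approximated by an
    # exponential in the water-equivalent diameter d (AAPM 204/220 style).
    # Coefficients below are assumed values for the 32 cm phantom.
    A, B = 3.704369, 0.03671937

    def ssde(ctdi_vol_mgy: float, water_eq_diameter_cm: float) -> float:
        """Size-specific dose estimate (mGy) from CTDIvol and patient diameter."""
        return A * math.exp(-B * water_eq_diameter_cm) * ctdi_vol_mgy

    # Example: CTDIvol of 10 mGy for a 25 cm diameter patient -> ~14.8 mGy,
    # i.e. small patients absorb more dose than the phantom-based CTDIvol implies.
    print(ssde(10.0, 25.0))
    ```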

  17. Predicting the Magnetic Properties of ICMEs: A Pragmatic View

    NASA Astrophysics Data System (ADS)

    Riley, P.; Linker, J.; Ben-Nun, M.; Torok, T.; Ulrich, R. K.; Russell, C. T.; Lai, H.; de Koning, C. A.; Pizzo, V. J.; Liu, Y.; Hoeksema, J. T.

    2017-12-01

    The southward component of the interplanetary magnetic field plays a crucial role in the successful prediction of space weather phenomena. Yet, thus far, it has proven extremely difficult to forecast with any degree of accuracy. In this presentation, we describe an empirically-based modeling framework for estimating Bz values during the passage of interplanetary coronal mass ejections (ICMEs). The model includes: (1) an empirically-based estimate of the magnetic properties of the flux rope in the low corona (including helicity and field strength); (2) an empirically-based estimate of the dynamic properties of the flux rope in the high corona (including direction, speed, and mass); and (3) a physics-based estimate of the evolution of the flux rope during its passage to 1 AU, driven by the output from (1) and (2). We compare model output with observations for a selection of events to estimate the accuracy of this approach. Importantly, we pay specific attention to the uncertainties introduced by the components within the framework, separating intrinsic limitations from those that can be improved upon, either by better observations or by more sophisticated modeling. Our analysis suggests that current observations and modeling are insufficient for this empirically-based framework to provide reliable and actionable prediction of the magnetic properties of ICMEs. We suggest several paths that may lead to better forecasts.

  18. Sensitivity of ground - water recharge estimates to climate variability and change, Columbia Plateau, Washington

    USGS Publications Warehouse

    Vaccaro, John J.

    1992-01-01

    The sensitivity of ground-water recharge estimates to historic and projected climatic regimes was investigated for the semiarid Ellensburg basin on the Columbia Plateau, Washington. Recharge was estimated for predevelopment and current (1980s) land use conditions using a daily energy-soil-water balance model. A synthetic daily weather generator was used to simulate lengthy sequences, with parameters estimated from subsets of the historical record that were unusually wet and unusually dry. Comparison of recharge estimates corresponding to relatively wet and dry periods showed that recharge for predevelopment land use varies considerably within the range of climatic conditions observed in the 87-year historical observation period. Recharge variations for present land use conditions were less sensitive to the same range of historical climatic conditions because of irrigation. The estimated recharge based on the 87-year historical climatology was compared with recharge based on the historical precipitation and temperature records adjusted to reflect CO2-doubling climates as projected by general circulation models (GCMs). Two GCM scenarios were considered: an average of conditions for three different GCMs with CO2 doubling, and a most severe “maximum” case. For the average GCM scenario, predevelopment recharge increased and current recharge decreased. Also considered was the sensitivity of recharge to the variability of climate within the historical and adjusted historical records. Predevelopment and current recharge were, respectively, less and more sensitive to the climate variability for the average GCM scenario than to the variability within the historical record. For the maximum GCM scenario, recharge for both predevelopment and current land use decreased, and the sensitivity to the CO2-related climate change was larger than the sensitivity to the variability in the historical and adjusted historical climate records.

  19. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.

  20. Direct estimations of linear and nonlinear functionals of a quantum state.

    PubMed

    Ekert, Artur K; Alves, Carolina Moura; Oi, Daniel K L; Horodecki, Michał; Horodecki, Paweł; Kwek, L C

    2002-05-27

    We present a simple quantum network, based on the controlled-SWAP gate, that can extract certain properties of quantum states without recourse to quantum tomography. It can be used as a basic building block for direct quantum estimations of both linear and nonlinear functionals of any density operator. The network has many potential applications ranging from purity tests and eigenvalue estimations to direct characterization of some properties of quantum channels. Experimental realizations of the proposed network are within the reach of quantum technology that is currently being developed.
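
    The key identity behind the controlled-SWAP network is easy to verify numerically: measuring the ancilla in the computational basis yields P(0) = (1 + Tr(ρσ))/2, so the overlap Tr(ρσ), and hence the purity for σ = ρ, can be read off directly. A toy numerical check (my sketch of the underlying identity, not the authors' network):

    ```python
    import numpy as np

    # Check the identity Tr[SWAP (rho x sigma)] = Tr(rho sigma), which makes the
    # ancilla statistics of the controlled-SWAP network an overlap estimator.

    def overlap_from_swap_test(rho, sigma):
        """Recover Tr(rho @ sigma) from the ideal ancilla probability P(0)."""
        d = rho.shape[0]
        swap = np.zeros((d * d, d * d))
        for i in range(d):
            for j in range(d):
                swap[i * d + j, j * d + i] = 1.0   # SWAP |i,j> = |j,i>
        p0 = 0.5 * (1.0 + np.trace(swap @ np.kron(rho, sigma)).real)
        return 2.0 * p0 - 1.0

    rho = np.array([[0.75, 0.25], [0.25, 0.25]])   # a valid single-qubit state
    print(overlap_from_swap_test(rho, rho), np.trace(rho @ rho))  # both 0.75
    ```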

  1. Estimating Benzathine Penicillin Need for the Treatment of Pregnant Women Diagnosed with Syphilis during Antenatal Care in High-Morbidity Countries

    PubMed Central

    Taylor, Melanie M.; Nurse-Findlay, Stephen; Zhang, Xiulei; Hedman, Lisa; Kamb, Mary L.; Broutet, Nathalie; Kiarie, James

    2016-01-01

    Background: Congenital syphilis continues to be a preventable cause of global stillbirth and neonatal morbidity and mortality. Shortages of injectable penicillin, the only recommended treatment for pregnant women and infants with syphilis, have been reported by high-morbidity countries. We sought to estimate current and projected annual needs for benzathine penicillin in antenatal care settings for 30 high morbidity countries that account for approximately 33% of the global burden of congenital syphilis. Methods: Proportions of antenatal care attendance, syphilis screening coverage in pregnancy, syphilis prevalence among pregnant women, and adverse pregnancy outcomes due to untreated maternal syphilis reported to WHO were applied to 2012 birth estimates for 30 high syphilis burden countries to estimate current and projected benzathine penicillin need for prevention of congenital syphilis. Results: Using current antenatal care syphilis screening coverage and seroprevalence, we estimated the total number of women requiring treatment with at least one injection of 2.4 MU of benzathine penicillin in these 30 countries to be 351,016. Syphilis screening coverage at or above 95% for all 30 countries would increase the number of women requiring treatment with benzathine penicillin to 712,030. Based on WHO management guidelines, 351,016 doses of weight-based benzathine penicillin would also be needed for the live-born infants of mothers who test positive and are treated for syphilis in pregnancy. Assuming availability of penicillin and provision of treatment for all mothers diagnosed with syphilis, an estimated 95,938 adverse birth outcomes overall would be prevented including 37,822 stillbirths, 15,814 neonatal deaths, and 34,088 other congenital syphilis cases. Conclusion: Penicillin need for maternal and infant syphilis treatment is high among this group of syphilis burdened countries. Initiatives to ensure a stable and adequate supply of benzathine penicillin for treatment of maternal syphilis are important for congenital syphilis prevention, and will be increasingly critical in the future as more countries move toward elimination targets. PMID:27434236

  2. Estimating Benzathine Penicillin Need for the Treatment of Pregnant Women Diagnosed with Syphilis during Antenatal Care in High-Morbidity Countries.

    PubMed

    Taylor, Melanie M; Nurse-Findlay, Stephen; Zhang, Xiulei; Hedman, Lisa; Kamb, Mary L; Broutet, Nathalie; Kiarie, James

    2016-01-01

    Congenital syphilis continues to be a preventable cause of global stillbirth and neonatal morbidity and mortality. Shortages of injectable penicillin, the only recommended treatment for pregnant women and infants with syphilis, have been reported by high-morbidity countries. We sought to estimate current and projected annual needs for benzathine penicillin in antenatal care settings for 30 high morbidity countries that account for approximately 33% of the global burden of congenital syphilis. Proportions of antenatal care attendance, syphilis screening coverage in pregnancy, syphilis prevalence among pregnant women, and adverse pregnancy outcomes due to untreated maternal syphilis reported to WHO were applied to 2012 birth estimates for 30 high syphilis burden countries to estimate current and projected benzathine penicillin need for prevention of congenital syphilis. Using current antenatal care syphilis screening coverage and seroprevalence, we estimated the total number of women requiring treatment with at least one injection of 2.4 MU of benzathine penicillin in these 30 countries to be 351,016. Syphilis screening coverage at or above 95% for all 30 countries would increase the number of women requiring treatment with benzathine penicillin to 712,030. Based on WHO management guidelines, 351,016 doses of weight-based benzathine penicillin would also be needed for the live-born infants of mothers who test positive and are treated for syphilis in pregnancy. Assuming availability of penicillin and provision of treatment for all mothers diagnosed with syphilis, an estimated 95,938 adverse birth outcomes overall would be prevented including 37,822 stillbirths, 15,814 neonatal deaths, and 34,088 other congenital syphilis cases. Penicillin need for maternal and infant syphilis treatment is high among this group of syphilis burdened countries. Initiatives to ensure a stable and adequate supply of benzathine penicillin for treatment of maternal syphilis are important for congenital syphilis prevention, and will be increasingly critical in the future as more countries move toward elimination targets.
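
    The underlying estimation chain is multiplicative, which makes it simple to sketch. A rough illustration (my own, with hypothetical placeholder rates rather than the WHO country data used in the paper):

    ```python
    # Multiplicative estimation chain for maternal treatment doses.

    def maternal_doses_needed(births, anc_coverage, screening_coverage, seroprevalence):
        """Pregnant women needing >= 1 injection of 2.4 MU benzathine penicillin."""
        return births * anc_coverage * screening_coverage * seroprevalence

    # Hypothetical country: 1,000,000 births, 90% ANC attendance, 60% screened,
    # 2% of screened women seropositive -> 10,800 women to treat (plus, per the
    # WHO guidance cited above, one weight-based dose per exposed live-born infant).
    print(maternal_doses_needed(1_000_000, 0.90, 0.60, 0.02))
    ```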

  3. Spatial-altitudinal and temporal variation of Degree Day Factors (DDFs) in the Upper Indus Basin

    NASA Astrophysics Data System (ADS)

    Khan, Asif; Attaullah, Haleema; Masud, Tabinda; Khan, Mujahid

    2017-04-01

    Melt contributions from snow and ice in the Hindukush-Karakoram-Himalayan (HKH) region could account for more than 80% of annual river flows in the Upper Indus Basin (UIB). Increases or decreases in precipitation, energy input and glacier reserves can significantly affect the water resources of this region. Therefore, improved hydrological modelling and accurate prediction of future water resources are vital for food production and hydropower generation for the millions of people living downstream. In mountain regions, Degree Day Factors (DDFs) vary significantly with location and altitude, and they are primary inputs to temperature-based hydrological modelling. However, previous studies have used different DDFs as calibration parameters without due attention to the physical meaning of the values employed, and these estimates possess significant variability and uncertainty. This study provides estimates of DDFs for various altitudinal zones in the UIB at sub-basin level. Snow, clean ice and debris-covered ice melt at different rates (i.e., have different DDFs); therefore, areally averaged DDFs based on snow, clean-ice and debris-covered-ice classes in various altitudinal zones have been estimated for all sub-basins of the UIB. Zonal estimates of DDFs in the current study differ significantly from previously adopted DDFs, and hence suggest a revisit of previous hydrological modelling studies. The DDFs presented here have been validated by using the Snowmelt Runoff Model (SRM) in various sub-basins, with good Nash-Sutcliffe coefficients (R2 > 0.85) and low volumetric errors (Dv < 10%). The DDFs and methods provided in the current study can be used to improve future hydrological modelling and the accuracy of predicted changes in river flows. The methodology used for estimating DDFs is robust and can be adopted to produce such estimates in other regions of the world, particularly in the nearby HKH basins.
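
    For readers unfamiliar with the formulation, the degree-day approach reduces melt modelling to one coefficient per surface class. A minimal sketch (my illustration of the standard degree-day equation that SRM-type models use; the DDF values are hypothetical, not the paper's zonal estimates):

    ```python
    # Standard degree-day formulation: daily melt M = DDF * max(T - T_base, 0),
    # with separate factors for snow, clean ice and debris-covered ice.

    def daily_melt_mm(t_mean_c, ddf_mm_per_degc_day, t_base_c=0.0):
        """Melt (mm water equivalent per day) from mean air temperature."""
        return ddf_mm_per_degc_day * max(t_mean_c - t_base_c, 0.0)

    ddf = {"snow": 4.0, "clean_ice": 7.0, "debris_covered_ice": 3.0}  # assumed values
    print({surface: daily_melt_mm(5.0, f) for surface, f in ddf.items()})
    ```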

  4. Prostate-Specific Antigen (PSA)–Based Population Screening for Prostate Cancer: An Economic Analysis

    PubMed Central

    Tawfik, A

    2015-01-01

    Background The prostate-specific antigen (PSA) blood test has become widely used in Canada to test for prostate cancer (PC), the most common cancer among Canadian men. Data suggest that population-based PSA screening may not improve overall survival. Objectives This analysis aimed to review existing economic evaluations of population-based PSA screening, determine current spending on opportunistic PSA screening in Ontario, and estimate the cost of introducing a population-based PSA screening program in the province. Methods A systematic literature search was performed to identify economic evaluations of population-based PSA screening strategies published from 1998 to 2013. Studies were assessed for their methodological quality and applicability to the Ontario setting. An original cost analysis was also performed, using data from Ontario administrative sources and from the published literature. One-year costs were estimated for 4 strategies: no screening, current (opportunistic) screening of men aged 40 years and older, current (opportunistic) screening of men aged 50 to 74 years, and population-based screening of men aged 50 to 74 years. The analysis was conducted from the payer perspective. Results The literature review demonstrated that, overall, population-based PSA screening is costly and cost-ineffective but may be cost-effective in specific populations. Only 1 Canadian study, published 15 years ago, was identified. Approximately $119.2 million is being spent annually on PSA screening of men aged 40 years and older in Ontario, including close to $22 million to screen men younger than 50 and older than 74 years of age (i.e., outside the target age range for a population-based program). A population-based screening program in Ontario would cost approximately $149.4 million in the first year. Limitations Estimates were based on the synthesis of data from a variety of sources, requiring several assumptions and causing uncertainty in the results. For example, where Ontario-specific data were unavailable, data from the United States were used. Conclusions PSA screening is associated with significant costs to the health care system when the cost of the PSA test itself is considered in addition to the costs of diagnosis, staging, and treatment of screen-detected PCs. PMID:26366237

  5. Benefit of Modeling the Observation Error in a Data Assimilation Framework Using Vegetation Information Obtained From Passive Based Microwave Data

    NASA Technical Reports Server (NTRS)

    Bolten, John D.; Mladenova, Iliana E.; Crow, Wade; De Jeu, Richard

    2016-01-01

    A primary operational goal of the United States Department of Agriculture (USDA) is to improve foreign market access for U.S. agricultural products. A large fraction of this crop condition assessment is based on satellite imagery and ground data analysis. The baseline soil moisture estimates that are currently used for this analysis are based on output from the modified Palmer two-layer soil moisture model, updated to assimilate near-real time observations derived from the Soil Moisture Ocean Salinity (SMOS) satellite. The current data assimilation system is based on a 1-D Ensemble Kalman Filter approach, where the observation error is modeled as a function of vegetation density. This allows for offsetting errors in the soil moisture retrievals. The observation error is currently adjusted using Normalized Difference Vegetation Index (NDVI) climatology. In this paper we explore the possibility of utilizing microwave-based vegetation optical depth instead.
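
    The idea of modeling observation error as a function of vegetation density can be illustrated with a scalar Kalman-type update, in which denser vegetation inflates the observation-error variance R and thus down-weights the satellite retrieval. A sketch (my illustration, not the USDA system; the error model and constants are assumptions):

    ```python
    # Scalar soil-moisture update with a vegetation-scaled observation error.

    def kalman_update(x_model, p_model, y_obs, veg_density, r0=0.02):
        """One update of a scalar soil-moisture state (volumetric units)."""
        r = r0 * (1.0 + 4.0 * veg_density)   # assumed vegetation-scaled obs error
        k = p_model / (p_model + r)          # Kalman gain
        x_new = x_model + k * (y_obs - x_model)
        p_new = (1.0 - k) * p_model
        return x_new, p_new

    # Same retrieval under sparse vs dense vegetation: the dense case moves the
    # model state less, reflecting the noisier retrieval.
    print(kalman_update(0.25, 0.01, 0.30, veg_density=0.1))
    print(kalman_update(0.25, 0.01, 0.30, veg_density=0.9))
    ```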

  6. Joint Direct Attack Munition (JDAM)

    DTIC Science & Technology

    2015-12-01

    ... February 19, 2015, and the O&S cost is based on an ICE dated August 28, 2014. Confidence level of the cost estimate for the current APB: 50%. A mathematically derived confidence level was not computed for this Life-Cycle Cost Estimate (LCCE). This LCCE represents the expected value, taking into consideration relevant risks, including ordinary levels of external and unforeseen events. It aims to provide sufficient resources to execute the ...

  7. Isohaline position as a habitat indicator for estuarine populations

    USGS Publications Warehouse

    Jassby, Alan D.; Kimmerer, W.J.; Monismith, Stephen G.; Armor, C.; Cloern, James E.; Powell, T.M.; Vedlinski, Timothy J.

    1995-01-01

    The striped bass survival data were also used to illustrate a related important point: incorporating additional explanatory variables may decrease the prediction error for a population or process, but it can increase the uncertainty in parameter estimates and management strategies based on these estimates. Even in cases where the uncertainty is currently too large to guide management decisions, an uncertainty analysis can identify the most practical direction for future data acquisition.

  8. Estimating Vertical Stress on Soil Subjected to Vehicular Loading

    DTIC Science & Technology

    2009-02-01

    ... specified surface area of the tire. The silt and sand samples were both estimated to be 23.7 in. thick over a base of much harder soil. The pressures ... study in which highway tread tires were used as opposed to the all-terrain tread currently on the vehicle. If the pressure pads are functioning ... (Figure: Vertical force versus time, front right CIV tire.)

  9. Regional assessment of woody biomass physical availability as an energy feedstock for combined combustion in the US northern region

    Treesearch

    Michael E. Goerndt; Francisco X. Aguilar; Patrick Miles; Stephen Shifley; Nianfu Song; Hank Stelzer

    2012-01-01

    Woody biomass is a renewable energy feedstock with the potential to reduce current use of nonrenewable fossil fuels. We estimated the physical availability of woody biomass for cocombustion at coal-fired electricity plants in the 20-state US northern region. First, we estimated the total amount of woody biomass needed to replace total annual coal-based electricity...

  10. Estimation of aboveground forest carbon flux in Oregon: adding components of change to stock-difference assessments

    Treesearch

    Andrew N. Gray; Thomas R. Whittier; David L. Azuma

    2014-01-01

    A substantial portion of the carbon (C) emitted by human activity is apparently being stored in forest ecosystems in the Northern Hemisphere, but the magnitude and cause are not precisely understood. Current official estimates of forest C flux are based on a combination of field measurements and other methods. The goal of this study was to improve on existing methods...

  11. Federal Financial Interventions and Subsidies in Energy Markets 2007

    EIA Publications

    2008-01-01

    This report responds to a request from Senator Lamar Alexander of Tennessee that the Energy Information Administration update its 1999 to 2000 work on federal energy subsidies, including any additions or deletions of federal subsidies based on Administration or Congressional action since 2000, and provide an estimate of the size of each current subsidy. Subsidies directed to electricity production are estimated on the basis of generation by fuel.

  12. Can we improve top-down GHG inverse methods through informed prior and better representations of atmospheric transport? Insights from the Atmospheric Carbon and Transport (ACT) - America Aircraft Mission

    NASA Astrophysics Data System (ADS)

    Feng, S.; Lauvaux, T.; Keller, K.; Davis, K. J.

    2016-12-01

    Current estimates of biogenic carbon fluxes over North America based on top-down atmospheric inversions are subject to considerable uncertainty. This uncertainty stems in large part from uncertain prior flux estimates with their associated error covariances, and from approximations in the atmospheric transport models that link observed carbon dioxide mixing ratios with surface fluxes. Specifically, approximations in the representation of vertical mixing associated with atmospheric turbulence or convective transport, together with largely under-determined prior fluxes and their error structures, significantly hamper our capacity to reliably estimate regional carbon fluxes. The Atmospheric Carbon and Transport - America (ACT-America) mission aims to reduce the uncertainties in inverse fluxes at the regional scale by deploying airborne and ground-based platforms to characterize atmospheric GHG mixing ratios and the concurrent atmospheric dynamics. Two aircraft measure the 3-dimensional distribution of greenhouse gases at synoptic scales, focusing on the atmospheric boundary layer and the free troposphere during both fair and stormy weather conditions. Here we analyze two main questions: (i) What level of information can we expect from the currently planned observations? (ii) How might ACT-America reduce the hindcast and predictive uncertainty of carbon estimates over North America?

  13. Concept designs for NASA's Solar Electric Propulsion Technology Demonstration Mission

    NASA Technical Reports Server (NTRS)

    Mcguire, Melissa L.; Hack, Kurt J.; Manzella, David H.; Herman, Daniel A.

    2014-01-01

    Multiple Solar Electric Propulsion Technology Demonstration Mission concepts were developed to assess vehicle performance and estimated mission cost. Concepts ranged from a 10,000 kilogram spacecraft capable of delivering 4000 kilograms of payload to one of the Earth-Moon Lagrange points in support of future human-crewed outposts, to a 180 kilogram spacecraft capable of performing an asteroid rendezvous mission after launch to a geostationary transfer orbit as a secondary payload. Low-cost and maximum Delta-V capability variants of a spacecraft concept based on utilizing a secondary payload adapter as the primary bus structure were developed, as were concepts designed to be co-manifested with another spacecraft on a single launch vehicle. Each of the Solar Electric Propulsion Technology Demonstration Mission concepts developed included an estimated spacecraft cost. These data suggest estimated spacecraft costs of $200 million to $300 million, exclusive of launch vehicle costs, if 30 kilowatt-class solar arrays and the corresponding electric propulsion system currently under development are used as the basis for sizing the mission concept. The most affordable mission concept developed, based on subscale variants of the advanced solar arrays and electric propulsion technology currently under development by the NASA Space Technology Mission Directorate, has an estimated cost of $50 million and could provide a Delta-V capability comparable to much larger spacecraft concepts.

  14. Sodium and potassium content of 24 h urinary collections: a comparison between field- and laboratory-based analysers.

    PubMed

    Yin, Xuejun; Neal, Bruce; Tian, Maoyi; Li, Zhifang; Petersen, Kristina; Komatsu, Yuichiro; Feng, Xiangxian; Wu, Yangfeng

    2018-04-01

    Measurement of mean population Na and K intakes typically uses laboratory-based assays, which can add significant logistical burden and costs. A valid field-based measurement method would be a significant advance. In the current study, we used 24 h urine samples to compare estimates of Na, K and Na:K ratio based upon assays done using the field-based Horiba twin meter v. laboratory-based methods. The performance of the Horiba twin meter was determined by comparing field-based estimates of mean Na and K against those obtained using laboratory-based methods. The reported 95 % limits of agreement of Bland-Altman plots were calculated based on a regression approach for non-uniform differences. The 24 h urine samples were collected as part of an ongoing study being done in rural China. One hundred and sixty-six complete 24 h urine samples were qualified for estimating 24 h urinary Na and K excretion. Mean Na and K excretion were estimated as 170·4 and 37·4 mmol/d, respectively, using the meter-based assays; and 193·4 and 43·8 mmol/d, respectively, using the laboratory-based assays. There was excellent relative reliability (intraclass correlation coefficient) for both Na (0·986) and K (0·986). Bland-Altman plots showed moderate-to-good agreement between the two methods. Na and K intake estimations were moderately underestimated using assays based upon the Horiba twin meter. Compared with standard laboratory-based methods, the portable device was more practical and convenient.
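
    The agreement analysis reported here rests on the familiar Bland-Altman construction. A generic sketch (my illustration with hypothetical paired values; the paper's regression-based limits for non-uniform differences are not reproduced):

    ```python
    import numpy as np

    # Classic Bland-Altman agreement summary between two measurement methods.

    def bland_altman_limits(field, lab):
        """Mean bias and 95% limits of agreement (bias +/- 1.96 SD of differences)."""
        diff = np.asarray(field, float) - np.asarray(lab, float)
        bias, sd = diff.mean(), diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    field_na = [150, 170, 190, 160, 180]   # meter-based 24 h Na excretion (mmol/d)
    lab_na = [165, 185, 210, 175, 200]     # laboratory values for the same samples
    print(bland_altman_limits(field_na, lab_na))  # negative bias: meter reads low
    ```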

  15. Joint Estimation of Effective Brain Wave Activation Modes Using EEG/MEG Sensor Arrays and Multimodal MRI Volumes.

    PubMed

    Galinsky, Vitaly L; Martinez, Antigona; Paulus, Martin P; Frank, Lawrence R

    2018-04-13

    In this letter, we present a new method for integration of sensor-based multifrequency bands of electroencephalography and magnetoencephalography data sets into a voxel-based structural-temporal magnetic resonance imaging analysis by utilizing the general joint estimation using entropy regularization (JESTER) framework. This allows enhancement of the spatial-temporal localization of brain function and the ability to relate it to morphological features and structural connectivity. This method has broad implications for both basic neuroscience research and clinical neuroscience focused on identifying disease-relevant biomarkers by enhancing the spatial-temporal resolution of the estimates derived from current neuroimaging modalities, thereby providing a better picture of the normal human brain in basic neuroimaging experiments and variations associated with disease states.

  16. Inference on periodicity of circadian time series.

    PubMed

    Costa, Maria J; Finkenstädt, Bärbel; Roche, Véronique; Lévi, Francis; Gould, Peter D; Foreman, Julia; Halliday, Karen; Hall, Anthony; Rand, David A

    2013-09-01

    Estimation of the period length of time-course data from cyclical biological processes, such as those driven by the circadian pacemaker, is crucial for inferring the properties of the biological clock found in many living organisms. We propose a methodology for period estimation based on spectrum resampling (SR) techniques. Simulation studies show that SR is superior and more robust to non-sinusoidal and noisy cycles than a currently used routine based on Fourier approximations. In addition, a simple fit to the oscillations using linear least squares is available, together with a non-parametric test for detecting changes in period length which allows for period estimates with different variances, as frequently encountered in practice. The proposed methods are motivated by and applied to various data examples from chronobiology.
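
    As a point of reference for the Fourier-based routines that SR is compared against, a bare-bones periodogram period estimate can be written in a few lines. A toy sketch (my illustration only; it implements neither spectrum resampling nor the paper's confidence machinery):

    ```python
    import numpy as np

    # Periodogram period estimate: the frequency of the largest FFT power peak.

    def dominant_period(t, y):
        """Period (units of t) of the largest periodogram peak; uniform sampling."""
        y = np.asarray(y, float) - np.mean(y)
        freqs = np.fft.rfftfreq(len(y), d=t[1] - t[0])
        power = np.abs(np.fft.rfft(y)) ** 2
        k = 1 + np.argmax(power[1:])         # skip the zero-frequency bin
        return 1.0 / freqs[k]

    t = np.arange(0, 120, 0.5)               # 120 h of half-hourly samples
    rng = np.random.default_rng(0)
    y = np.cos(2 * np.pi * t / 24.4) + 0.3 * rng.normal(size=t.size)
    print(dominant_period(t, y))             # ~24 h, near the true 24.4 h period
    ```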

  17. The Economic Costs of Substance Abuse Treatment: Updated Estimates and Cost Bands for Program Assessment and Reimbursement

    PubMed Central

    French, Michael T.; Popovici, Ioana; Tapsell, Lauren

    2008-01-01

    Federal, State, and local government agencies require current and accurate cost information for publicly funded substance abuse treatment programs to guide program assessments and reimbursement decisions. The Center for Substance Abuse Treatment (CSAT) published a list of modality-specific cost bands for this purpose in 2002. However, the upper and lower values in these ranges are so wide that they offer little practical guidance for funding agencies. Thus, the dual purpose of this investigation was to assemble the most current and comprehensive set of economic cost estimates from the readily-available literature and then use these estimates to develop updated modality-specific cost bands for more reasonable reimbursement policies. Although cost estimates were scant for some modalities, the recommended cost bands are based on the best available economic research, and we believe these new ranges will be more useful and pertinent for all stakeholders of publicly-funded substance abuse treatment. PMID:18294803

  18. Doses and risks from the ingestion of Dounreay fuel fragments.

    PubMed

    Darley, P J; Charles, M W; Fell, T P; Harrison, J D

    2003-01-01

    The radiological implications of ingestion of nuclear fuel fragments present in the marine environment around Dounreay have been reassessed by using the Monte Carlo code MCNP to obtain improved estimates of the doses to target cells in the walls of the lower large intestine resulting from the passage of a fragment. The approach takes account of the reduction in dose due to attenuation within the intestinal wall and self-absorption of radiation in the fuel fragment itself. In addition, dose is calculated on the basis of a realistic estimate of the anatomical volume of the lumen, rather than being based on the average mass of the contents, as in the current ICRP model. Our best estimates of doses from the ingestion of the largest Dounreay particles are at least a factor of 30 lower than those predicted using the current ICRP model. The new ICRP model will address the issues raised here and provide improved estimates of dose.

  19. Current sources of carbon tetrachloride (CCl4) in our atmosphere

    NASA Astrophysics Data System (ADS)

    Sherry, David; McCulloch, Archie; Liang, Qing; Reimann, Stefan; Newman, Paul A.

    2018-02-01

    Carbon tetrachloride (CCl4 or CTC) is an ozone-depleting substance whose emissive uses are controlled and practically banned by the Montreal Protocol (MP). Nevertheless, previous work estimated ongoing emissions of 35 Gg year-1 of CCl4 into the atmosphere using observation-based methods, in stark contrast to estimates of 3 (0-8) Gg year-1 derived from numbers reported to UNEP under the MP. Here we combine information on sources from industrial production processes and legacy emissions from contaminated sites to provide an updated bottom-up estimate of current global CTC emissions of 15-25 Gg year-1. We now propose 13 Gg year-1 of global emissions from unreported non-feedstock emissions from chloromethane and perchloroethylene plants as the most significant CCl4 source. Additionally, 2 Gg year-1 are estimated as fugitive emissions from the usage of CTC as feedstock, and possibly up to 10 Gg year-1 from legacy emissions and chlor-alkali plants.

  20. An investigation into incident duration forecasting for FleetForward

    DOT National Transportation Integrated Search

    2000-08-01

    Traffic condition forecasting is the process of estimating future traffic conditions based on current and archived data. Real-time forecasting is becoming an important tool in Intelligent Transportation Systems (ITS). This type of forecasting allows ...

  1. A Personalized Approach in Progressive Multiple Sclerosis: The Current Status of Disease Modifying Therapies (DMTs) and Future Perspectives

    PubMed Central

    D’Amico, Emanuele; Patti, Francesco; Zanghì, Aurora; Zappia, Mario

    2016-01-01

    Under the term progressive multiple sclerosis (PMS), we consider a combined population of persons with secondary progressive MS (SPMS) and primary progressive MS (PPMS). These forms of MS cannot be treated with efficacy by the currently licensed therapies. In recent years, several risk-estimation measures have been developed for predicting the clinical course of MS, but none is specific to the PMS forms. Personalized medicine is a therapeutic approach based on identifying what might be the best therapy for an individual patient, taking the risk profile into account. We need to achieve more accurate estimates of useful predictors in PMS, including unconventional and qualitative markers that are not yet available or practicable in routine diagnostics. The evaluation of an individual patient is based on the profile of disease activity. Within the neurology field, PMS is one of the fastest-moving areas going into the future. PMID:27763513

  2. Comparison of remote sensing and fixed-site monitoring approaches for examining air pollution and health in a national study population

    NASA Astrophysics Data System (ADS)

    Prud'homme, Genevieve; Dobbin, Nina A.; Sun, Liu; Burnett, Richard T.; Martin, Randall V.; Davidson, Andrew; Cakmak, Sabit; Villeneuve, Paul J.; Lamsal, Lok N.; van Donkelaar, Aaron; Peters, Paul A.; Johnson, Markey

    2013-12-01

    Satellite remote sensing (RS) has emerged as a cutting edge approach for estimating ground level ambient air pollution. Previous studies have reported a high correlation between ground level PM2.5 and NO2 estimated by RS and measurements collected at regulatory monitoring sites. The current study examined associations between air pollution and adverse respiratory and allergic health outcomes using multi-year averages of NO2 and PM2.5 from RS and from regulatory monitoring. RS estimates were derived using satellite measurements from OMI, MODIS, and MISR instruments. Regulatory monitoring data were obtained from Canada's National Air Pollution Surveillance Network. Self-reported prevalence of doctor-diagnosed asthma, current asthma, allergies, and chronic bronchitis were obtained from the Canadian Community Health Survey (a national sample of individuals 12 years of age and older). Multi-year ambient pollutant averages were assigned to each study participant based on their six digit postal code at the time of health survey, and were used as a marker for long-term exposure to air pollution. RS derived estimates of NO2 and PM2.5 were associated with 6-10% increases in respiratory and allergic health outcomes per interquartile range (3.97 μg m-3 for PM2.5 and 1.03 ppb for NO2) among adults (aged 20-64) in the national study population. Risk estimates for air pollution and respiratory/allergic health outcomes based on RS were similar to risk estimates based on regulatory monitoring for areas where regulatory monitoring data were available (within 40 km of a regulatory monitoring station). RS derived estimates of air pollution were also associated with adverse health outcomes among participants residing outside the catchment area of the regulatory monitoring network (p < 0.05). The consistency between risk estimates based on RS and regulatory monitoring as well as the associations between air pollution and health among participants living outside the catchment area for regulatory monitoring suggest that RS can provide useful estimates of long-term ambient air pollution in epidemiologic studies. This is particularly important in rural communities and other areas where monitoring and modeled air pollution data are limited or unavailable.
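
    The "per interquartile range" scaling behind the 6-10% figures above can be made concrete with a one-line conversion. A small sketch (my illustration; the regression coefficients are hypothetical, only the IQR values come from the abstract):

    ```python
    import math

    # "Per IQR" risk scaling: a log-linear coefficient beta (per unit pollutant)
    # converts to a percent increase per interquartile range via exp(beta * IQR).

    def pct_increase_per_iqr(beta_per_unit, iqr):
        return (math.exp(beta_per_unit * iqr) - 1.0) * 100.0

    print(pct_increase_per_iqr(0.02, 3.97))   # PM2.5, IQR 3.97 ug/m3 -> ~8.3%
    print(pct_increase_per_iqr(0.07, 1.03))   # NO2, IQR 1.03 ppb -> ~7.5%
    ```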

  3. Changes in biologically active ultraviolet radiation reaching the Earth's surface.

    PubMed

    Madronich, S; McKenzie, R L; Björn, L O; Caldwell, M M

    1998-10-01

    Stratospheric ozone levels are near their lowest point since measurements began, so current ultraviolet-B (UV-B) radiation levels are thought to be close to their maximum. Total stratospheric content of ozone-depleting substances is expected to reach a maximum before the year 2000. All other things being equal, the current ozone losses and related UV-B increases should be close to their maximum. Increases in surface erythemal (sunburning) UV radiation relative to the values in the 1970s are estimated to be: about 7% at Northern Hemisphere mid-latitudes in winter/spring; about 4% at Northern Hemisphere mid-latitudes in summer/fall; about 6% at Southern Hemisphere mid-latitudes on a year-round basis; about 130% in the Antarctic in spring; and about 22% in the Arctic in spring. Reductions in atmospheric ozone are expected to result in higher amounts of UV-B radiation reaching the Earth's surface. The expected correlation between increases in surface UV-B radiation and decreases in overhead ozone has been further demonstrated and quantified by ground-based instruments under a wide range of conditions. Improved measurements of UV-B radiation are now providing better geographical and temporal coverage. Surface UV-B radiation levels are highly variable because of cloud cover, and also because of local effects including pollutants and surface reflections. These factors usually decrease atmospheric transmission and therefore the surface irradiances at UV-B as well as other wavelengths. Occasional cloud-induced increases have also been reported. With a few exceptions, the direct detection of UV-B trends at low- and mid-latitudes remains problematic due to this high natural variability, the relatively small ozone changes, and the practical difficulties of maintaining long-term stability in networks of UV-measuring instruments. Few reliable UV-B radiation measurements are available from pre-ozone-depletion days. Satellite-based observations of atmospheric ozone and clouds are being used, together with models of atmospheric transmission, to provide global coverage and long-term estimates of surface UV-B radiation. Estimates of long-term (1979-1992) trends in zonally averaged UV irradiances that include cloud effects are nearly identical to those for clear-sky estimates, providing evidence that clouds have not influenced the UV-B trends. However, the limitations of satellite-derived UV estimates should be recognized. To assess uncertainties inherent in this approach, additional validations involving comparisons with ground-based observations are required. Direct comparisons of ground-based UV-B radiation measurements between a few mid-latitude sites in the Northern and Southern Hemispheres have shown larger differences than those estimated using satellite data. Ground-based measurements show that summertime erythemal UV irradiances in the Southern Hemisphere exceed those at comparable latitudes of the Northern Hemisphere by up to 40%, whereas corresponding satellite-based estimates yield only 10-15% differences. Atmospheric pollution may be a factor in this discrepancy between ground-based measurements and satellite-derived estimates. UV-B measurements at more sites are required to determine whether the larger observed differences are globally representative. High levels of UV-B radiation continue to be observed in Antarctica during the recurrent spring-time ozone hole. 
For example, during ozone-hole episodes, measured biologically damaging radiation at Palmer Station, Antarctica (64 degrees S) has been found to approach and occasionally even exceed maximum summer values at San Diego, CA, USA (32 degrees N). Long-term predictions of future UV-B levels are difficult and uncertain. Nevertheless, current best estimates suggest that a slow recovery to pre-ozone depletion levels may be expected during the next half-century. (ABSTRACT TRUNCATED)

  4. Integrating landslide and liquefaction hazard and loss estimates with existing USGS real-time earthquake information products

    USGS Publications Warehouse

    Allstadt, Kate E.; Thompson, Eric M.; Hearne, Mike; Nowicki Jessee, M. Anna; Zhu, J.; Wald, David J.; Tanyas, Hakan

    2017-01-01

    The U.S. Geological Survey (USGS) has made significant progress toward the rapid estimation of shaking and shaking-related losses through their Did You Feel It? (DYFI), ShakeMap, ShakeCast, and PAGER products. However, quantitative estimates of the extent and severity of secondary hazards (e.g., landsliding, liquefaction) are not currently included in scenarios and real-time post-earthquake products despite their significant contributions to hazard and losses for many events worldwide. We are currently running parallel global statistical models for landslides and liquefaction developed with our collaborators in testing mode, but much work remains in order to operationalize these systems. We are expanding our efforts in this area by not only improving the existing statistical models, but also by (1) exploring more sophisticated, physics-based models where feasible; (2) incorporating uncertainties; and (3) identifying and undertaking research and product development to provide useful landslide and liquefaction estimates and their uncertainties. Although our existing models use standard predictor variables that are accessible globally or regionally, including peak ground motions, topographic slope, and distance to water bodies, we continue to explore readily available proxies for rock and soil strength as well as other susceptibility terms. This work is based on the foundation of an expanding, openly available, case-history database we are compiling along with historical ShakeMaps for each event. The expected outcome of our efforts is a robust set of real-time secondary hazards products that meet the needs of a wide variety of earthquake information users. We describe the available datasets and models, developments currently underway, and anticipated products.

  5. Neural and Neural Gray-Box Modeling for Entry Temperature Prediction in a Hot Strip Mill

    NASA Astrophysics Data System (ADS)

    Barrios, José Angel; Torres-Alvarado, Miguel; Cavazos, Alberto; Leduc, Luis

    2011-10-01

    In hot strip mills, initial controller set points have to be calculated before the steel bar enters the mill. The calculations rely on good knowledge of the rolling variables. Measurements are available only after the bar has entered the mill, and therefore the variables have to be estimated. Estimation of process variables, particularly temperature, is of crucial importance if the bar front section is to fulfill quality requirements, and it must be performed in the shortest possible time to preserve heat. Currently, temperature estimation is performed by physical modeling; however, it is highly affected by measurement uncertainties, variations in the incoming bar conditions, and final product changes. In order to overcome these problems, artificial intelligence techniques such as artificial neural networks and fuzzy logic have been proposed. In this article, neural network-based systems, including neural-based Gray-Box models, are applied to estimate scale breaker entry temperature, given its importance, and their performance is compared to that of the physical model used in the plant. Several neural systems and several neural-based Gray-Box models are designed and tested with real data. Taking advantage of the flexibility of neural networks for input incorporation, several factors that are believed to influence the process are also tested. The systems proposed in this study were proven to have better performance indexes, and hence better prediction capabilities, than the physical models currently used in the plant.
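
    The gray-box pattern, keep the physical model and learn only its residual from data, can be shown in a few lines. A conceptual sketch (my illustration, not the plant system; a plain least-squares fit stands in for the neural network, and the "physics" and data are hypothetical):

    ```python
    import numpy as np

    # Gray-box modeling: prediction = physical model output + learned correction.

    def physical_model(x):
        return 900.0 - 0.8 * x[:, 0] + 2.0 * x[:, 1]   # toy temperature model (degC)

    rng = np.random.default_rng(2)
    X = rng.uniform([0.0, 0.0], [100.0, 20.0], size=(200, 2))
    true_temp = physical_model(X) + 15.0 * np.sin(X[:, 0] / 20.0)  # unmodeled effect
    residual = true_temp - physical_model(X)

    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, residual, rcond=None)   # data-driven correction
    gray_box = physical_model(X) + A @ coef

    # Physics-only error vs gray-box error: the corrected model fits better.
    print(np.abs(residual).mean(), np.abs(true_temp - gray_box).mean())
    ```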

  6. The Australian Work Exposures Study: prevalence of occupational exposure to diesel engine exhaust.

    PubMed

    Peters, Susan; Carey, Renee N; Driscoll, Timothy R; Glass, Deborah C; Benke, Geza; Reid, Alison; Fritschi, Lin

    2015-06-01

    Diesel engines are widely used in occupational settings. Diesel exhaust has been classified as a lung carcinogen, but data on the number of workers exposed to different levels of diesel exhaust are not available in Australia. The aim of this study was to estimate the current prevalence of exposure to diesel engine exhaust in Australian workplaces. A cross-sectional survey of Australian males and females (18-65 years old) in current paid employment was undertaken. Information about the respondents' current job and various demographic factors was collected in a telephone interview using the web-based tool OccIDEAS. Semi-quantitative occupational exposure levels to diesel exhaust were assigned using programmed decision rules, and the numbers of workers exposed in Australia in 2011 were estimated. We defined substantial exposure as exposure at a medium or high level for at least 5 h per week. Substantial occupational exposure to diesel exhaust was experienced by 13.4% of the respondents in their current job. Exposure prevalence varied across states, ranging from 6.4% in the Australian Capital Territory to 17.0% in Western Australia. Exposures occurred mainly in the agricultural, mining, transport and construction industries, and among mechanics. Men (20.4%) were more often exposed than women (4.7%). Extrapolation to the total working population indicated that 13.8% (95% confidence interval 10.0-20.4) of the 2011 Australian workforce were estimated to be substantially exposed to diesel exhaust, and 1.8% of workers were estimated to experience high levels of exposure in their current job. About 1.2 million Australian workers were estimated to have been exposed to diesel exhaust in their workplace in 2011. This is the first study to describe the prevalence of occupational diesel exhaust exposure in Australia, and it will enable estimation of the number of lung cancers attributable to diesel exhaust exposure in the workplace. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  7. Estimation of insurance premiums for coverage against natural disaster risk: an application of Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Paudel, Y.; Botzen, W. J. W.; Aerts, J. C. J. H.

    2013-03-01

    This study applies Bayesian Inference to estimate flood risk for 53 dyke ring areas in the Netherlands, and focuses particularly on the data scarcity and extreme behaviour of catastrophe risk. The probability density curves of flood damage are estimated through Monte Carlo simulations. Based on these results, flood insurance premiums are estimated using two different practical methods that each account in different ways for an insurer's risk aversion and the dispersion rate of loss data. This study is of practical relevance because insurers have been considering the introduction of flood insurance in the Netherlands, which is currently not generally available.
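
    To illustrate the pricing step, the sketch below simulates annual flood losses by Monte Carlo and prices them under two standard actuarial rules that weight risk aversion and loss dispersion differently. The lognormal damage model, failure probability, and loading factors are assumptions, not the paper's fitted Bayesian posterior.

        # Monte Carlo premium sketch under two actuarial pricing principles.
        import numpy as np

        rng = np.random.default_rng(1)
        n_sim = 100_000
        p_flood = 1 / 1250.0                      # assumed annual dyke-ring failure probability
        damage = rng.lognormal(mean=21.0, sigma=1.0, size=n_sim)  # loss given flood (EUR)

        # Annual loss: damage occurs only in years with a flood.
        annual_loss = np.where(rng.random(n_sim) < p_flood, damage, 0.0)

        lam, alpha = 0.3, 0.1                     # illustrative loading factors
        premium_ev = (1 + lam) * annual_loss.mean()              # expected-value principle
        premium_sd = annual_loss.mean() + alpha * annual_loss.std()  # std-deviation principle
        print("expected-value premium:", premium_ev)
        print("std-deviation premium: ", premium_sd)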

  8. Educational Attainment in the United States: 2009. Population Characteristics. Current Population Reports. P20-566

    ERIC Educational Resources Information Center

    Ryan, Camille L.; Siebens, Julie

    2012-01-01

    This report provides a portrait of educational attainment in the United States based on data collected in the 2009 American Community Survey (ACS) and the 2005-2009 ACS 5-year estimates. It also uses data from the Annual Social and Economic Supplement (ASEC) to the Current Population Survey (CPS) collected in 2009 and earlier, as well as monthly…

  9. [Review of estimation of oceanic primary productivity using remote sensing methods].

    PubMed

    Xu, Hong Yun; Zhou, Wei Feng; Ji, Shi Jian

    2016-09-01

    Accurate estimation of oceanic primary productivity is of great significance in the assessment and management of fisheries resources, marine ecology systems, global change, and other fields. The traditional measurement and estimation of oceanic primary productivity relies on in situ sample data collected by vessels. Satellite remote sensing has the advantage of providing dynamic and eco-environmental parameters of the ocean surface at large scale in real time. Thus, satellite remote sensing has increasingly become an important means for oceanic primary productivity estimation on large spatio-temporal scales. Along with the development of ocean color sensors, models to estimate oceanic primary productivity by satellite remote sensing have been developed; they can be broadly grouped into chlorophyll-based, carbon-based, and phytoplankton absorption-based approaches. The flexibility and complexity of these three kinds of models are presented in the paper. On this basis, the current research status of global estimation of oceanic primary productivity is analyzed and evaluated. In view of this, four research directions need to be strengthened in further study: 1) global oceanic primary productivity estimation should be segmented and studied; 2) research on the absorption coefficient of phytoplankton should be deepened; 3) oceanic remote sensing technology should be enhanced; and 4) in situ measurement of primary productivity should be improved.

  10. Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate

    NASA Astrophysics Data System (ADS)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    2017-11-01

    The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate, and society. In current engineering practice, such safety is ensured by designing infrastructure for the Probable Maximum Precipitation (PMP). Recently, several numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics have not been fully investigated, and thus differing PMP estimates are sometimes obtained without physics-based interpretations. In this study, we present a hybrid approach that takes advantage of both traditional engineering practice and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is modified and applied to five statistically downscaled CMIP5 model outputs, producing an ensemble of PMP estimates in the Pacific Northwest (PNW) for the historical (1970-2016) and future (2050-2099) time periods. The hybrid approach produced historical PMP estimates consistent with the traditional estimates. PMP in the PNW will increase by 50% ± 30% of the current design PMP by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability through increased sea surface temperature, with minor contributions from changes in storm efficiency in the future. Moisture track changes tend to reduce the future PMP. Compared with extreme precipitation, PMP exhibits higher internal variability. Thus, long-term records of high-quality data for both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
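
    The traditional step that the hybrid approach builds on is moisture maximization: each observed storm is scaled by the ratio of the climatological maximum precipitable water to the storm's precipitable water, and the envelope over the storm sample is taken. A minimal sketch, with illustrative numbers rather than PNW data, follows.

        # Moisture-maximization sketch for traditional PMP estimation.
        import numpy as np

        storm_precip = np.array([180.0, 240.0, 210.0])  # observed storm totals (mm)
        w_storm = np.array([28.0, 35.0, 31.0])          # storm precipitable water (mm)
        w_max = np.array([40.0, 42.0, 39.0])            # max precipitable water for storm date (mm)

        maximized = storm_precip * (w_max / w_storm)    # moisture-maximized totals
        pmp = maximized.max()                           # envelope over the storm sample
        print(maximized, pmp)

    Under this formulation, a warmer climate raises w_max (via warmer sea surface temperatures and greater moisture availability), which is consistent with the warming-driven PMP increase the study reports.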

  11. Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xiaodong; Hossain, Faisal; Leung, Lai-Yung

    2017-12-22

    The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate, and society. In current engineering practice, such safety is ensured by designing infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated, and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimates for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparison with the traditional estimates. PMP in the PNW will increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Moisture track changes tend to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation. Thus, high-quality data for both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.

  12. Social Media and Language Processing: How Facebook and Twitter Provide the Best Frequency Estimates for Studying Word Recognition.

    PubMed

    Herdağdelen, Amaç; Marelli, Marco

    2017-05-01

    Corpus-based word frequencies are one of the most important predictors in language processing tasks. Frequencies based on conversational corpora (such as movie subtitles) have been shown to capture the variance in lexical decision tasks better than traditional corpora. In this study, we show that frequencies computed from social media are currently the best frequency-based estimators of lexical decision reaction times (up to a 3.6% increase in explained variance). The results are robust (observed for Twitter- and Facebook-based frequencies on American English and British English datasets) and remain substantial when we control for corpus size. © 2016 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
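
    A minimal sketch of the evaluation logic: regress reaction times on log-transformed frequency and compare explained variance across frequency sources. The toy data below stand in for the social media and subtitle counts and for behavioral norms; the coefficients are arbitrary.

        # Compare R^2 of two hypothetical frequency norms against simulated RTs.
        import numpy as np

        rng = np.random.default_rng(2)
        n_words = 5000
        log_freq_social = rng.normal(3.0, 1.2, n_words)               # hypothetical Zipf-scale values
        log_freq_subtitles = log_freq_social + rng.normal(0, 0.6, n_words)  # noisier proxy
        rt = 900 - 60 * log_freq_social + rng.normal(0, 80, n_words)  # simulated RTs (ms)

        def r_squared(x, y):
            # Variance in y explained by a one-predictor linear fit on x.
            slope, intercept = np.polyfit(x, y, 1)
            resid = y - (slope * x + intercept)
            return 1 - resid.var() / y.var()

        print("social media R^2:", r_squared(log_freq_social, rt))
        print("subtitle R^2:   ", r_squared(log_freq_subtitles, rt))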

  13. Practical Applications for Earthquake Scenarios Using ShakeMap

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Worden, B.; Quitoriano, V.; Goltz, J.

    2001-12-01

    In planning and coordinating emergency response, utilities, local government, and other organizations are best served by conducting training exercises based on realistic earthquake situations, the ones that they are most likely to face. Scenario earthquakes can fill this role; they can be generated for any geologically plausible earthquake or for actual historic earthquakes. ShakeMap Web pages now display selected earthquake scenarios (www.trinet.org/shake/archive/scenario/html), and more events will be added as they are requested and produced. We discuss the methodology and provide practical examples where these scenarios are used directly for risk reduction. Given a selected event, we have developed tools that make it relatively easy to generate a ShakeMap earthquake scenario using the following steps: 1) assume a particular fault or fault segment will (or did) rupture over a certain length; 2) determine the magnitude of the earthquake based on the assumed rupture dimensions; 3) estimate the ground shaking at all locations in the chosen area around the fault; and 4) represent these motions visually by producing ShakeMaps and generating ground motion input for loss estimation modeling (e.g., FEMA's HAZUS). At present, ground motions are estimated using empirical attenuation relationships to estimate peak ground motions on rock conditions. We then correct the amplitude at each location based on the local site soil (NEHRP) conditions, as in the general ShakeMap interpolation scheme. Finiteness is included explicitly, but directivity enters only through the empirical relations. Although current ShakeMap earthquake scenarios are empirically based, substantial improvements in numerical ground motion modeling have been made in recent years. However, loss estimation tools, HAZUS for example, typically require relatively high-frequency (3 Hz) input for predicting losses, above the range of frequencies successfully modeled to date. Achieving fully synthetic ground motion estimates that substantially improve over empirical relations at these frequencies will require developing cost-effective numerical tools for proper theoretical inclusion of known complex ground motion effects. Current efforts underway must continue in order to obtain site, basin, and deeper crustal structure, and to characterize and test 3D earth models (including attenuation and nonlinearity). In contrast, longer-period synthetics (>2 s) are currently being generated in a deterministic fashion to include 3D and shallow site effects, an improvement over empirical estimates alone. As progress is made, we will naturally incorporate such advances into the ShakeMap scenario earthquake and processing methodology. Our scenarios are currently used heavily in emergency response planning and loss estimation. Primary users include city, county, state, and federal government agencies (e.g., the California Office of Emergency Services, FEMA, the County of Los Angeles), as well as emergency response planners and managers for utilities, businesses, and other large organizations. We have found the scenarios are also of fundamental interest to many in the media and the general community interested in the nature of the ground shaking likely experienced in past earthquakes, as well as the effects of rupture on known faults in the future.
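
    As a rough illustration of step 3, the sketch below evaluates a generic empirical attenuation relation for rock-site peak ground acceleration and then applies assumed NEHRP site amplification factors. The functional form and all coefficients are illustrative assumptions, not the specific relations used by ShakeMap.

        # Generic attenuation-relation sketch: rock PGA vs. magnitude and distance,
        # then a crude NEHRP site correction.
        import numpy as np

        def pga_rock_g(mag, r_km, a=-3.5, b=0.8, c=1.1, h=6.0):
            # ln PGA = a + b*M - c*ln(sqrt(R^2 + h^2)); PGA in g. Coefficients illustrative.
            return np.exp(a + b * mag - c * np.log(np.sqrt(r_km ** 2 + h ** 2)))

        site_amp = {"B": 1.0, "C": 1.3, "D": 1.6}   # assumed NEHRP amplification factors

        r = np.array([5.0, 20.0, 50.0])             # distances from rupture (km)
        for nehrp_class, amp in site_amp.items():
            print(nehrp_class, np.round(pga_rock_g(7.0, r) * amp, 3))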

  14. Modeling the Effects of E-cigarettes on Smoking Behavior: Implications for Future Adult Smoking Prevalence.

    PubMed

    Cherng, Sarah T; Tam, Jamie; Christine, Paul J; Meza, Rafael

    2016-11-01

    Electronic cigarette (e-cigarette) use has increased rapidly in recent years. Given the unknown effects of e-cigarette use on cigarette smoking behaviors, e-cigarette regulation has become the subject of considerable controversy. In the absence of longitudinal data documenting the long-term effects of e-cigarette use on smoking behavior and population smoking outcomes, computational models can guide future empirical research and provide insights into the possible effects of e-cigarette use on smoking prevalence over time. Agent-based model examining hypothetical scenarios of e-cigarette use by smoking status and e-cigarette effects on smoking initiation and smoking cessation. If e-cigarettes increase individual-level smoking cessation probabilities by 20%, the model estimates a 6% reduction in smoking prevalence by 2060 compared with baseline model (no effects) outcomes. In contrast, e-cigarette use prevalence among never smokers would have to rise dramatically from current estimates, with e-cigarettes increasing smoking initiation by more than 200% relative to baseline model estimates to achieve a corresponding 6% increase in smoking prevalence by 2060. Based on current knowledge of the patterns of e-cigarette use by smoking status and the heavy concentration of e-cigarette use among current smokers, the simulated effects of e-cigarettes on smoking cessation generate substantially larger changes to smoking prevalence compared with their effects on smoking initiation.
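
    A minimal agent-based sketch of this scenario logic is given below; the baseline initiation and cessation probabilities, the effect multipliers, and the population setup are illustrative assumptions, not the paper's calibrated model (which also tracks births, deaths, and relapse).

        # Agent-based sketch: scale cessation/initiation probabilities by
        # assumed e-cigarette effect multipliers and project prevalence.
        import numpy as np

        rng = np.random.default_rng(3)
        N, YEARS = 100_000, 45                  # agents, 2015 -> 2060
        P_INIT, P_QUIT = 0.010, 0.035           # assumed annual baseline probabilities
        ECIG_QUIT_MULT = 1.20                   # scenario: +20% cessation
        ECIG_INIT_MULT = 1.00                   # scenario: no initiation effect

        # status: 0 = never smoker, 1 = current smoker, 2 = former smoker
        status = np.where(rng.random(N) < 0.18, 1, 0)   # ~18% starting prevalence

        for _ in range(YEARS):
            u = rng.random(N)
            init = (status == 0) & (u < P_INIT * ECIG_INIT_MULT)
            quit = (status == 1) & (u < P_QUIT * ECIG_QUIT_MULT)
            status[init] = 1
            status[quit] = 2

        print("projected smoking prevalence:", (status == 1).mean())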

  15. Modeling the Effects of E-Cigarettes on Smoking Behavior: Implications for Future Adult Smoking Prevalence

    PubMed Central

    Cherng, Sarah T.; Tam, Jamie; Christine, Paul; Meza, Rafael

    2016-01-01

    Background Electronic cigarette (e-cigarette) use has increased rapidly in recent years. Given the unknown effects of e-cigarette use on cigarette smoking behaviors, e-cigarette regulation has become the subject of considerable controversy. In the absence of longitudinal data documenting the long-term effects of e-cigarette use on smoking behavior and population smoking outcomes, computational models can guide future empirical research and provide insights into the possible effects of e-cigarette use on smoking prevalence over time. Methods Agent-based model examining hypothetical scenarios of e-cigarette use by smoking status and e-cigarette effects on smoking initiation and smoking cessation. Results If e-cigarettes increase individual-level smoking cessation probabilities by 20%, the model estimates a 6% reduction in smoking prevalence by 2060 compared to baseline model (no effects) outcomes. In contrast, e-cigarette use prevalence among never smokers would have to rise dramatically from current estimates, with e-cigarettes increasing smoking initiation by more than 200% relative to baseline model estimates in order to achieve a corresponding 6% increase in smoking prevalence by 2060. Conclusions Based on current knowledge of the patterns of e-cigarette use by smoking status and the heavy concentration of e-cigarette use among current smokers, the simulated effects of e-cigarettes on smoking cessation generate substantially larger changes to smoking prevalence relative to their effects on smoking initiation. PMID:27093020

  16. Requirements for Coregistration Accuracy in On-Scalp MEG.

    PubMed

    Zetter, Rasmus; Iivanainen, Joonas; Stenroos, Matti; Parkkonen, Lauri

    2018-06-22

    Recent advances in magnetic sensing have made on-scalp magnetoencephalography (MEG) possible. In particular, optically-pumped magnetometers (OPMs) have reached sensitivity levels that enable their use in MEG. In contrast to the SQUID sensors used in current MEG systems, OPMs do not require cryogenic cooling and can thus be placed within millimetres of the head, enabling the construction of sensor arrays that conform to the shape of an individual's head. To properly estimate the location of neural sources within the brain, one must accurately know the position and orientation of the sensors in relation to the head. With adaptable on-scalp MEG sensor arrays, this coregistration becomes more challenging than in current SQUID-based MEG systems, which use rigid sensor arrays. Here, we used simulations to quantify how accurately one needs to know the position and orientation of sensors in an on-scalp MEG system. The effects that different types of localisation errors have on forward modelling and on source estimates obtained by minimum-norm estimation, dipole fitting, and beamforming are detailed. We found that sensor position errors generally have a larger effect than orientation errors and that these errors affect the localisation accuracy of superficial sources the most. To obtain similar or higher accuracy than with current SQUID-based MEG systems, RMS sensor position and orientation errors should be [Formula: see text] and [Formula: see text], respectively.

  17. Evaluation of methods for calculating maximum allowable standing height in amputees competing in Paralympic athletics.

    PubMed

    Connick, M J; Beckman, E; Ibusuki, T; Malone, L; Tweedy, S M

    2016-11-01

    The International Paralympic Committee has a maximum allowable standing height (MASH) rule that limits stature to a pre-trauma estimate. The MASH rule reduces the probability that bilateral lower limb amputees use disproportionately long prostheses in competition. Although there are several methods for estimating stature, the validity of these methods has not been compared. To identify the most appropriate method for the MASH rule, this study aimed to compare the criterion validity of estimations resulting from the current method, the Contini method, and four Canda methods (Canda-1, Canda-2, Canda-3, and Canda-4). Stature, ulna length, demispan, sitting height, thigh length, upper arm length, and forearm length measurements in 31 males and 30 females were used to calculate the respective estimate for each method. Results showed that Canda-1 (based on four anthropometric variables) produced the smallest error and best fitted the data in males and females. The current method was associated with the largest error of the methods tested because it increasingly overestimated height in people of smaller stature. The results suggest that the set of Canda equations provides a more valid MASH estimation in people with a range of upper limb and bilateral lower limb amputations compared with the current method. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Spectromicroscopy and coherent diffraction imaging: focus on energy materials applications.

    PubMed

    Hitchcock, Adam P; Toney, Michael F

    2014-09-01

    Current and future capabilities of X-ray spectromicroscopy are discussed based on coherence-limited imaging methods which will benefit from the dramatic increase in brightness expected from a diffraction-limited storage ring (DLSR). The methods discussed include advanced coherent diffraction techniques and nanoprobe-based real-space imaging using Fresnel zone plates or other diffractive optics whose performance is affected by the degree of coherence. The capabilities of current systems, improvements which can be expected, and some of the important scientific themes which will be impacted are described, with focus on energy materials applications. Potential performance improvements of these techniques based on anticipated DLSR performance are estimated. Several examples of energy sciences research problems which are out of reach of current instrumentation, but which might be solved with the enhanced DLSR performance, are discussed.

  19. Estimating the impact of adding C-reactive protein as a criterion for lipid lowering treatment in the United States.

    PubMed

    Woloshin, Steven; Schwartz, Lisa M; Kerin, Kevin; Welch, H Gilbert

    2007-02-01

    There is growing interest in using C-reactive protein (CRP) levels to help select patients for lipid lowering therapy, although this practice is not yet supported by evidence of benefit in a randomized trial. Our objective was to estimate the number of Americans potentially affected if a CRP criterion were adopted as an additional indication for lipid lowering therapy. To provide context, we also determined how well current lipid lowering guidelines are being implemented. We analyzed nationally representative data to determine how many Americans age 35 and older meet current National Cholesterol Education Program (NCEP) treatment criteria (a combination of risk factors and their Framingham risk score). We then determined how many of the remaining individuals would meet criteria for treatment using 2 different CRP-based strategies: (1) narrow: treat individuals at intermediate risk (i.e., 2 or more risk factors and an estimated 10-20% risk of coronary artery disease over the next 10 years) with CRP > 3 mg/L and (2) broad: treat all individuals with CRP > 3 mg/L. Analyses are based on the 2,778 individuals participating in the 1999-2002 National Health and Nutrition Examination Survey with complete data on cardiac risk factors, fasting lipid levels, CRP, and use of lipid lowering agents. The outcomes were the estimated number and proportion of American adults meeting NCEP criteria who take lipid-lowering drugs, and the additional number who would be eligible based on CRP testing. About 53 of the 153 million Americans aged 35 and older meet current NCEP criteria (which do not involve CRP) for lipid-lowering treatment. Sixty-five percent, however, are not currently being treated; even among those at highest risk (i.e., patients with established heart disease or its risk equivalent), 62% are untreated. Adopting the narrow and broad CRP strategies would make an additional 2.1 and 25.3 million Americans eligible for treatment, respectively. The latter strategy would make over half the adults age 35 and older eligible for lipid-lowering therapy, with most of the additionally eligible (57%) coming from the lowest NCEP heart risk category (i.e., 0-1 risk factors). There is substantial underuse of lipid lowering therapy for American adults at high risk for coronary disease. Rather than adopting CRP-based strategies, which would make millions more lower-risk patients eligible for treatment (and for whom treatment benefit has not yet been demonstrated in a randomized trial), we should ensure the treatment of currently defined high-risk patients for whom the benefit of therapy is established.
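
    The eligibility-counting logic can be sketched as survey-weighted filtering. In the sketch below, the data frame, column names, thresholds, and weights are assumptions standing in for the NHANES variables and design.

        # Survey-weighted counts of additional eligibility under two CRP strategies.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(4)
        n = 2778
        df = pd.DataFrame({
            "weight": rng.uniform(20_000, 80_000, n),     # survey weight (persons represented)
            "ncep_eligible": rng.random(n) < 0.35,        # already meets NCEP criteria
            "intermediate_risk": rng.random(n) < 0.20,    # 2+ factors, 10-20% 10-y risk
            "crp": rng.lognormal(0.5, 0.8, n),            # mg/L
        })

        not_yet = ~df["ncep_eligible"]
        narrow = not_yet & df["intermediate_risk"] & (df["crp"] > 3)
        broad = not_yet & (df["crp"] > 3)

        print("narrow strategy adds:", df.loc[narrow, "weight"].sum() / 1e6, "million")
        print("broad strategy adds: ", df.loc[broad, "weight"].sum() / 1e6, "million")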

  20. NetMOD v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J

    2015-12-22

    NetMOD is a tool to model the performance of global ground-based explosion monitoring systems. Version 2.0 of the software supports the simulation of seismic, hydroacoustic, and infrasonic detection capability. The tool provides a user interface to execute simulations based upon a hypothetical definition of the monitoring system configuration, geophysical properties of the Earth, and detection analysis criteria. NetMOD will be distributed with a project file defining the basic performance characteristics of the International Monitoring System (IMS), a network of sensors operated by the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). Network modeling is needed to assess and explain the potential effect of changes to the IMS, to prioritize station deployment and repair, and to assess the overall CTBTO monitoring capability now and in the future. Currently the CTBTO uses version 1.0 of NetMOD, provided to them in early 2014. NetMOD will provide a modern tool that covers all the simulations currently available and allows for the development of additional simulation capabilities of the IMS in the future. NetMOD simulates the performance of monitoring networks by estimating the relative amplitudes of the signal and noise measured at each of the stations within the network based upon known geophysical principles. From these signal and noise estimates, a probability of detection may be determined for each of the stations. The detection probabilities at each of the stations may then be combined to produce an estimate of the detection probability for the entire monitoring network.
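
    The final combination step, turning per-station detection probabilities into a network detection probability, can be sketched as a Poisson-binomial tail computed by dynamic programming. The k-of-N detection criterion and the station probabilities below are assumptions for illustration, not NetMOD's actual algorithm.

        # Probability that at least k of N independent stations detect an event.
        import numpy as np

        def prob_at_least_k(p, k):
            # dp[j] = probability that exactly j of the stations seen so far detect.
            dp = np.zeros(len(p) + 1)
            dp[0] = 1.0
            for pi in p:
                dp[1:] = dp[1:] * (1 - pi) + dp[:-1] * pi
                dp[0] *= (1 - pi)
            return dp[k:].sum()

        station_probs = [0.92, 0.80, 0.55, 0.40, 0.33]   # illustrative per-station values
        print(prob_at_least_k(station_probs, k=3))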

  1. Satellite Remote Sensing of Ocean Winds, Surface Waves and Surface Currents during the Hurricanes

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Perrie, W. A.; Liu, G.; Zhang, L.

    2017-12-01

    Hurricanes over the ocean have been observed by spaceborne synthetic aperture radar (SAR) since the first SAR images became available in 1978. SAR has high spatial resolution (about 1 km), relatively large coverage, and the capability to observe in almost all weather, day and night. In this study, seven C-band RADARSAT-2 dual-polarized (VV and VH) ScanSAR wide images from the Canadian Space Agency (CSA) Hurricane Watch Program in 2017 are collected over five hurricanes: Harvey, Irma, Maria, Nate, and Ophelia. We retrieve the ocean winds by applying our C-band Cross-Polarization Coupled-Parameters Ocean (C-3PO) wind retrieval model [Zhang et al., 2017, IEEE TGRS] to the SAR images. Ocean waves are estimated by applying a relationship based on the fetch- and duration-limited nature of wave growth inside hurricanes [Hwang et al., 2016; 2017, J. Phys. Ocean.]. We estimate the ocean surface currents using the Doppler shift extracted from VV-polarized SAR images [Kang et al., 2016, IEEE TGRS]. The C-3PO model is based on theoretical analysis of ocean surface waves and SAR microwave backscatter. Based on the retrieved ocean winds, we estimate the hurricane center locations, maximum wind speeds, and radii of the five hurricanes by adopting the SHEW model (Symmetric Hurricane Estimates for Wind) of Zhang et al. [2017, IEEE TGRS]. Thus, we investigate possible relations between hurricane structures and intensities, and especially possible effects of asymmetrical characteristics on changes in hurricane intensity, such as the eyewall replacement cycle. The three SAR images of Ophelia include the north coast of Ireland and the east coast of Scotland, allowing study of how ocean surface currents respond to the hurricane. A system of methods capable of observing marine winds, surface waves, and surface currents from satellites is of value, even if these data are only available in near real time or from SAR-related satellite images. Insight into high-resolution ocean winds, waves, and currents in hurricanes can be useful for intensity prediction, which has seen relatively few improvements in the past 25 years. In 2018, the RADARSAT Constellation Mission will be launched, increasing SAR coverage roughly tenfold and allowing increased observations during the next hurricane season.

  2. Evaluating Childhood Vaccination Coverage of NIP Vaccines: Coverage Survey versus Zhejiang Provincial Immunization Information System

    PubMed Central

    Hu, Yu; Chen, Yaping

    2017-01-01

    Vaccination coverage in Zhejiang province, east China, is evaluated through repeated coverage surveys. The Zhejiang provincial immunization information system (ZJIIS) was established in 2004 with links to all immunization clinics. ZJIIS has become an alternative means of quickly assessing vaccination coverage. To assess the current completeness and accuracy of vaccination coverage estimates derived from ZJIIS, we compared the estimates from ZJIIS with those from the most recent provincial coverage survey in 2014, which combined interview data with verified data from ZJIIS. Of the 2772 children enrolled in the 2014 provincial survey, the proportions of children with vaccination cards and registered in ZJIIS were 94.0% and 87.4%, respectively. Coverage estimates from ZJIIS were systematically higher than the corresponding estimates obtained through the survey, with a mean difference of 4.5%. Of the vaccination doses registered in ZJIIS, 16.7% differed from the date recorded in the corresponding vaccination cards. Under-registration in ZJIIS significantly influenced the coverage estimates derived from ZJIIS. Therefore, periodic coverage surveys currently provide more complete and reliable results than estimates based on ZJIIS alone. However, further improvement of the completeness and accuracy of ZJIIS will likely allow more reliable and timely estimates in the future. PMID:28696387

  3. The never ending road: improving, adapting and refining a needs-based model to estimate future general practitioner requirements in two Australian states.

    PubMed

    Laurence, Caroline O; Heywood, Troy; Bell, Janice; Atkinson, Kaye; Karnon, Jonathan

    2018-03-27

    Health workforce planning models have been developed to estimate the future health workforce requirements for the population they serve and have been used to inform policy decisions. The aim was to adapt and further develop a needs-based GP workforce simulation model to incorporate the current and estimated geographic distribution of patients and GPs. A needs-based simulation model that estimates the supply of GPs and levels of services required in South Australia (SA) was adapted and applied to the Western Australian (WA) workforce. The main outcome measure was the difference between the number of full-time equivalent (FTE) GPs supplied and required from 2013 to 2033. The base scenario estimated a shortage of GPs in WA from 2019 onwards, reaching a shortage of 493 FTE GPs in 2033, while for SA, estimates showed an oversupply over the projection period. The WA urban and rural models estimated an urban shortage of GPs over this period. A reduced international medical graduate recruitment scenario resulted in estimated shortfalls of GPs by 2033 for both WA and SA. The WA-specific scenarios of lower population projections and registrar work value resulted in a reduced shortage of FTE GPs in 2033, while unfilled training places increased the shortfall of FTE GPs in 2033. The simulation model incorporates contextual differences in its structure that allow within- and cross-jurisdictional comparisons of workforce estimates. It also provides greater insight into the drivers of supply and demand and the impact of changes in workforce policy, promoting more informed decision-making.

  4. Methods for determining time of death.

    PubMed

    Madea, Burkhard

    2016-12-01

    Medicolegal death time estimation must determine the time since death reliably. Reliability can only be established empirically, by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of the terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as ¹H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
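
    A sketch of the two-exponential cooling model behind the nomogram method follows: the standardized temperature ratio Q is modeled as a double exponential in time, and the post-mortem interval is recovered by root-finding. The constants are those commonly cited for the Henssge formulation at moderate ambient temperatures; the example values are illustrative only and omit the nomogram's corrective factors.

        # Two-exponential (Henssge-type) body-cooling model and PMI root-finding.
        import numpy as np
        from scipy.optimize import brentq

        def cooling_Q(t_hours, body_mass_kg):
            # Q(t) = 1.25*exp(B t) - 0.25*exp(5 B t); Q(0) = 1.
            B = -1.2815 * body_mass_kg ** -0.625 + 0.0284
            return 1.25 * np.exp(B * t_hours) - 0.25 * np.exp(5 * B * t_hours)

        def estimate_pmi(rectal_C, ambient_C, body_mass_kg):
            # Standardized temperature ratio from measured temperatures.
            Q_obs = (rectal_C - ambient_C) / (37.2 - ambient_C)
            return brentq(lambda t: cooling_Q(t, body_mass_kg) - Q_obs, 0.01, 100)

        print(estimate_pmi(rectal_C=29.0, ambient_C=18.0, body_mass_kg=75.0), "h")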

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Aladsair J.; Viswanathan, Vilayanur V.; Stephenson, David E.

    A robust performance-based cost model is developed for all-vanadium, iron-vanadium, and iron-chromium redox flow batteries. Systems aspects such as shunt current losses, pumping losses, and thermal management are accounted for. The objective function, set to minimize system cost, allows determination of stack design and operating parameters such as current density, flow rate, and depth of discharge (DOD). Component costs obtained from vendors are used to calculate system costs for various time frames. Data from a 2 kW stack were used to estimate unit energy costs, which were compared with model estimates for the same size electrodes. The tool has been shared with the redox flow battery community to both validate their stack data and guide future direction.

  6. Species coextinctions and the biodiversity crisis.

    PubMed

    Koh, Lian Pin; Dunn, Robert R; Sodhi, Navjot S; Colwell, Robert K; Proctor, Heather C; Smith, Vincent S

    2004-09-10

    To assess the coextinction of species (the loss of a species upon the loss of another), we present a probabilistic model, scaled with empirical data. The model examines the relationship between coextinction levels (proportion of species extinct) of affiliates and their hosts across a wide range of coevolved interspecific systems: pollinating Ficus wasps and Ficus, parasites and their hosts, butterflies and their larval host plants, and ant butterflies and their host ants. Applying a nomographic method based on mean host specificity (number of host species per affiliate species), we estimate that 6300 affiliate species are "coendangered" with host species currently listed as endangered. Current extinction estimates need to be recalibrated by taking species coextinctions into account.
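
    The qualitative core of such a model, that an affiliate with host specificity s survives random loss of a fraction p of hosts unless all s of its hosts are lost, can be checked with a small simulation. This is the p**s intuition only, not the paper's empirically scaled nomographic model.

        # Simulated coextinction probability vs. host specificity.
        import numpy as np

        rng = np.random.default_rng(5)
        n_hosts, p_lost = 1000, 0.2
        lost = rng.random(n_hosts) < p_lost              # randomly extinct hosts

        for s in (1, 2, 4):
            hosts = rng.integers(0, n_hosts, size=(50_000, s))  # random host assignments
            coextinct = lost[hosts].all(axis=1).mean()          # all hosts lost -> coextinct
            print(f"specificity {s}: simulated {coextinct:.4f}, p**s {p_lost ** s:.4f}")

    Higher host specificity (more hosts per affiliate) sharply lowers coextinction risk under random host loss, which is why the mean host specificity drives the nomographic estimate.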

  7. A quality-based cost model for new electronic systems and products

    NASA Astrophysics Data System (ADS)

    Shina, Sammy G.; Saigal, Anil

    1998-04-01

    This article outlines a method for developing a quality-based cost model for the design of new electronic systems and products. The model incorporates a methodology for determining a cost-effective design margin allocation for electronic products and systems and its impact on manufacturing quality and cost. A spreadsheet-based cost estimating tool was developed to help implement this methodology in order for the system design engineers to quickly estimate the effect of design decisions and tradeoffs on the quality and cost of new products. The tool was developed with automatic spreadsheet connectivity to current process capability and with provisions to consider the impact of capital equipment and tooling purchases to reduce the product cost.

  8. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    NASA Astrophysics Data System (ADS)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview-plus-depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimation across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and can improve subjective rendering quality.

  9. Transports and tidal current estimates in the Taiwan Strait from shipboard ADCP observations (1999-2001)

    NASA Astrophysics Data System (ADS)

    Wang, Y. H.; Jan, S.; Wang, D. P.

    2003-05-01

    Tidal and mean flows in the Taiwan Strait are obtained from analysis of 2.5 years (1999-2001) of shipboard ADCP data using a spatial least-squares technique. The average tidal current amplitude is 0.46 m s⁻¹; the maximum amplitude is 0.80 m s⁻¹ at the northeast and southeast entrances, and the minimum amplitude is 0.20 m s⁻¹ in the middle of the Strait. The tidal current ellipses derived from the shipboard ADCP data compare well with the predictions of a high-resolution regional tidal model. For the mean currents, the average velocity is about 0.40 m s⁻¹. The mean transport through the Strait is northward (into the East China Sea) at 1.8 Sv. The transport is related to the along-strait wind by a simple regression: transport (Sv) = 2.42 + 0.12 × wind (m s⁻¹). Using this empirical formula, the maximum seasonal transport is in summer, about 2.7 Sv; the minimum transport is in winter, at 0.9 Sv; and the mean transport is 1.8 Sv. For comparison, this result indicates that the seasonal amplitude is almost identical to the classical estimate by Wyrtki (Physical oceanography of the southeast Asian waters, scientific results of marine investigations of the South China Sea and Gulf of Thailand, 1959-1961. Naga Report 2, Scripps Institution of Oceanography, 195 pp.) based on the mass balance in the South China Sea, while the mean is close to the recent estimate by Isobe [Continental Shelf Research 19 (1999) 195] based on the mass balance in the East China Sea.
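
    Harmonic analysis of this kind can be sketched as ordinary least squares on cosine/sine pairs at known constituent frequencies; the synthetic series below stands in for the shipboard ADCP velocities, and only two constituents (M2, S2) are fitted.

        # Least-squares harmonic tidal analysis sketch.
        import numpy as np

        t = np.arange(0, 30 * 24, 0.5)                  # hours, 30 days sampled half-hourly
        omega = 2 * np.pi / np.array([12.4206, 12.0])   # M2, S2 angular frequencies (rad/h)

        rng = np.random.default_rng(6)
        u = (0.40 + 0.46 * np.cos(omega[0] * t - 1.0) + 0.10 * np.cos(omega[1] * t)
             + rng.normal(0, 0.05, t.size))             # synthetic along-strait velocity (m/s)

        # Design matrix: [1, cos(w1 t), sin(w1 t), cos(w2 t), sin(w2 t)]
        A = np.column_stack([np.ones_like(t)] +
                            [f(w * t) for w in omega for f in (np.cos, np.sin)])
        coef, *_ = np.linalg.lstsq(A, u, rcond=None)

        mean_flow = coef[0]
        amps = [np.hypot(coef[i], coef[i + 1]) for i in (1, 3)]   # constituent amplitudes
        print("mean flow:", mean_flow, "M2, S2 amplitudes:", amps)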

  10. Large scale systems : a study of computer organizations for air traffic control applications.

    DOT National Transportation Integrated Search

    1971-06-01

    Based on current sizing estimates and tracking algorithms, some computer organizations applicable to future air traffic control computing systems are described and assessed. Hardware and software problem areas are defined and solutions are outlined.

  11. 76 FR 62331 - Atlantic Highly Migratory Species; Atlantic Shark Management Measures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ...) or 2099. The target year for rebuilding ranged from 2081 to 2257 depending on the state of nature (i... probability of rebuilding by 2099. The base model also estimated that with the current fishing mortality rate...

  12. Regression model estimation of early season crop proportions: North Dakota, some preliminary results

    NASA Technical Reports Server (NTRS)

    Lin, K. K. (Principal Investigator)

    1982-01-01

    To estimate crop proportions early in the season, an approach is proposed based on: use of a regression-based prediction equation to obtain an a priori estimate for specific major crop groups; modification of this estimate using current-year LANDSAT and weather data; and a breakdown of the major crop groups into specific crops by regression models. Results from the development and evaluation of appropriate regression models for the first portion of the proposed approach are presented. The results show that the model predicts 1980 crop proportions very well at both the county and crop reporting district levels. In terms of planted acreage, the model underpredicted the 1980 published planted acreage at the county level by 9.1 percent. At the crop reporting district level it predicted the 1980 published planted acreage almost exactly, overpredicting by just 0.92 percent.

  13. Inertial sensor-based methods in walking speed estimation: a systematic review.

    PubMed

    Yang, Shuozhi; Li, Qingguo

    2012-01-01

    Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted much attention over the past two decades, and the trend is continuing due to the improving performance and decreasing cost of miniature inertial sensors. To understand the state of the art of current development in this area, a systematic review of the existing methods was conducted in the following electronic engines/databases: PubMed, ISI Web of Knowledge, SportDiscus, and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor-based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm.

  14. Limitations and opportunities for the social cost of carbon (Invited)

    NASA Astrophysics Data System (ADS)

    Rose, S. K.

    2010-12-01

    Estimates of the marginal value of carbon dioxide, the social cost of carbon (SCC), were recently adopted by the U.S. Government in order to satisfy requirements to value the estimated GHG changes of new federal regulations. However, the development and use of SCC estimates of avoided climate change impacts come with significant challenges and controversial decisions. Fortunately, economics can provide some guidance toward conceptually appropriate estimates. At the same time, economics defaults to a benefit-cost decision framework to identify socially optimal policies. However, not all current policy decisions are benefit-cost based, depend on monetized information, or even have the same threshold for information. While a conceptually appropriate SCC is a useful metric, how far can we take it? This talk discusses potential applications of the SCC, limitations based on the state of research and methods, as well as opportunities for, among other things, consistency with climate risk management and research and decision-making tools.

  15. Inertial Sensor-Based Methods in Walking Speed Estimation: A Systematic Review

    PubMed Central

    Yang, Shuozhi; Li, Qingguo

    2012-01-01

    Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted much attention over the past two decades, and the trend is continuing due to the improving performance and decreasing cost of miniature inertial sensors. To understand the state of the art of current development in this area, a systematic review of the existing methods was conducted in the following electronic engines/databases: PubMed, ISI Web of Knowledge, SportDiscus, and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor-based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm. PMID:22778632

  16. Estimation of tool wear during CNC milling using neural network-based sensor fusion

    NASA Astrophysics Data System (ADS)

    Ghosh, N.; Ravi, Y. B.; Patra, A.; Mukhopadhyay, S.; Paul, S.; Mohanty, A. R.; Chattopadhyay, A. B.

    2007-01-01

    Cutting tool wear degrades product quality in manufacturing processes. Monitoring the tool wear value online is therefore needed to prevent degradation in machining quality. Unfortunately, there is no direct way of measuring tool wear online. Therefore one has to adopt an indirect method wherein the tool wear is estimated from several sensors measuring related process variables. In this work, a neural network-based sensor fusion model has been developed for tool condition monitoring (TCM). Features extracted from a number of machining zone signals, namely cutting forces, spindle vibration, spindle current, and sound pressure level, have been fused to estimate the average flank wear of the main cutting edge. Novel strategies, such as signal-level segmentation for temporal registration, feature-space filtering, outlier removal, and estimation-space filtering, have been proposed. The proposed approach has been validated by both laboratory and industrial implementations.
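
    The fusion idea, concatenating features from several channels and mapping them to flank wear with a small network, can be sketched as follows. The synthetic features and wear values are placeholders for the paper's force, vibration, current, and sound features.

        # Neural-network sensor-fusion sketch for wear regression.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(7)
        n = 600
        X = rng.normal(size=(n, 8))            # 8 fused features (forces, vibration, ...)
        wear = (0.1 + 0.05 * X[:, 0] - 0.03 * X[:, 3] + 0.02 * X[:, 5] ** 2
                + rng.normal(0, 0.01, n))      # synthetic flank wear (mm)

        scaler = StandardScaler().fit(X[:450])
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                             random_state=0).fit(scaler.transform(X[:450]), wear[:450])
        pred = model.predict(scaler.transform(X[450:]))
        print("test RMSE (mm):", np.sqrt(np.mean((pred - wear[450:]) ** 2)))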

  17. Protein requirements of healthy pregnant women during early and late gestation are higher than current recommendations.

    PubMed

    Stephens, Trina V; Payne, Magdalene; Ball, Ronald O; Pencharz, Paul B; Elango, Rajavel

    2015-01-01

    Adequate maternal dietary protein intake is necessary for healthy pregnancy. However, current protein intake recommendations for healthy pregnant women are based on factorial calculations of nitrogen balance data derived from nonpregnant adults. Thus, an estimate of protein requirements based on pregnancy-specific data is needed. The objective of this study was to determine protein requirements of healthy pregnant women at 11-20 (early) and 31-38 (late) wk of gestation through use of the indicator amino acid oxidation method. Twenty-nine healthy women (24-37 y) each randomly received a different test protein intake (range: 0.22-2.56 g·kg⁻¹·d⁻¹) during each study day in early (n = 35 observations in 17 women) and late (n = 43 observations in 19 women) gestation; 7 women participated in both early and late gestation studies. The diets were isocaloric and provided energy at 1.7 × resting energy expenditure. Protein was given as a crystalline amino acid mixture based on egg protein composition, except phenylalanine and tyrosine, which were maintained constant across intakes. Protein requirements were determined by measuring the oxidation rate of L-[1-¹³C]phenylalanine to ¹³CO₂ (F¹³CO₂). Breath and urine samples were collected at baseline and isotopic steady state. Linear regression crossover analysis identified a breakpoint (requirement) at minimal F¹³CO₂ in response to different protein intakes. The estimated average requirement (EAR) for protein in early and late gestation was determined to be 1.22 (R² = 0.60; 95% CI: 0.79, 1.66 g·kg⁻¹·d⁻¹) and 1.52 g·kg⁻¹·d⁻¹ (R² = 0.63; 95% CI: 1.28, 1.77 g·kg⁻¹·d⁻¹), respectively. These estimates are considerably higher than the EAR of 0.88 g·kg⁻¹·d⁻¹ currently recommended by the Dietary Reference Intakes. To our knowledge, this study is the first to directly estimate gestational stage-specific protein requirements in healthy pregnant women and suggests that current recommendations based on factorial calculations underestimate requirements. This trial was registered at clinicaltrials.gov as NCT01784198. © 2015 American Society for Nutrition.
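
    The breakpoint analysis can be sketched as a two-phase linear fit with the breakpoint chosen by grid search over the residual sum of squares: F¹³CO₂ declines with protein intake until the requirement is met, then plateaus. The simulated data below assume a true breakpoint of 1.2 g·kg⁻¹·d⁻¹ and are not the study's measurements.

        # Two-phase linear (breakpoint) regression sketch for IAAO-style data.
        import numpy as np

        rng = np.random.default_rng(8)
        intake = rng.uniform(0.2, 2.6, 40)                 # g/kg/d
        true_bp = 1.2
        f13co2 = np.where(intake < true_bp,
                          20 + 15 * (true_bp - intake),    # declining phase
                          20.0) + rng.normal(0, 1.5, 40)   # plateau + noise

        def sse_for_breakpoint(bp):
            # Kinked predictor min(intake, bp): linear below bp, flat above it.
            x = np.minimum(intake, bp)
            A = np.column_stack([np.ones_like(x), x])
            coef, *_ = np.linalg.lstsq(A, f13co2, rcond=None)
            resid = f13co2 - A @ coef
            return (resid ** 2).sum()

        grid = np.linspace(0.4, 2.4, 201)
        best = grid[np.argmin([sse_for_breakpoint(b) for b in grid])]
        print("estimated breakpoint (EAR, g/kg/d):", best)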

  18. Implementation of an acoustic-based methane flux estimation methodology in the Eastern Siberian Arctic Sea

    NASA Astrophysics Data System (ADS)

    Weidner, E. F.; Weber, T. C.; Mayer, L. A.

    2017-12-01

    Quantifying methane flux originating from marine seep systems in climatically sensitive regions is of critical importance for current and future climate studies. Yet the methane contribution from these systems has been difficult to estimate given the broad spatial scale of the ocean and the heterogeneity of seep activity. One such region is the Eastern Siberian Arctic Sea (ESAS), where bubble release into the shallow water column (<40 m average depth) facilitates transport of methane to the atmosphere without oxidation. Quantifying the current seep methane flux from the ESAS is necessary to understand not only the total ocean methane budget, but also to provide baseline estimates against which future climate-induced changes can be measured. At the 2016 AGU fall meeting, we presented a new acoustic-based flux methodology using a calibrated broadband split-beam echosounder. The broad (14-24 kHz) bandwidth provides a vertical resolution of 10 cm, making possible the identification of single bubbles. After calibration using a 64 mm copper sphere of known backscatter, the acoustic backscatter of individual bubbles is measured and compared to analytical models to estimate bubble radius. Additionally, bubbles are precisely located and traced upwards through the water column to estimate rise velocity. The combination of radius and rise velocity allows for gas flux estimation. Here, we follow up with the completed implementation of this methodology applied to the Herald Canyon region of the western ESAS. From the 68 recognized seeps, bubble radii and rise velocities were computed for more than 550 individual bubbles. The range of bubble radii, 1-6 mm, is comparable to those published by other investigators, while the radius-dependent rise velocities are consistent with published models. Methane flux for the Herald Canyon region was estimated by extrapolation from individual seep flux values.

  19. Fractional Zinc Absorption for Men, Women, and Adolescents Is Overestimated in the Current Dietary Reference Intakes.

    PubMed

    Armah, Seth M

    2016-06-01

    The fractional zinc absorption values used in the current Dietary Reference Intakes (DRIs) for zinc were based on data from published studies. However, the inhibitory effect of phytate was underestimated because of the low phytate content of the diets in the studies used. The objective of this study was to estimate the fractional absorption of dietary zinc from the US diet by using 2 published algorithms. Nutrient intake data were obtained from the NHANES 2009-2010 and the corresponding Food Patterns Equivalents Database. Data were analyzed with the use of R software by taking into account the complex survey design. The International Zinc Nutrition Consultative Group (IZiNCG; Brown et al. Food Nutr Bull 2004;25:S99-203) and Miller et al. (Br J Nutr 2013;109:695-700) models were used to estimate zinc absorption. Geometric means (95% CIs) of zinc absorption for all subjects were 30.1% (29.9%, 30.2%) or 31.3% (30.9%, 31.6%) with the use of the IZiNCG model and Miller et al. model, respectively. For men, women, and adolescents, absorption values obtained in this study with the use of the 2 models were 27.2%, 31.4%, and 30.1%, respectively, for the IZiNCG model and 28.0%, 33.0%, and 31.6%, respectively, for the Miller et al. model, compared with the 41%, 48%, and 40%, respectively, used in the current DRIs. For preadolescents, estimated absorption values (31.1% and 32.8% for the IZiNCG model and Miller et al. model, respectively) compare well with the conservative estimate of 30% used in the DRIs. When the new estimates of zinc absorption were applied to the current DRI values for men and women, the results suggest that the Estimated Average Requirement (EAR) and RDA for these groups need to be increased by nearly one-half of the current values in order to meet their requirements for absorbed zinc. These data suggest that zinc absorption is overestimated for men, women, and adolescents in the current DRI. Upward adjustments of the DRI for these groups are recommended. © 2016 American Society for Nutrition.

  20. Alternative Regression Equations for Estimation of Annual Peak-Streamflow Frequency for Undeveloped Watersheds in Texas using PRESS Minimization

    USGS Publications Warehouse

    Asquith, William H.; Thompson, David B.

    2008-01-01

    The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimation of peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimation of peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of the log10 transformation does not fully linearize the relations between the variables. As a result, some systematic bias remains in the current equations. The bias results in overestimation of peak streamflow for both the smallest and largest watersheds, and it increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation in log10 space between peak streamflow and drainage area. The bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address the bias, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. The equations derived from PRESS minimization have smaller PRESS statistics and residual standard errors than the log10-exclusive equations. Selected residual plots for the PRESS-minimized equations are presented to demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to the error associated with previous equations and because the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
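
    The PRESS statistic can be computed without refitting n times by using leave-one-out residuals obtained from the hat matrix. The sketch below grid-searches a power transform of drainage area on synthetic data standing in for gaged sites; the data-generating curve and grid are assumptions.

        # PRESS minimization over a power transform of drainage area.
        import numpy as np

        rng = np.random.default_rng(9)
        n = 200
        area = 10 ** rng.uniform(0, 4, n)                 # drainage area (mi^2)
        logq = 2.0 + 0.06 * area ** 0.3 + rng.normal(0, 0.2, n)  # synthetic log10 peak flow

        def press(exponent):
            X = np.column_stack([np.ones(n), area ** exponent])
            H = X @ np.linalg.solve(X.T @ X, X.T)         # hat matrix H = X (X'X)^-1 X'
            resid = logq - H @ logq
            loo = resid / (1 - np.diag(H))                # leave-one-out residuals
            return (loo ** 2).sum()

        grid = np.linspace(0.05, 1.0, 96)
        best = grid[np.argmin([press(lam) for lam in grid])]
        print("PRESS-minimizing exponent:", best)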

  1. Desirable properties of wood for sustainable development in the twenty-first century

    Treesearch

    Kenneth E. Skog; Theodore H. Wegner; Ted Bilek; Charles H. Michler

    2015-01-01

    We previously identified desirable properties for wood based on current market-based trends for commercial uses (Wegner et al. 2010). World business models increasingly incorporate the concept of social responsibility and the tenets of sustainable development. Sustainable development is needed to support an estimated 9 billion people by 2050 within the carrying...

  2. Correction Methods for Organic Carbon Artifacts when Using Quartz-Fiber Filters in Large Particulate Matter Monitoring Networks: The Regression Method and Other Options

    EPA Science Inventory

    Sampling and handling artifacts can bias filter-based measurements of particulate organic carbon (OC). Several measurement-based methods for OC artifact reduction and/or estimation are currently used in research-grade field studies. OC frequently is not artifact-corrected in larg...

  3. Curriculum-Based Measurement of Reading Progress Monitoring: The Importance of Growth Magnitude and Goal Setting in Decision Making

    ERIC Educational Resources Information Center

    Van Norman, Ethan R.; Christ, Theodore J.; Newell, Kirsten W.

    2017-01-01

    Research regarding the technical adequacy of growth estimates from curriculum-based measurement of reading progress monitoring data suggests that current decision-making frameworks are likely to yield inaccurate recommendations unless data are collected for extensive periods of time. Instances where data may not need to be collected for long…

  4. Experimental demonstration of OFDM/OQAM transmission with DFT-based channel estimation for visible laser light communications

    NASA Astrophysics Data System (ADS)

    He, Jing; Shi, Jin; Deng, Rui; Chen, Lin

    2017-08-01

    Recently, visible light communication (VLC) based on light-emitting diodes (LEDs) has been considered a candidate technology for fifth-generation (5G) communications: VLC is free of electromagnetic interference, and it can simplify the integration of VLC into heterogeneous wireless networks. Because the data rate of LED-based VLC systems is limited by low pumping efficiency, small output power, and narrow modulation bandwidth, visible laser light communication (VLLC) systems based on laser diodes (LDs) have attracted increasing attention. In addition, orthogonal frequency division multiplexing/offset quadrature amplitude modulation (OFDM/OQAM) is currently attracting attention in optical communications. Because it requires no cyclic prefix (CP) and uses pulse shapes well localized in time and frequency, it can achieve high spectral efficiency. Moreover, OFDM/OQAM has lower out-of-band power leakage, which increases system robustness against inter-carrier interference (ICI) and frequency offset. In this paper, a Discrete Fourier Transform (DFT)-based channel estimation scheme combined with the interference approximation method (IAM) is proposed and experimentally demonstrated for a VLLC OFDM/OQAM system. The performance of the VLLC OFDM/OQAM system with and without DFT-based channel estimation is investigated. Moreover, the proposed DFT-based channel estimation scheme and the intra-symbol frequency-domain averaging (ISFA)-based method are also compared for the VLLC OFDM/OQAM system. The experimental results show that the EVM performance of the DFT-based channel estimation scheme is improved by about 3 dB compared with the conventional IAM method. In addition, the DFT-based channel estimation scheme can resist channel noise more effectively than the ISFA-based method.
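
    The DFT-based denoising step can be sketched for a generic OFDM-style system as follows: least-squares estimates on the subcarriers are transformed to the time domain, taps beyond the assumed channel length are zeroed, and the result is transformed back. The IAM preamble processing specific to OFDM/OQAM is abstracted into a per-subcarrier least-squares estimate, and the channel length and noise level are assumptions.

        # DFT-based channel-estimation denoising sketch.
        import numpy as np

        rng = np.random.default_rng(10)
        N, L = 256, 8                                # subcarriers, assumed channel taps
        h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)
        H = np.fft.fft(h, N)                         # true frequency response

        X = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))   # known QPSK pilot symbols
        Y = H * X + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

        H_ls = Y / X                                 # per-subcarrier LS estimate
        h_time = np.fft.ifft(H_ls)
        h_time[L:] = 0                               # keep only the first L taps (denoising)
        H_dft = np.fft.fft(h_time)

        mse = lambda est: np.mean(np.abs(est - H) ** 2)
        print("LS MSE:", mse(H_ls), "DFT-denoised MSE:", mse(H_dft))

    Zeroing the noise-only taps reduces the estimation noise power roughly by the factor L/N, which is the intuition behind the EVM gain reported above.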

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favorite, Jeffrey A.; Gonzalez, Esteban

    Adjoint-based first-order perturbation theory is applied again to boundary perturbation problems. Rahnema developed a perturbation estimate that gives an accurate first-order approximation of a flux or reaction rate within a radioactive system when the boundary is perturbed. When the response of interest is the flux or leakage current on the boundary, the Roussopoulos perturbation estimate has long been used. The Rahnema and Roussopoulos estimates differ in one term. Our paper shows that the Rahnema and Roussopoulos estimates can be derived consistently, using different responses, from a single variational functional (due to Gheorghiu and Rahnema), resolving any apparent contradiction. In analytic test problems, Rahnema's estimate and the Roussopoulos estimate produce exact first derivatives of the response of interest when appropriately applied. We also present a realistic, nonanalytic test problem.

  6. Influence of Crown Biomass Estimators and Distribution on Canopy Fuel Characteristics in Ponderosa Pine Stands of the Black Hills

    Treesearch

    Tara Keyser; Frederick Smith

    2009-01-01

    Two determinants of crown fire hazard are canopy bulk density (CBD) and canopy base height (CBH). The Fire and Fuels Extension to the Forest Vegetation Simulator (FFE-FVS) is a model that predicts CBD and CBH. Currently, FFE-FVS accounts for neither geographic variation in tree allometries nor the nonuniform distribution of crown mass when one is estimating CBH and CBD...

  7. Advanced Extremely High Frequency Satellite (AEHF)

    DTIC Science & Technology

    2015-12-01

    ...control their tactical and strategic forces at all levels of conflict up to and including general nuclear war, and it supports the attainment of... Confidence Level of cost estimate for current APB: 50%. The ICE that supports the AEHF SV 1-4, like all life-cycle cost... mathematically the precise confidence levels associated with life-cycle cost estimates prepared for MDAPs. Based on the rigor in methods used in building...

  8. Wideband Global SATCOM (WGS)

    DTIC Science & Technology

    2015-12-01

    ...system level testing. The WGS-6 financial data is not reported in this SAR because funding is provided by Australia in exchange for access to a... Confidence Level of cost estimate for current APB: 50%. The ICE to support the WGS Milestone C decision... to calculate mathematically the precise confidence levels associated with life-cycle cost estimates prepared for MDAPs. Based on the rigor in...

  9. Peer Review of Launch Environments

    NASA Technical Reports Server (NTRS)

    Wilson, Timmy R.

    2011-01-01

    Catastrophic failures of launch vehicles during launch and ascent are currently modeled using equivalent trinitrotoluene (TNT) estimates. This approach tends to over-predict the blast effect with subsequent impact to launch vehicle and crew escape requirements. Bangham Engineering, located in Huntsville, Alabama, assembled a less-conservative model based on historical failure and test data coupled with physical models and estimates. This white paper summarizes NESC's peer review of the Bangham analytical work completed to date.

  10. Refractory Materials for Flame Deflector Protection System Corrosion Control: Flame Deflector Protection System Life Cycle Cost Analysis Report

    NASA Technical Reports Server (NTRS)

    Calle, Luz Marina; Hintze, Paul E.; Parlier, Christopher R.; Coffman, Brekke E.; Kolody, Mark R.; Curran, Jerome P.; Trejo, David; Reinschmidt, Ken; Kim, Hyung-Jin

    2009-01-01

    A 20-year life cycle cost analysis was performed to compare the operational life cycle cost, processing/turnaround timelines, and operations manpower inspection/repair/refurbishment requirements for corrosion protection of the Kennedy Space Center launch pad flame deflector associated with the existing cast-in-place materials and a newer advanced refractory ceramic material. The analysis compared the estimated costs of (1) continuing to use the current refractory material without any changes; (2) completely reconstructing the flame trench using the current refractory material; and (3) completely reconstructing the flame trench with a new high-performance refractory material. Cost estimates were based on an analysis of the amount of damage that occurs after each launch and an estimate of the average repair cost. Alternative 3 was found to save $32M compared to Alternative 1 and $17M compared to Alternative 2 over a 20-year life cycle.

  11. Comment: Characterization of Two Historic Smallpox Specimens from a Czech Museum.

    PubMed

    Porter, Ashleigh F; Duggan, Ana T; Poinar, Hendrik N; Holmes, Edward C

    2017-09-28

    The complete genome sequences of two strains of variola virus (VARV) sampled from human smallpox specimens present in the Czech National Museum, Prague, were recently determined, with one of the sequences estimated to date to the mid-19th century. Using molecular clock methods, the authors of this study go on to infer that the currently available strains of VARV share an older common ancestor, at around 1350 AD, than some recent estimates based on other archival human samples. Herein, we show that the two Czech strains exhibit anomalous branch lengths given their proposed age, and by assuming a constant rate of evolutionary change across the rest of the VARV phylogeny estimate that their true age in fact lies between 1918 and 1937. We therefore suggest that the age of the common ancestor of currently available VARV genomes most likely dates to the late 16th and early 17th centuries and not ~1350 AD.

  12. Comment: Characterization of Two Historic Smallpox Specimens from a Czech Museum

    PubMed Central

    Porter, Ashleigh F.; Duggan, Ana T.

    2017-01-01

    The complete genome sequences of two strains of variola virus (VARV) sampled from human smallpox specimens present in the Czech National Museum, Prague, were recently determined, with one of the sequences estimated to date to the mid-19th century. Using molecular clock methods, the authors of this study go on to infer that the currently available strains of VARV share an older common ancestor, at around 1350 AD, than some recent estimates based on other archival human samples. Herein, we show that the two Czech strains exhibit anomalous branch lengths given their proposed age, and by assuming a constant rate of evolutionary change across the rest of the VARV phylogeny estimate that their true age in fact lies between 1918 and 1937. We therefore suggest that the age of the common ancestor of currently available VARV genomes most likely dates to the late 16th and early 17th centuries and not ~1350 AD. PMID:28956829

  13. Anthropogenic range contractions bias species climate change forecasts

    NASA Astrophysics Data System (ADS)

    Faurby, Søren; Araújo, Miguel B.

    2018-03-01

    Forecasts of species range shifts under climate change most often rely on ecological niche models, in which characterizations of climate suitability are highly contingent on the species range data used. If ranges are far from equilibrium under current environmental conditions, for instance owing to local extinctions in otherwise suitable areas, modelled environmental suitability can be truncated, leading to biased estimates of the effects of climate change. Here we examine the impact of such biases on estimated risks from climate change by comparing models of the distribution of North American mammals based on current ranges with models based on ranges that account for historical information on species distributions. We find that estimated future diversity is drastically underestimated almost everywhere, except in coastal Alaska, unless the full historical distribution of the species is included in the models. Consequently, forecasts of climate change impacts on biodiversity for many clades are unlikely to be reliable without acknowledging anthropogenic influences on contemporary ranges.

  14. Prevalence of dermatitis in the working population, United States, 2010 National Health Interview Survey.

    PubMed

    Luckhaupt, Sara E; Dahlhamer, James M; Ward, Brian W; Sussell, Aaron L; Sweeney, Marie H; Sestito, John P; Calvert, Geoffrey M

    2013-06-01

    Prevalence patterns of dermatitis among workers offer clues about risk factors and targets for prevention, but population-based estimates of the burden of dermatitis among US workers are lacking. Data from an occupational health supplement to the 2010 National Health Interview Survey (NHIS-OHS) were used to estimate the prevalence of dermatitis overall and by demographic characteristics and industry and occupation (I&O) of current/recent employment. Data were available for 27,157 adults, including 17,524 current/recent workers. The overall prevalence rate of dermatitis among current/recent workers was 9.8% (range among I&O groups: 5.5-15.4%), representing approximately 15.2 million workers with dermatitis. The highest prevalence rates were among I&O groups related to health care. Overall, 5.6% of dermatitis cases among workers (9.2% among healthcare workers) were attributed to work by health professionals. Dermatitis affected over 15 million US workers in 2010, and its prevalence varied by demographic characteristics and industry and occupation of employment. The prevalence rate of work-related dermatitis based on the NHIS-OHS was approximately 100-fold higher than incidence rates based on the Bureau of Labor Statistics' Survey of Occupational Illness and Injury. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.

  15. Wavelet-Based Artifact Identification and Separation Technique for EEG Signals during Galvanic Vestibular Stimulation

    PubMed Central

    Adib, Mani; Cretu, Edmond

    2013-01-01

    We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, the GVS current distribution throughout the scalp generates an artifact on EEG signals, which must be eliminated before the EEG signals recorded during GVS can be analyzed. We propose a novel method to estimate the contribution of the GVS current in the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use the wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts than the others: a higher signal-to-artifact ratio of −1.625 dB was achieved, outperforming ICA-based methods, regression methods, and adaptive filters. PMID:23956786
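
    The band-wise estimate-and-subtract idea can be illustrated with a short sketch (Python with the PyWavelets package; the signals, wavelet choice, and simple per-band least-squares weight are assumptions for illustration, not the authors' exact estimator):

    ```python
    import numpy as np
    import pywt  # PyWavelets

    # Decompose both the EEG channel and the GVS current into wavelet bands,
    # regress out the current's contribution per band, then reconstruct.
    def remove_gvs_artifact(eeg, gvs, wavelet="db4", level=5):
        eeg_c = pywt.wavedec(eeg, wavelet, level=level)
        gvs_c = pywt.wavedec(gvs, wavelet, level=level)
        cleaned = []
        for e, g in zip(eeg_c, gvs_c):
            beta = np.dot(g, e) / np.dot(g, g)   # per-band regression weight
            cleaned.append(e - beta * g)         # subtract estimated artifact
        return pywt.waverec(cleaned, wavelet)

    t = np.arange(0, 4, 1 / 250)                         # 4 s at 250 Hz
    gvs = np.sin(2 * np.pi * 1.0 * t)                    # stimulation current
    eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.8 * gvs   # brain signal + artifact
    cleaned = remove_gvs_artifact(eeg, gvs)[: len(t)]
    print(np.corrcoef(cleaned, gvs)[0, 1])               # near zero after removal
    ```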

  16. Global gridded anthropogenic emissions inventory of carbonyl sulfide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zumkehr, Andrew; Hilton, Tim; Whelan, Mary

    Atmospheric carbonyl sulfide (COS or OCS) is the most abundant sulfur-containing gas in the troposphere and is an atmospheric tracer for the carbon cycle. Gridded inventories of global anthropogenic COS are used for interpreting global COS measurements. However, previous gridded anthropogenic data are a climatological estimate based on input data that are over three decades old and not representative of current conditions. Here we develop a new gridded data set of global anthropogenic COS sources that includes more source sectors than previously available and uses the most current emissions factors and industry activity data as input. Additionally, the inventory is provided as annually varying estimates from years 1980–2012 and employs a source-specific spatial scaling procedure. We estimate a global source in year 2012 of 406 Gg S y⁻¹ (range of 223–586 Gg S y⁻¹), which is highly concentrated in China and is twice as large as the previous gridded inventory. Our large upward revision in the bottom-up estimate of the source is consistent with a recent top-down estimate based on air-monitoring and Antarctic firn data. Furthermore, our inventory time trends, including a decline in the 1990s and growth after the year 2000, are qualitatively consistent with trends in atmospheric data. Lastly, similarities between the spatial distribution in this inventory and remote sensing data suggest that the anthropogenic source could potentially play a role in explaining a missing source in the global COS budget.

  17. Global gridded anthropogenic emissions inventory of carbonyl sulfide

    DOE PAGES

    Zumkehr, Andrew; Hilton, Tim; Whelan, Mary; ...

    2018-03-31

    Atmospheric carbonyl sulfide (COS or OCS) is the most abundant sulfur-containing gas in the troposphere and is an atmospheric tracer for the carbon cycle. Gridded inventories of global anthropogenic COS are used for interpreting global COS measurements. However, previous gridded anthropogenic data are a climatological estimate based on input data that are over three decades old and not representative of current conditions. Here we develop a new gridded data set of global anthropogenic COS sources that includes more source sectors than previously available and uses the most current emissions factors and industry activity data as input. Additionally, the inventory is provided as annually varying estimates from years 1980–2012 and employs a source-specific spatial scaling procedure. We estimate a global source in year 2012 of 406 Gg S y⁻¹ (range of 223–586 Gg S y⁻¹), which is highly concentrated in China and is twice as large as the previous gridded inventory. Our large upward revision in the bottom-up estimate of the source is consistent with a recent top-down estimate based on air-monitoring and Antarctic firn data. Furthermore, our inventory time trends, including a decline in the 1990s and growth after the year 2000, are qualitatively consistent with trends in atmospheric data. Lastly, similarities between the spatial distribution in this inventory and remote sensing data suggest that the anthropogenic source could potentially play a role in explaining a missing source in the global COS budget.

  18. Global gridded anthropogenic emissions inventory of carbonyl sulfide

    NASA Astrophysics Data System (ADS)

    Zumkehr, Andrew; Hilton, Tim W.; Whelan, Mary; Smith, Steve; Kuai, Le; Worden, John; Campbell, J. Elliott

    2018-06-01

    Atmospheric carbonyl sulfide (COS or OCS) is the most abundant sulfur-containing gas in the troposphere and is an atmospheric tracer for the carbon cycle. Gridded inventories of global anthropogenic COS are used for interpreting global COS measurements. However, previous gridded anthropogenic data are a climatological estimate based on input data that are over three decades old and not representative of current conditions. Here we develop a new gridded data set of global anthropogenic COS sources that includes more source sectors than previously available and uses the most current emissions factors and industry activity data as input. Additionally, the inventory is provided as annually varying estimates from years 1980-2012 and employs a source-specific spatial scaling procedure. We estimate a global source in year 2012 of 406 Gg S y⁻¹ (range of 223-586 Gg S y⁻¹), which is highly concentrated in China and is twice as large as the previous gridded inventory. Our large upward revision in the bottom-up estimate of the source is consistent with a recent top-down estimate based on air-monitoring and Antarctic firn data. Furthermore, our inventory time trends, including a decline in the 1990s and growth after the year 2000, are qualitatively consistent with trends in atmospheric data. Finally, similarities between the spatial distribution in this inventory and remote sensing data suggest that the anthropogenic source could potentially play a role in explaining a missing source in the global COS budget.

  19. Diet Modeling in Older Americans: The Impact of Increasing Plant-Based Foods or Dairy Products on Protein Intake.

    PubMed

    Houchins, J A; Cifelli, C J; Demmer, E; Fulgoni, V L

    2017-01-01

    To determine the effects of increasing plant-based foods or dairy products on protein intake in older Americans by performing diet modeling. Data from What We Eat in America (WWEIA), the dietary component of the National Health and Nutrition Examination Survey (NHANES), 2007-2010 for Americans aged 51 years and older (n=5,389), divided as 51-70 years (n=3,513) and 71 years and older (n=1,876), were used. Usual protein intake was compared among three dietary models that increased intakes by 100%: (1) plant-based foods; (2) higher protein plant-based foods (i.e., legumes, nuts, seeds, soy); and (3) dairy products (milk, cheese, and yogurt). Models (1) and (2) had commensurate reductions in animal-based protein intake. Doubling intake of plant-based foods (as currently consumed) resulted in a drop in protein intake of approximately 22% for males and females aged 51+ years. For older males and females, aged 71+ years, doubling intake of plant-based foods (as currently consumed) resulted in an estimated usual intake of 0.83±0.02 g/kg ideal body weight (iBW)/day and 0.78±0.01 g/kg iBW/day, respectively. In this model, 33% of females aged 71+ years did not meet the estimated average requirement for protein. Doubling dairy product consumption achieved current protein intake recommendations. These data illustrate that increasing plant-based foods and reducing animal-based products could have unintended consequences for the protein intake of older Americans. Doubling dairy product intake can help older adults reach an intake level of approximately 1.2 g/kg iBW/day, consistent with the growing consensus that older adults need to consume higher levels of protein for health.

  20. New method of a "point-like" neutron source creation based on sharp focusing of high-current deuteron beam onto deuterium-saturated target for neutron tomography

    NASA Astrophysics Data System (ADS)

    Golubev, S.; Skalyga, V.; Izotov, I.; Sidorov, A.

    2017-02-01

    The possibility of creating a compact, powerful, point-like neutron source is discussed. The neutron yield of a source based on the deuterium-deuterium (D-D) reaction is estimated at the level of 10¹¹ s⁻¹ (10¹³ s⁻¹ for the deuterium-tritium reaction). The fusion takes place due to bombardment of a deuterium- (or tritium-) loaded target by a high-current focused deuterium ion beam with an energy of 100 keV. The ion beam is formed by means of a high-current quasi-gasdynamic ion source of a new generation based on an electron cyclotron resonance (ECR) discharge in an open magnetic trap sustained by powerful microwave radiation. The prospects of the proposed generator for neutron tomography are discussed. The suggested method is compared to point-like neutron sources based on a spark produced by powerful femtosecond laser pulses.

  1. Lithium manganese oxide spinel electrodes

    NASA Astrophysics Data System (ADS)

    Darling, Robert Mason

    Batteries based on intercalation electrodes are currently being considered for a variety of applications, including automobiles. This thesis is concerned with the simulation and experimental investigation of one such system: spinel LiyMn2O4. A mathematical model simulating the behavior of an electrochemical cell containing an intercalation electrode is developed and applied to LiyMn2O4-based systems. The influence of the exchange current density on the propagation of the reaction through the depth of the electrode is examined theoretically. Galvanostatic cycling and relaxation phenomena on open circuit are simulated for different particle-size distributions. The electrode with uniformly sized particles shows the best performance when the current is on, and relaxes toward equilibrium most quickly. The low-frequency impedance of a porous electrode containing a particle-size distribution is investigated with an analytic solution and a simplified version of the mathematical model. The presence of the particle-size distribution leads to an apparent diffusion coefficient with an incorrect concentration dependence. A Li/1 M LiClO4 in propylene carbonate (PC)/LiyMn2O4 cell is used to investigate the influence of side reactions on the current-potential behavior of intercalation electrodes. Slow cyclic voltammograms and self-discharge data are combined to estimate the reversible potential of the host material and the kinetic parameters for the side reaction. This information is then used, together with estimates of the solid-state diffusion coefficient and main-reaction exchange current density, in a mathematical model of the system. Predictions from the model compare favorably with continuous cycling results and galvanostatic experiments with periodic current interruptions. The variation with composition of the diffusion coefficient of lithium in LiyMn2O4 is estimated from incomplete galvanostatic discharges following open-circuit periods. The results compare favorably with those available in the literature. Dynamic Monte Carlo simulations were conducted to investigate the concentration dependence of the diffusion coefficient at a fundamental level. The dynamic Monte Carlo predictions compare favorably with the experimental data.

  2. Equatorial Currents in the Indian Ocean Based on Measurements in February 2017

    NASA Astrophysics Data System (ADS)

    Neiman, V. G.; Frey, D. I.; Ambrosimov, A. K.; Kaplunenko, D. D.; Morozov, E. G.; Shapovalov, S. M.

    2018-03-01

    We analyze the results of measurements of the Tareev equatorial undercurrent in the Indian Ocean in February 2017. Sections from 3° S to 3°45' N along 68° and 65° E crossed the current with measurements of the temperature, salinity, and current velocity at oceanographic stations. The maximum velocity of this eastward flow was recorded precisely at the equator. The velocity at a depth of 50 m was approximately 60 cm/s. The transport of the Tareev Current was estimated at 9.8 Sv (1 Sv = 10⁶ m³/s).
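
    As a rough plausibility check on the reported transport, a mean core velocity times an assumed cross-sectional area reproduces the order of magnitude (the width and thickness below are illustrative guesses, not the paper's integration bounds):

    ```python
    # Hypothetical section dimensions; the paper integrates measured velocity
    # profiles across the section rather than using a single mean value.
    v_mean = 0.60                  # mean eastward velocity, m/s
    width, depth = 150e3, 110.0    # assumed current width (m) and thickness (m)
    print(v_mean * width * depth / 1e6)   # ~9.9 Sv, the order of the reported 9.8 Sv
    ```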

  3. Electrical stimulation therapy for dysphagia: a follow-up survey of USA dysphagia practitioners.

    PubMed

    Barikroo, Ali; Carnaby, Giselle; Crary, Michael

    2017-12-01

    The aim of this study was to compare current application, practice patterns, clinical outcomes, and professional attitudes of dysphagia practitioners regarding electrical stimulation (e-stim) therapy with similar data obtained in 2005. A web-based survey was posted on the American Speech-Language-Hearing Association Special Interest Group 13 webpage for 1 month. A total of 271 survey responses were analyzed and descriptively compared with the archived responses from the 2005 survey. Results suggested that e-stim application increased by 47% among dysphagia practitioners over the last 10 years. The frequency of weekly e-stim therapy sessions decreased while the reported total number of treatment sessions increased between the two surveys. Advancement in oral diet was the most commonly reported improvement in both surveys. Overall, reported satisfaction levels of clinicians and patients regarding e-stim therapy decreased. Still, the majority of e-stim practitioners continue to recommend this treatment modality to other dysphagia practitioners. Results from the novel items in the current survey suggested that motor level e-stim (e.g. higher amplitude) is most commonly used during dysphagia therapy with no preferred electrode placement. Furthermore, the majority of clinicians reported high levels of self-confidence regarding their ability to perform e-stim. The results of this survey highlight ongoing changes in application, practice patterns, clinical outcomes, and professional attitudes associated with e-stim therapy among dysphagia practitioners.

  4. NIRS-EEG joint imaging during transcranial direct current stimulation: Online parameter estimation with an autoregressive model.

    PubMed

    Sood, Mehak; Besson, Pierre; Muthalib, Makii; Jindal, Utkarsh; Perrey, Stephane; Dutta, Anirban; Hayashibe, Mitsuhiro

    2016-12-01

    Transcranial direct current stimulation (tDCS) has been shown to perturb both cortical neural activity and hemodynamics during (online) and after stimulation; however, the mechanisms of these tDCS-induced online and after-effects are not known. Online resting-state spontaneous brain activation may be relevant for monitoring tDCS neuromodulatory effects and can be measured using electroencephalography (EEG) in conjunction with near-infrared spectroscopy (NIRS). We present a Kalman-filter-based online parameter estimation of an autoregressive (ARX) model to track the transient coupling relation between changes in the EEG power spectrum and NIRS signals during anodal tDCS (2 mA, 10 min) using a 4×1 ring high-definition montage. Our online ARX parameter estimation technique, using the cross-correlation between log (base-10) transformed EEG band-power (0.5-11.25 Hz) and the NIRS oxy-hemoglobin signal in the low-frequency (≤0.1 Hz) range, was shown in 5 healthy subjects to be sensitive to transient EEG-NIRS coupling changes in resting-state spontaneous brain activation during anodal tDCS. Conventional sliding-window cross-correlation calculations suffer a fundamental problem in computing the phase relationship, as the signal in the window is considered time-invariant and the choice of window length and step size is subjective. Here, the Kalman-filter-based method allowed online ARX parameter estimation using time-varying signals and could capture transients in the coupling relationship between EEG and NIRS signals. Our new online ARX-model-based tracking method allows continuous assessment of the transient coupling between the electrophysiological (EEG) and hemodynamic (NIRS) signals representing resting-state spontaneous brain activation during anodal tDCS. Published by Elsevier B.V.
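
    A minimal Kalman-filter tracker for time-varying ARX parameters, the general technique described, might look like this (Python/NumPy; the model orders, noise covariances, and synthetic input/output stand-ins for the EEG and NIRS signals are assumptions):

    ```python
    import numpy as np

    # Track theta_t in y_t = phi_t . theta_t + e_t, where theta follows a
    # random walk; phi stacks past outputs and inputs (ARX regressors).
    def kalman_arx(y, u, na=2, nb=2, q=1e-4, r=1e-2):
        theta = np.zeros(na + nb)                 # [a1..a_na, b1..b_nb]
        P = np.eye(na + nb)
        history = []
        for t in range(max(na, nb), len(y)):
            phi = np.r_[-y[t - na:t][::-1], u[t - nb:t][::-1]]
            P = P + q * np.eye(len(theta))        # parameter drift (random walk)
            k = P @ phi / (phi @ P @ phi + r)     # Kalman gain
            theta = theta + k * (y[t] - phi @ theta)
            P = P - np.outer(k, phi @ P)
            history.append(theta.copy())
        return np.array(history)

    rng = np.random.default_rng(1)
    u = rng.standard_normal(500)                  # stand-in for EEG band-power
    y = np.zeros(500)                             # stand-in for the NIRS response
    for t in range(2, 500):
        y[t] = 0.5 * y[t - 1] + 0.3 * u[t - 1] + 0.01 * rng.standard_normal()
    print(kalman_arx(y, u)[-1])                   # approx [-0.5, 0, 0.3, 0]
    ```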

  5. Retrospective Analog Year Analyses Using NASA Satellite Precipitation and Soil Moisture Data to Improve USDA's World Agricultural Supply and Demand Estimates

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Shannon, H.

    2010-12-01

    The USDA World Agricultural Outlook Board (WAOB) coordinates the development of the monthly World Agricultural Supply and Demand Estimates (WASDE) for the U.S. and major foreign producing countries. Given the significant effect of weather on crop progress, conditions, and production, WAOB prepares frequent agricultural weather assessments in the Global Agricultural Decision Support Environment (GLADSE). Because the timing of precipitation is often as important as the amount in its effect on crop production, WAOB frequently examines precipitation time series to estimate crop productivity. An effective method for such assessment is the use of analog year comparisons, in which precipitation time series, based on surface weather stations, from several historical years are compared with the time series from the current year. Once analog years are identified, crop yields can be estimated for the current season based on observed yields from the analog years, because of the similarities in the precipitation patterns. In this study, NASA satellite precipitation and soil moisture time series are used to identify analog years. Given that soil moisture often has a more direct effect than precipitation on crop water availability, soil moisture time series could be more effective than precipitation time series in identifying years with similar crop yields. Retrospective analyses of analogs will be conducted to determine any reduction in the level of uncertainty in identifying analog years, and any reduction in false negatives or false positives. The comparison of analog years could potentially be improved by quantifying the selection of analogs instead of relying on the current visual inspection method; various quantification approaches are currently being evaluated. This study is part of a larger effort to improve WAOB estimates by integrating NASA remote sensing soil moisture observations and research results into GLADSE, including (1) the integration of the Land Parameter Retrieval Model (LPRM) soil moisture algorithm for operational production and (2) the assimilation of LPRM soil moisture into the USDA Environmental Policy Integrated Climate (EPIC) crop model.

  6. The current economic burden of illness of osteoporosis in Canada

    PubMed Central

    Burke, N.; Von Keyserlingk, C.; Leslie, W. D.; Morin, S. N.; Adachi, J. D.; Papaioannou, A.; Bessette, L.; Brown, J. P.; Pericleous, L.; Tarride, J.

    2016-01-01

    Summary We estimate the current burden of illness of osteoporosis in Canada is double ($4.6 billion) our previous estimates ($2.3 billion) due to improved data capture of the multiple encounters and services that accompany a fracture: emergency room, admissions to acute and step-down non-acute institutions, rehabilitation, home-assisted or long-term residency support. Introduction We previously estimated the economic burden of illness of osteoporosis-attributable fractures in Canada for the year 2008 to be $2.3 billion in the base case and as much as $3.9 billion. The aim of this study is to update the estimate of the economic burden of illness for osteoporosis-attributable fractures for Canada based on newly available home care and long-term care (LTC) data. Methods Multiple national databases were used for the fiscal-year ending March 31, 2011 (FY 2010/2011) for acute institutional care, emergency visits, day surgery, secondary admissions for rehabilitation, and complex continuing care, as well as national dispensing data for osteoporosis medications. Gaps in national data were supplemented by provincial and community survey data. Osteoporosis-attributable fractures for Canadians age 50+ were identified by ICD-10-CA codes. Costs were expressed in 2014 dollars. Results In FY 2010/2011, the number of osteoporosis-attributable fractures was 131,443 resulting in 64,884 acute care admissions and 983,074 acute hospital days. Acute care costs were $1.5 billion, an 18 % increase since 2008. The cost of LTC was 33.4 times the previous estimate ($31 million versus $1.03 billion) because of improved data capture. The cost for rehabilitation and secondary admissions increased 3.4 fold, while drug costs decreased 19 %. The overall cost of osteoporosis was over $4.6 billion, an increase of 83 % from the 2008 estimate. Conclusion Since the 2008 estimate, new Canadian data on home care and LTC are available which provided a better estimate of the burden of osteoporosis in Canada. This suggests that our previous estimates were seriously underestimated. PMID:27166680

  7. The current economic burden of illness of osteoporosis in Canada.

    PubMed

    Hopkins, R B; Burke, N; Von Keyserlingk, C; Leslie, W D; Morin, S N; Adachi, J D; Papaioannou, A; Bessette, L; Brown, J P; Pericleous, L; Tarride, J

    2016-10-01

    We estimate the current burden of illness of osteoporosis in Canada is double ($4.6 billion) our previous estimates ($2.3 billion) due to improved data capture of the multiple encounters and services that accompany a fracture: emergency room, admissions to acute and step-down non-acute institutions, rehabilitation, home-assisted or long-term residency support. We previously estimated the economic burden of illness of osteoporosis-attributable fractures in Canada for the year 2008 to be $2.3 billion in the base case and as much as $3.9 billion. The aim of this study is to update the estimate of the economic burden of illness for osteoporosis-attributable fractures for Canada based on newly available home care and long-term care (LTC) data. Multiple national databases were used for the fiscal-year ending March 31, 2011 (FY 2010/2011) for acute institutional care, emergency visits, day surgery, secondary admissions for rehabilitation, and complex continuing care, as well as national dispensing data for osteoporosis medications. Gaps in national data were supplemented by provincial and community survey data. Osteoporosis-attributable fractures for Canadians age 50+ were identified by ICD-10-CA codes. Costs were expressed in 2014 dollars. In FY 2010/2011, the number of osteoporosis-attributable fractures was 131,443 resulting in 64,884 acute care admissions and 983,074 acute hospital days. Acute care costs were $1.5 billion, an 18 % increase since 2008. The cost of LTC was 33.4 times the previous estimate ($31 million versus $1.03 billion) because of improved data capture. The cost for rehabilitation and secondary admissions increased 3.4 fold, while drug costs decreased 19 %. The overall cost of osteoporosis was over $4.6 billion, an increase of 83 % from the 2008 estimate. Since the 2008 estimate, new Canadian data on home care and LTC are available which provided a better estimate of the burden of osteoporosis in Canada. This suggests that our previous estimates were seriously underestimated.

  8. Merging Satellite Precipitation Products for Improved Streamflow Simulations

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Massari, C.; Barbetta, S.; Camici, S.; Brocca, L.

    2017-12-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and the forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements to obtain an estimate of the precipitation fallen within the interval between two satellite overpasses. As a result, the nature of the measurement is different and complementary to that of classical precipitation products and could provide a valid perspective to substitute or improve current rainfall estimates. Therefore, we propose to merge SM2RAIN and the widely used TMPA 3B42RT product across Italy for a 6-year period (2010-2015) at daily/0.25° temporal/spatial scale. Two conceptually different merging techniques are compared to each other and evaluated in terms of different statistical metrics, including hit bias, threat score, false alarm rates, and missed rainfall volumes. The first is based on the maximization of the temporal correlation with a reference dataset, while the second is based on a Bayesian approach, which provides a probabilistic satellite precipitation estimate derived from the joint probability distribution of observations and satellite estimates. The merged precipitation products show better performance than the parent satellite-based products in terms of categorical statistics, as well as bias reduction and correlation coefficient, with the Bayesian approach being superior to the other methods. A case study in the Tiber river basin is also presented to discuss the performance of forcing a hydrological model with the merged satellite precipitation product to simulate streamflow time series.
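
    The Bayesian flavor of merging reduces, in the Gaussian-error special case, to a variance-weighted average of the two products; the toy sketch below assumes the error variances are known, whereas in practice they would be estimated against reference data:

    ```python
    import numpy as np

    # Posterior-mean merge of two noisy rainfall estimates of the same truth.
    def merge(p_a, p_b, var_a, var_b):
        w = var_b / (var_a + var_b)     # weight on product A (inverse-variance)
        return w * p_a + (1 - w) * p_b

    rng = np.random.default_rng(2)
    truth = rng.gamma(2.0, 3.0, 1000)               # synthetic daily rainfall, mm
    p_a = truth + rng.normal(0, 2.0, 1000)          # noisier product (3B42RT-like)
    p_b = truth + rng.normal(0, 1.0, 1000)          # better product (SM2RAIN-like)
    merged = merge(p_a, p_b, 4.0, 1.0)
    print([round(np.mean((x - truth) ** 2), 2) for x in (p_a, p_b, merged)])
    ```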

  9. Electrochemical state and internal variables estimation using a reduced-order physics-based model of a lithium-ion cell and an extended Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stetzel, KD; Aldrich, LL; Trimboli, MS

    2015-03-15

    This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. (C) 2014 Elsevier B.V. All rights reserved.
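
    The EKF machinery can be sketched with a deliberately simple equivalent-circuit stand-in for the paper's physics-based reduced-order model (Python/NumPy; the linear OCV curve, capacity, resistance, and noise covariances are illustrative assumptions):

    ```python
    import numpy as np

    Q_CELL, R0, DT = 2 * 3600.0, 0.05, 1.0   # capacity (C), ohmic R (ohm), step (s)
    ocv = lambda soc: 3.0 + 1.2 * soc        # assumed open-circuit voltage curve
    docv = lambda soc: 1.2                   # its derivative (measurement Jacobian)

    def ekf_soc(currents, voltages, soc0=0.8, p0=0.1, q=1e-7, r=1e-3):
        x, p, out = soc0, p0, []
        for i, v in zip(currents, voltages):
            x = x - i * DT / Q_CELL                  # predict (discharge positive)
            p = p + q
            h = docv(x)
            k = p * h / (h * p * h + r)              # Kalman gain
            x = x + k * (v - (ocv(x) - R0 * i))      # correct with voltage residual
            p = (1 - k * h) * p
            out.append(x)
        return np.array(out)

    t = np.arange(0, 1800, DT)
    i_true = np.full_like(t, 2.0)                    # constant 2 A discharge
    soc_true = 0.9 - np.cumsum(i_true * DT) / Q_CELL
    v_meas = ocv(soc_true) - R0 * i_true \
        + np.random.default_rng(3).normal(0, 0.005, len(t))
    print(ekf_soc(i_true, v_meas)[-1], soc_true[-1])  # converges despite wrong soc0
    ```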

  10. Predicting fundamental and realized distributions based on thermal niche: A case study of a freshwater turtle

    NASA Astrophysics Data System (ADS)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.

    2018-04-01

    Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generate SDMs: (i) correlative, which is based on species occurrences and environmental predictor layers, and (ii) process-based, which is constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species' niche. Predictions of correlative models approximate species' realized niches, while predictions of process-based models are more akin to species' fundamental niches. Here, we integrated predictions of the fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), each considering macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical assumption that a species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than just thermal tolerances.

  11. Cost effectiveness of a general practice chronic disease management plan for coronary heart disease in Australia.

    PubMed

    Chew, Derek P; Carter, Robert; Rankin, Bree; Boyden, Andrew; Egan, Helen

    2010-05-01

    The cost effectiveness of a general practice-based program for managing coronary heart disease (CHD) patients in Australia remains uncertain. We have explored this through an economic model. A secondary prevention program based on initial clinical assessment and 3-monthly review, optimisation of pharmacotherapies and lifestyle modification, supported by a disease registry and financial incentives for quality of care and outcomes achieved, was assessed in terms of its incremental cost effectiveness ratio (ICER), in Australian dollars per disability adjusted life year (DALY) prevented. Based on 2006 estimates, 263 487 DALYs were attributable to CHD in Australia. The proposed program would add $115 650 000 to the annual national health expenditure. Using an estimated 15% reduction in death and disability and a 40% estimated program uptake, the program's ICER is $8081 per DALY prevented. With more conservative estimates of effectiveness and uptake, estimates of up to $38 316 per DALY are observed in sensitivity analysis. Although innovation in CHD management promises improved future patient outcomes, many therapies and strategies proven to reduce morbidity and mortality are available today. A general practice-based program for the optimal application of current therapies is likely to be cost-effective and provide substantial and sustainable benefits to the Australian community.
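
    The headline figure follows from standard ICER arithmetic (inputs from the abstract; the published model clearly includes detail beyond this back-of-envelope version, so the result is indicative rather than exact):

    ```python
    # ICER = incremental cost / incremental benefit (DALYs prevented).
    annual_cost = 115_650_000       # added program cost, AU$/year
    dalys_chd = 263_487             # DALYs attributable to CHD (2006)
    effectiveness = 0.15            # assumed reduction in death and disability
    uptake = 0.40                   # assumed program uptake
    dalys_prevented = dalys_chd * effectiveness * uptake
    print(annual_cost / dalys_prevented)   # ~AU$7,300/DALY, same order as the reported $8,081
    ```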

  12. Multisite evaluation of APEX for water quality: II. Regional parameterization

    USDA-ARS?s Scientific Manuscript database

    Phosphorus (P) index assessment requires independent estimates of long-term average annual P loss from multiple locations, management practices, soils, and landscape positions. Because currently available measured data are insufficient, calibrated and validated process-based models have been propos...

  13. Carbon monoxide screen for signalized intersections : COSIM, version 4.0 - technical documentation.

    DOT National Transportation Integrated Search

    2013-06-01

    Illinois Carbon Monoxide Screen for Intersection Modeling (COSIM) Version 3.0 is a Windows-based computer : program currently used by the Illinois Department of Transportation (IDOT) to estimate worst-case carbon : monoxide (CO) concentrations near s...

  14. Uncertainties in Emissions Inputs for Near-Road Assessments

    EPA Science Inventory

    Emissions, travel demand, and dispersion models are all needed to obtain temporally and spatially resolved pollutant concentrations. Current methodology combines these three models in a bottom-up approach based on hourly traffic and emissions estimates, and hourly dispersion conc...

  15. Health-based ingestion exposure guidelines for Vibrio cholerae: Technical basis for water reuse applications.

    PubMed

    Watson, Annetta P; Armstrong, Anthony Q; White, George H; Thran, Brandolyn H

    2018-02-01

    U.S. military and allied contingency operations are increasingly occurring in locations with limited, unstable, or compromised fresh water supplies. Non-potable graywater reuse is currently under assessment as a viable means to increase mission sustainability while significantly reducing the resources, logistics, and attack vulnerabilities posed by transport of fresh water. Development of health-based (non-potable) exposure guidelines for the potential microbial components of graywater would provide a logical and consistent human-health basis for water reuse strategies. Such health-based strategies will support not only improved water security for contingency operations but also sustainable military operations. Dose-response assessment of Vibrio cholerae based on adult human oral exposure data was coupled with operational water exposure scenario parameters common to numerous military activities and then used to derive health risk-based water concentrations. The microbial risk assessment approach utilized human oral-dose V. cholerae studies from the open literature. Selected studies focused on gastrointestinal illness associated with experimental infection by the specific V. cholerae serogroups most often associated with epidemics and pandemics (O1 and O139). Nonlinear dose-response model analyses estimated V. cholerae effective doses (EDs) aligned with gastrointestinal illness severity categories characterized by diarrheal purge volume. The EDs and water exposure assumptions were used to derive Risk-Based Water Concentrations (CFU/100 mL) for mission-critical illness severity levels over a range of water use activities common to military operations. Human dose-response studies, data, and analyses indicate that ingestion exposures at the estimated ED1 (50 CFU) are unlikely to be associated with diarrheal illness, while ingestion exposures at the lower limit (200 CFU) of the estimated ED10 are not expected to result in a level of diarrheal illness associated with degraded individual capability. The current analysis indicates that the estimated ED20 (approximately 1000 CFU) represents initiation of a more advanced stage of diarrheal illness associated with clinical care. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
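
    Effective doses of this kind are read off a fitted dose-response curve. The sketch below uses a beta-Poisson form with placeholder parameters chosen only so the outputs land near the reported ED magnitudes; they are not the paper's fitted values:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Beta-Poisson dose-response: P(ill) = 1 - (1 + dose/beta)^(-alpha).
    alpha, beta = 4.0, 2.0e4      # hypothetical parameters, not the fitted ones
    p_ill = lambda dose: 1.0 - (1.0 + dose / beta) ** (-alpha)

    def effective_dose(p):
        """Dose (CFU) at which the probability of illness equals p."""
        return brentq(lambda d: p_ill(d) - p, 1e-6, 1e9)

    for p in (0.01, 0.10, 0.20):
        print(f"ED{int(p * 100)} ~ {effective_dose(p):,.0f} CFU")
    # ED1 ~ 50 CFU and ED20 ~ 1,100 CFU, the magnitudes quoted above
    ```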

  16. Demographic and traditional knowledge perspectives on the current status of Canadian polar bear subpopulations.

    PubMed

    York, Jordan; Dowsley, Martha; Cornwell, Adam; Kuc, Miroslaw; Taylor, Mitchell

    2016-05-01

    Subpopulation growth rates and the probability of decline at current harvest levels were determined for 13 subpopulations of polar bears (Ursus maritimus) that are within or shared with Canada based on mark-recapture estimates of population numbers and vital rates, and harvest statistics using population viability analyses (PVA). Aboriginal traditional ecological knowledge (TEK) on subpopulation trend agreed with the seven stable/increasing results and one of the declining results, but disagreed with PVA status of five other declining subpopulations. The decline in the Baffin Bay subpopulation appeared to be due to over-reporting of harvested numbers from outside Canada. The remaining four disputed subpopulations (Southern Beaufort Sea, Northern Beaufort Sea, Southern Hudson Bay, and Western Hudson Bay) were all incompletely mark-recapture (M-R) sampled, which may have biased their survival and subpopulation estimates. Three of the four incompletely sampled subpopulations were PVA identified as nonviable (i.e., declining even with zero harvest mortality). TEK disagreement was nonrandom with respect to M-R sampling protocols. Cluster analysis also grouped subpopulations with ambiguous demographic and harvest rate estimates separately from those with apparently reliable demographic estimates based on PVA probability of decline and unharvested subpopulation growth rate criteria. We suggest that the correspondence between TEK and scientific results can be used to improve the reliability of information on natural systems and thus improve resource management. Considering both TEK and scientific information, we suggest that the current status of Canadian polar bear subpopulations in 2013 was 12 stable/increasing and one declining (Kane Basin). We do not find support for the perspective that polar bears within or shared with Canada are currently in any sort of climate crisis. We suggest that monitoring the impacts of climate change (including sea ice decline) on polar bear subpopulations should be continued and enhanced and that adaptive management practices are warranted.

  17. The Magnitude of Mortality from Ischemic Heart Disease Attributed to Occupational Factors in Korea - Attributable Fraction Estimation Using Meta-analysis.

    PubMed

    Ha, Jaehyeok; Kim, Soo-Geun; Paek, Domyung; Park, Jungsun

    2011-03-01

    Ischemic heart disease (IHD) is a major cause of death in Korea and is known to result from several occupational factors. This study attempted to estimate the current magnitude of IHD mortality due to occupational factors in Korea. After selecting occupational risk factors through a literature review, we calculated attributable fractions (AFs) from relative risks and exposure data for each factor. Relative risks were estimated using meta-analysis based on published research. Exposure data were collected from the 2006 Survey of Korean Working Conditions. Finally, we estimated 2006 occupation-related IHD mortality. For the factors considered, we estimated the following relative risks: noise 1.06, environmental tobacco smoke 1.19 (men) and 1.22 (women), shift work 1.12, and low job control 1.15 (men) and 1.08 (women). Combined AFs of those factors for IHD were estimated at 9.29% (0.3-18.51%) in men and 5.78% (-7.05-19.15%) in women. Based on these fractions, Korea's 2006 death toll from occupational IHD between the ages of 15 and 69 was calculated at 353 in men (total 3,804) and 72 in women (total 1,246). We estimated the occupational IHD mortality of Korea with updated data and more relevant evidence. Despite the efforts to obtain reliable estimates, there were many assumptions and limitations that must be overcome. Future research based on more precise design and reliable evidence is required for more accurate estimates.
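
    The combined AF computation applies Levin's formula per factor and combines factors multiplicatively; the sketch below uses the men's relative risks from the abstract but assumed exposure prevalences, so its output only approximates the reported 9.29%:

    ```python
    # Levin's formula: AF = p(RR - 1) / (p(RR - 1) + 1).
    def levin_af(p, rr):
        return p * (rr - 1.0) / (p * (rr - 1.0) + 1.0)

    factors = {                         # (assumed prevalence, RR for men)
        "noise": (0.20, 1.06),
        "environmental tobacco smoke": (0.15, 1.19),
        "shift work": (0.12, 1.12),
        "low job control": (0.25, 1.15),
    }
    surviving = 1.0
    for name, (p, rr) in factors.items():
        af = levin_af(p, rr)
        surviving *= 1.0 - af
        print(f"{name}: AF = {af:.2%}")
    print(f"combined AF = {1.0 - surviving:.2%}")   # same order as the reported 9.29%
    ```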

  18. TU-EF-204-01: Accurate Prediction of CT Tube Current Modulation: Estimating Tube Current Modulation Schemes for Voxelized Patient Models Used in Monte Carlo Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMillan, K; Bostani, M; McNitt-Gray, M

    2015-06-15

    Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not. Funding Support: NIH Grant R01-EB017095; Disclosures - Michael McNitt-Gray: Institutional Research Agreement, Siemens AG; Research Support, Siemens AG; Consultant, Flaherty Sensabaugh Bonasso PLLC; Consultant, Fulbright and Jaworski; Disclosures - Cynthia McCollough: Research Grant, Siemens Healthcare.

  19. Updated Estimates of the Remaining Market Potential of the U.S. ESCO Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, Peter H.; Carvallo Bodelon, Juan Pablo; Goldman, Charles A.

    The energy service company (ESCO) industry has a well-established track record of delivering energy and economic savings in the public and institutional buildings sector, primarily through the use of performance-based contracts. The ESCO industry often provides (or helps arrange) private sector financing to complete public infrastructure projects with little or no up-front cost to taxpayers. In 2014, total U.S. ESCO industry revenue was estimated at $5.3 billion. ESCOs expect total industry revenue to grow to $7.6 billion in 2017, a 13% annual growth rate from 2015-2017. Researchers at Lawrence Berkeley National Laboratory (LBNL) were asked by the U.S. Department of Energy Federal Energy Management Program (FEMP) to update and expand our estimates of the remaining market potential of the U.S. ESCO industry. We define remaining market potential as the aggregate amount of project investment by ESCOs that is technically possible based on the types of projects that ESCOs have historically implemented in the institutional, commercial, and industrial sectors, using ESCO estimates of current market penetration in those sectors. In this analysis, we report U.S. ESCO industry remaining market potential under two scenarios: (1) a base case and (2) a case “unfettered” by market, bureaucratic, and regulatory barriers. We find that there is significant remaining market potential for the U.S. ESCO industry under both the base and unfettered cases. For the base case, we estimate a remaining market potential of $92-$201 billion ($2016). We estimate a remaining market potential of $190-$333 billion for the unfettered case. It is important to note, however, that there is considerable uncertainty surrounding the estimates for both the base and unfettered cases.

  20. Prediction techniques for jet-induced effects in hover on STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Wardwell, Douglas A.; Kuhn, Richard E.

    1991-01-01

    Prediction techniques for jet-induced lift effects during hover are available, relatively easy to use, and produce adequate results for preliminary design work. Although deficiencies of the current method were found, it is still the best way to estimate jet-induced lift effects short of using computational fluid dynamics. Its use is summarized. The newly summarized method represents the first step toward the use of surface pressure data in an empirical method, as opposed to just balance data as in the current method, for calculating jet-induced effects. Although the new method is currently limited to flat plate configurations having two circular jets of equal thrust, it has the potential to predict jet-induced effects more accurately, including a means of estimating the pitching moment in hover. As this method was developed from a very limited amount of data, broader application requires the inclusion of new data on additional configurations. However, within this small database, the new method predicts jet-induced effects in hover better than the current method does.

  1. An In-Rush Current Suppression Technique for the Solid-State Transfer Switch System

    NASA Astrophysics Data System (ADS)

    Cheng, Po-Tai; Chen, Yu-Hsing

    More and more utility companies provide dual power feeders as a premium service for high power quality and reliability. To take advantage of this, the solid-state transfer switch (STS) is adopted to protect sensitive loads against voltage sags. However, the fast transfer process may cause in-rush current in the load-side transformer due to the DC offset in its magnetic flux when the load transfer is completed. The in-rush current can reach 2∼6 p.u., and it may trigger the over-current protections on the power feeder. This paper develops a flux estimation scheme and a thyristor gating scheme based on the impulse commutation bridge STS (ICBSTS) to minimize the DC offset in the magnetic flux. By sensing the line voltages of both feeders, the flux estimator can predict the peak transient flux linkage at the moment of load transfer and evaluate a suitable moment for the transfer to minimize the in-rush current. Laboratory test results are presented to validate the performance of the proposed system.
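
    The heart of the flux estimation is that flux linkage is the time integral of the winding voltage, so integrating the new feeder's voltage yields the prospective flux, and the gating instant can be chosen where it matches the trapped flux. A toy single-phase version follows (with an assumed residual flux; the ICB commutation details are not modeled):

    ```python
    import numpy as np

    f, dt = 60.0, 1e-6
    t = np.arange(0, 2 / f, dt)                    # two line cycles
    v = np.sqrt(2) * np.cos(2 * np.pi * f * t)     # new feeder voltage, p.u.
    flux = np.cumsum(v) * dt                       # prospective flux linkage
    flux -= flux.mean()                            # remove integration offset
    flux_peak = np.sqrt(2) / (2 * np.pi * f)       # steady-state flux amplitude
    residual = 0.3 * flux_peak                     # assumed flux trapped at interruption
    k = np.argmin(np.abs(flux - residual))         # best-matching gating instant
    print(f"gate at t = {t[k] * 1e3:.2f} ms after the voltage-peak reference")
    ```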

  2. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.

    2015-12-01

    Our study describes complications that angular direct-ionization events introduce into space error rate predictions. In particular, the prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data from a modern 28 nm SRAM-based device.
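
    For reference, the effective-LET approximation whose breakdown the paper discusses is the simple path-length correction below; it assumes a thin, planar sensitive volume, which is precisely the assumption that fails in modern-scale devices:

    ```python
    import numpy as np

    def effective_let(let_normal, theta_deg):
        """LET_eff = LET / cos(theta): a tilted ion crosses a longer path
        through a thin sensitive volume, depositing more charge."""
        return let_normal / np.cos(np.radians(theta_deg))

    for theta in (0, 30, 60):
        print(theta, round(effective_let(10.0, theta), 2))   # MeV*cm^2/mg
    ```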

  3. A digital combining-weight estimation algorithm for broadband sources with the array feed compensation system

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1994-01-01

    An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
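
    One standard way to estimate max-SNR combining weights for a common noise-like source in independent receiver noise is the principal eigenvector of the sample covariance matrix; the sketch below illustrates that general idea (synthetic gains and noise levels assumed), not necessarily the algorithm implemented in the compensation system:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_ch, n_samp = 7, 20000                        # seven channels, as in the system
    gains = rng.normal(1.0, 0.3, n_ch) * np.exp(1j * rng.uniform(0, 2 * np.pi, n_ch))
    source = rng.standard_normal(n_samp) + 1j * rng.standard_normal(n_samp)
    noise = 0.7 * (rng.standard_normal((n_ch, n_samp))
                   + 1j * rng.standard_normal((n_ch, n_samp)))
    x = np.outer(gains, source) + noise            # received broadband samples

    R = x @ x.conj().T / n_samp                    # sample covariance matrix
    w = np.linalg.eigh(R)[1][:, -1]                # principal eigenvector = weights
    y = w.conj() @ x                               # combined output
    alignment = abs(np.vdot(w, gains)) / (np.linalg.norm(w) * np.linalg.norm(gains))
    print(alignment)                               # ~1: weights match channel gains
    ```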

  4. Revisiting the social cost of carbon.

    PubMed

    Nordhaus, William D

    2017-02-14

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources.
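
    The abstract's growth rule tabulates directly (values in 2010 US$ per ton of CO2, extended from the 2015 value at the stated 3% real growth rate):

    ```python
    scc_2015, growth = 31.0, 0.03
    for year in (2015, 2030, 2050):
        print(year, round(scc_2015 * (1 + growth) ** (year - 2015), 1))
    # 2015 -> 31.0, 2030 -> ~48.3, 2050 -> ~87.2
    ```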

  5. Revisiting the social cost of carbon

    NASA Astrophysics Data System (ADS)

    Nordhaus, William D.

    2017-02-01

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources.

  6. Health economics in drug development: efficient research to inform healthcare funding decisions.

    PubMed

    Hall, Peter S; McCabe, Christopher; Brown, Julia M; Cameron, David A

    2010-10-01

    In order to decide whether a new treatment should be used in patients, a robust estimate of efficacy and toxicity is no longer sufficient. As a result of increasing healthcare costs across the globe, healthcare payers and providers now seek estimates of cost-effectiveness as well. Most trials currently being designed still only consider the need for prospective efficacy and toxicity data during the development life-cycle of a new intervention. Hence the cost-effectiveness estimates are inevitably less precise than the clinical data on which they are based. Health economists are developing methods based on decision theory that can contribute to the design of clinical trials so that the trials more effectively lead to better informed drug funding decisions on the basis of cost-effectiveness in addition to clinical outcomes. There is an opportunity to apply these techniques prospectively in the design of future clinical trials. This article describes the problems encountered by those responsible for drug reimbursement decisions as a consequence of the current drug development pathway. The potential for decision theoretic methods to help overcome these problems is introduced and potential obstacles to implementation are highlighted. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Real-time yield estimation based on deep learning

    NASA Astrophysics Data System (ADS)

    Rahnemoonfar, Maryam; Sheppard, Clay

    2017-05-01

    Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation, based on manual counting of fruits, is a very time-consuming and expensive process, and it is not practical for large fields. Robotic systems, including Unmanned Aerial Vehicles (UAV) and Unmanned Ground Vehicles (UGV), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, large volumes of data have been gathered from agricultural fields; however, efficient analysis of those data is still a challenging task. Computer vision approaches currently face different challenges in the automatic counting of fruits or flowers, including occlusion caused by leaves, branches or other fruits, variance in natural illumination, and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination, and scale. Experimental results in comparison to the state of the art show the effectiveness of our algorithm.
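
    A minimal sketch of the counting-by-regression idea in PyTorch is shown below: a small convolutional network maps an image to a scalar fruit count and is trained with a mean-squared-error loss. This illustrates the general approach only; the paper's actual architecture and training data are not reproduced here, and the network, batch, and counts are toy stand-ins.

    ```python
    # A minimal convolutional counting network: image -> scalar count.
    import torch
    import torch.nn as nn

    class CountNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),          # robust to input size
            )
            self.head = nn.Linear(64, 1)          # scalar count estimate

        def forward(self, x):
            return self.head(self.features(x).flatten(1)).squeeze(1)

    model = CountNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Dummy batch: 8 RGB images with known counts (stand-ins for field imagery).
    images = torch.rand(8, 3, 128, 128)
    counts = torch.randint(0, 50, (8,)).float()

    opt.zero_grad()
    loss = loss_fn(model(images), counts)
    loss.backward()
    opt.step()
    print(f"training loss: {loss.item():.2f}")
    ```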

  8. Heterogeneous Rates of Molecular Evolution and Diversification Could Explain the Triassic Age Estimate for Angiosperms.

    PubMed

    Beaulieu, Jeremy M; O'Meara, Brian C; Crane, Peter; Donoghue, Michael J

    2015-09-01

    Dating analyses based on molecular data imply that crown angiosperms existed in the Triassic, long before their undisputed appearance in the fossil record in the Early Cretaceous. Following a re-analysis of the age of angiosperms using updated sequences and fossil calibrations, we use a series of simulations to explore the possibility that the older age estimates are a consequence of (i) major shifts in the rate of sequence evolution near the base of the angiosperms and/or (ii) the representative taxon sampling strategy employed in such studies. We show that both of these factors do tend to yield substantially older age estimates. These analyses do not prove that younger age estimates based on the fossil record are correct, but they do suggest caution in accepting the older age estimates obtained using current relaxed-clock methods. Although we have focused here on the angiosperms, we suspect that these results will shed light on dating discrepancies in other major clades. ©The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Occupational cancer in Britain

    PubMed Central

    Van Tongeren, Martie; Jimenez, Araceli S; Hutchings, Sally J; MacCalman, Laura; Rushton, Lesley; Cherrie, John W

    2012-01-01

    To estimate the current occupational cancer burden due to past exposures in Britain, estimates of the number of exposed workers at different levels are required, as well as risk estimates of cancer due to the exposures. This paper describes the methods and results for estimating the historical exposures. All occupational carcinogens or exposure circumstances classified by the International Agency for Research on Cancer as definite or probable human carcinogens and potentially to be found in British workplaces over the past 20–40 years were included in this study. Estimates of the number of people exposed by industrial sector were based predominantly on two sources of data, the CARcinogen EXposure (CAREX) database and the UK Labour Force Survey. Where possible, multiple and overlapping exposures were taken into account. Dose–response risk estimates were generally not available in the epidemiological literature for the cancer–exposure pairs in this study, and none of the sources available for obtaining the numbers exposed provided data by different levels of exposure. Industrial sectors were therefore assigned using expert judgement to 'higher'- and 'lower'-exposure groups based on the similarity of exposure to the population in the key epidemiological studies from which risk estimates had been selected. Estimates of historical exposure prevalence were obtained for 41 carcinogens or occupational circumstances. These include exposures to chemicals and metals, combustion products, other mixtures or groups of chemicals, mineral and biological dusts, physical agents and work patterns, as well as occupations and industries that have been associated with increased risk of cancer, but for which the causative agents are unknown. There were more than half a million workers exposed to each of six carcinogens (radon, solar radiation, crystalline silica, mineral oils, non-arsenical insecticides and 2,3,7,8-tetrachlorodibenzo-p-dioxin); other agents to which a large number of workers are exposed included benzene, diesel engine exhaust and environmental tobacco smoke. The study has highlighted several industrial sectors with large proportions of workers potentially exposed to multiple carcinogens. The relevant available data have been used to generate estimates of the prevalence of past exposure to occupational carcinogens to enable the occupational cancer burden in Britain to be estimated. These data are considered adequate for the present purpose, but new data on the prevalence and intensity of current occupational exposure to carcinogens should be collected to ensure that future policy decisions be based on reliable evidence. PMID:22710674

  10. Modeling and Optimization for Morphing Wing Concept Generation

    NASA Technical Reports Server (NTRS)

    Skillen, Michael D.; Crossley, William A.

    2007-01-01

    This report consists of two major parts: 1) the approach to develop morphing wing weight equations, and 2) the approach to size morphing aircraft. Combined, these techniques allow the morphing aircraft to be sized with estimates of the morphing wing weight that are more credible than estimates currently available; aircraft sizing results prior to this study incorporated morphing wing weight estimates based on general heuristics for fixed-wing flaps (a comparable "morphing" component) but, in general, these results were unsubstantiated. This report will show that the method of morphing wing weight prediction does, in fact, drive the aircraft sizing code to different results and that accurate morphing wing weight estimates are essential to credible aircraft sizing results.

  11. Worldwide F(ST) estimates relative to five continental-scale populations.

    PubMed

    Steele, Christopher D; Court, Denise Syndercombe; Balding, David J

    2014-11-01

    We estimate the population genetics parameter FST (also referred to as the fixation index) from short tandem repeat (STR) allele frequencies, comparing many worldwide human subpopulations at approximately the national level with continental-scale populations. FST is commonly used to measure population differentiation, and is important in forensic DNA analysis to account for remote shared ancestry between a suspect and an alternative source of the DNA. We estimate FST comparing subpopulations with a hypothetical ancestral population, which is the approach most widely used in population genetics, and also compare a subpopulation with a sampled reference population, which is more appropriate for forensic applications. Both estimation methods are likelihood-based, in which FST is related to the variance of the multinomial-Dirichlet distribution for allele counts. Overall, we find low FST values, with posterior 97.5 percentiles < 3% when comparing a subpopulation with the most appropriate population, and even for inter-population comparisons we find FST < 5%. These are much smaller than single nucleotide polymorphism-based inter-continental FST estimates, and are also about half the magnitude of STR-based estimates from population genetics surveys that focus on distinct ethnic groups rather than a general population. Our findings support the use of FST up to 3% in forensic calculations, which corresponds to some current practice.
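
    Below is a minimal sketch of the likelihood machinery described in the abstract: under a common multinomial-Dirichlet (Balding-Nichols style) parameterization, the Dirichlet parameters for a subpopulation's allele counts are alpha_k = p_k(1-F)/F, where p_k are the reference-population frequencies, and FST is estimated by maximizing the resulting log-likelihood. The allele counts and frequencies are toy values, and the single-locus setup is a simplification of the paper's multi-locus analysis.

    ```python
    # Likelihood-based FST estimation under the multinomial-Dirichlet model,
    # for a single STR locus with toy data.
    import numpy as np
    from scipy.special import gammaln
    from scipy.optimize import minimize_scalar

    def neg_log_lik(fst, n, p):
        """Negative Dirichlet-multinomial log-likelihood, alpha = p*(1-F)/F."""
        a = p * (1.0 - fst) / fst
        return -(gammaln(a.sum()) - gammaln(n.sum() + a.sum())
                 + np.sum(gammaln(n + a) - gammaln(a)))

    # Toy data: allele counts in a subpopulation sample and reference freqs.
    n = np.array([12.0, 30.0, 45.0, 13.0])
    p = np.array([0.10, 0.35, 0.40, 0.15])

    res = minimize_scalar(neg_log_lik, bounds=(1e-4, 0.5), args=(n, p),
                          method="bounded")
    print(f"maximum-likelihood FST estimate: {res.x:.4f}")
    ```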

  12. Invited commentary: Lost in estimation--searching for alternatives to Markov chains to fit complex Bayesian models.

    PubMed

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

  13. Disaster debris estimation using high-resolution polarimetric stereo-SAR

    NASA Astrophysics Data System (ADS)

    Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki

    2016-10-01

    This paper addresses the problem of debris estimation which is one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtain this information are far from being optimal as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from the illumination from opposite directions and in different polarizations. By applying model based decomposition of the coherency matrix, only the odd bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris taken on a temporary debris management site in the tsunami affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of derived pile heights allows for a voxel-based estimation of debris volumes with a RMSE of 1099 m3. Advantages of the proposed method are fast computation time, and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information like DEM, topographic maps or GCPs.

  14. Betavoltaic battery performance: Comparison of modeling and experiment.

    PubMed

    Svintsov, A A; Krasnov, A A; Polikarpov, M A; Polyakov, A Y; Yakimov, E B

    2018-07-01

    A verification of the Monte Carlo simulation software for the prediction of short circuit current value is carried out using the Ni-63 source with the activity of 2.7 mCi/cm2 and converters based on Si p-i-n diodes and SiC and GaN Schottky diodes. A comparison of experimentally measured and calculated short circuit current values confirms the validity of the proposed modeling method, with the difference in the measured and calculated short circuit current values not exceeding 25% and the error in the predicted output power values being below 30%. Effects of the protective layer formed on the Ni-63 radioactive film and of the passivating film on the semiconductor converters on the energy deposited inside the converters are estimated. The maximum attainable betavoltaic cell parameters are estimated. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Investigation on the structural characterization of pulsed p-type porous silicon

    NASA Astrophysics Data System (ADS)

    Wahab, N. H. Abd; Rahim, A. F. Abd; Mahmood, A.; Yusof, Y.

    2017-08-01

    P-type porous silicon (PS) was successfully formed using electrochemical pulse etching (PC) and conventional direct current (DC) etching techniques. The PS was etched in a hydrofluoric acid (HF) based solution at a current density of J = 10 mA/cm2 for 30 minutes from a crystalline silicon wafer with (100) orientation. For the PC process, the current was supplied through a pulse generator with a 14 ms cycle time (T), comprising 10 ms on time (Ton) and 4 ms pause time (Toff). FESEM, EDX, AFM, and XRD were used to characterize the morphological properties of the PS. FESEM images showed that the pulsed PS (PPC) sample produced more uniform circular structures, with an estimated average pore size of 42.14 nm, compared to the DC porous (PDC) sample, with an estimated average pore size of 16.37 nm. The EDX spectra for both samples showed high Si content with minimal presence of oxide.

  16. A class of semiparametric cure models with current status data.

    PubMed

    Diao, Guoqing; Yuan, Ao

    2018-02-08

    Current status data occur in many biomedical studies where we only know whether the event of interest occurs before or after a particular time point. In practice, some subjects may never experience the event of interest, i.e., a certain fraction of the population is cured or is not susceptible to the event of interest. We consider a class of semiparametric transformation cure models for current status data with a survival fraction. This class includes both the proportional hazards and the proportional odds cure models as two special cases. We develop efficient likelihood-based estimation and inference procedures. We show that the maximum likelihood estimators for the regression coefficients are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in finite samples. For illustration, we provide an application of the models to a study on the calcification of the hydrogel intraocular lenses.

  17. An interactive program for pharmacokinetic modeling.

    PubMed

    Lu, D R; Mao, F

    1993-05-01

    A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C language for the Macintosh operating system and its high-level user interface. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the chi-square criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
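
    To illustrate the nonlinear fitting step, here is a minimal Python sketch: a one-compartment oral-absorption model fitted by the Levenberg-Marquardt method. The model form, parameter names, and synthetic data are illustrative choices, not PharmK's internals.

    ```python
    # Levenberg-Marquardt fit of a one-compartment PK model to toy data.
    import numpy as np
    from scipy.optimize import least_squares

    def conc(params, t):
        """C(t) for first-order absorption and elimination."""
        a, ka, ke = params          # a lumps dose, bioavailability, volume
        return a * (np.exp(-ke * t) - np.exp(-ka * t))

    t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12])                 # h
    c_obs = np.array([2.1, 3.4, 4.6, 4.4, 3.1, 2.2, 1.5, 0.7])   # mg/L

    # Crude initial estimates (PharmK obtains these by exponential stripping).
    x0 = np.array([5.0, 2.0, 0.2])

    fit = least_squares(lambda p: conc(p, t) - c_obs, x0, method="lm")
    print("fitted (a, ka, ke):", np.round(fit.x, 3))
    ```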

  18. Influence function based variance estimation and missing data issues in case-cohort studies.

    PubMed

    Mark, S D; Katki, H

    2001-12-01

    Recognizing that the efficiency in relative risk estimation for the Cox proportional hazards model is largely constrained by the total number of cases, Prentice (1986) proposed the case-cohort design in which covariates are measured on all cases and on a random sample of the cohort. Subsequent to Prentice, other methods of estimation and sampling have been proposed for these designs. We formalize an approach to variance estimation suggested by Barlow (1994), and derive a robust variance estimator based on the influence function. We consider the applicability of the variance estimator to all the proposed case-cohort estimators, and derive the influence function when known sampling probabilities in the estimators are replaced by observed sampling fractions. We discuss the modifications required when cases are missing covariate information. The missingness may occur by chance, and be completely at random; or may occur as part of the sampling design, and depend upon other observed covariates. We provide an adaptation of S-plus code that allows estimating influence function variances in the presence of such missing covariates. Using examples from our current case-cohort studies on esophageal and gastric cancer, we illustrate how our results are useful in solving design and analytic issues that arise in practice.

  19. Geophysical mapping of palsa peatland permafrost

    NASA Astrophysics Data System (ADS)

    Sjöberg, Y.; Marklund, P.; Pettersson, R.; Lyon, S. W.

    2014-10-01

    Permafrost peatlands are hydrological and biogeochemical hotspots in the discontinuous permafrost zone. Non-intrusive geophysical methods offer a possibility to map current permafrost spatial distributions in these environments. In this study, we estimate the depths to the permafrost table surface and base across a peatland in northern Sweden, using ground penetrating radar and electrical resistivity tomography. Seasonal thaw frost tables (at ~0.5 m depth), taliks (2.1-6.7 m deep), and the permafrost base (at ~16 m depth) could be detected. Higher occurrences of taliks were discovered at locations with a lower relative height of permafrost landforms, indicative of lower ground ice content at these locations. These results highlight the added value of combining geophysical techniques for assessing the spatial distribution of permafrost within the rapidly changing sporadic permafrost zone. For example, based on a simple thought experiment for the site considered here, we estimated that the thickest permafrost could thaw out completely within the next two centuries. There is thus a clear need to benchmark current permafrost distributions and characteristics, particularly in understudied regions of the pan-Arctic.

  20. Geophysical mapping of palsa peatland permafrost

    NASA Astrophysics Data System (ADS)

    Sjöberg, Y.; Marklund, P.; Pettersson, R.; Lyon, S. W.

    2015-03-01

    Permafrost peatlands are hydrological and biogeochemical hotspots in the discontinuous permafrost zone. Non-intrusive geophysical methods offer a possibility to map current permafrost spatial distributions in these environments. In this study, we estimate the depths to the permafrost table and base across a peatland in northern Sweden, using ground penetrating radar and electrical resistivity tomography. Seasonal thaw frost tables (at ~0.5 m depth), taliks (2.1-6.7 m deep), and the permafrost base (at ~16 m depth) could be detected. Higher occurrences of taliks were discovered at locations with a lower relative height of permafrost landforms, which is indicative of lower ground ice content at these locations. These results highlight the added value of combining geophysical techniques for assessing spatial distributions of permafrost within the rapidly changing sporadic permafrost zone. For example, based on a back-of-the-envelope calculation for the site considered here, we estimated that the permafrost could thaw completely within the next 3 centuries. Thus, there is a clear need to benchmark current permafrost distributions and characteristics, particularly in understudied regions of the pan-Arctic.

  1. Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate: PMP UNDER CLIMATE CHANGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimation for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparing them with the traditional estimates. PMP in the PNW will increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Moist track change tends to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation. Thus high-quality data of both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.

  2. Estimating animal population density using passive acoustics.

    PubMed

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-05-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. © 2012 The Authors. Biological Reviews © 2012 Cambridge Philosophical Society.
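
    A minimal sketch of a fixed-sensor cue-counting estimator, of the general form used in the passive-acoustics density estimation literature, is given below: detected cues are converted to animal density using the monitored area, detection probability, false-positive proportion, and an independently estimated cue (vocalization) rate. The function name and all numbers are toy assumptions, not values from the paper.

    ```python
    # Cue-counting density estimate from fixed omnidirectional sensors.
    import math

    def cue_count_density(n_cues, k_sensors, w_km, p_det, t_hours,
                          cue_rate_per_hour, false_pos=0.0):
        """Animals per km^2: n_cues counted over t_hours at k sensors,
        within radius w_km, with average detection probability p_det;
        cue_rate_per_hour converts cue density to animal density."""
        area = k_sensors * math.pi * w_km ** 2      # total monitored area
        cues_true = n_cues * (1.0 - false_pos)      # remove false positives
        return cues_true / (area * p_det * t_hours * cue_rate_per_hour)

    # Example: 2,000 clicks at 4 hydrophones over 24 h, 8 km radius,
    # detection probability 0.3, 5% false positives, 50 clicks/animal/hour.
    d = cue_count_density(2000, 4, 8.0, 0.3, 24, 50, false_pos=0.05)
    print(f"estimated density: {d:.4f} animals per km^2")
    ```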

  3. Estimating animal population density using passive acoustics

    PubMed Central

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-01-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144

  4. The role of global cloud climatologies in validating numerical models

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN

    1991-01-01

    Reliable estimates of the components of the surface radiation budget are important in studies of ocean-atmosphere interaction, land-atmosphere interaction, ocean circulation and in the validation of radiation schemes used in climate models. The methods currently under consideration must necessarily make certain assumptions regarding both the presence of clouds and their vertical extent. Because of the uncertainties in assumed cloudiness, all these methods involve perhaps unacceptable uncertainties. Here, a theoretical framework that avoids the explicit computation of cloud fraction and the location of cloud base in estimating the surface longwave radiation is presented. Estimates of the global surface downward fluxes and the oceanic surface net upward fluxes were made for four months (April, July, October and January) in 1985 to 1986. These estimates are based on a relationship between cloud radiative forcing at the top of the atmosphere and the surface obtained from a general circulation model. The radiation code is the version used in the UCLA/GLA general circulation model (GCM). The longwave cloud radiative forcing at the top of the atmosphere as obtained from Earth Radiation Budget Experiment (ERBE) measurements is used to compute the forcing at the surface by means of the GCM-derived relationship. This, along with clear-sky fluxes from the computations, yields maps of the downward longwave fluxes and net upward longwave fluxes at the surface. The calculated results are discussed and analyzed. The results are consistent with current meteorological knowledge and explainable on the basis of previous theoretical and observational works; therefore, it can be concluded that this method is applicable as one of the ways to obtain the surface longwave radiation fields from currently available satellite data.

  5. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  6. Estimating Discharge, Depth and Bottom Friction in Sand Bed Rivers Using Surface Currents and Water Surface Elevation Observations

    NASA Astrophysics Data System (ADS)

    Simeonov, J.; Czapiga, M. J.; Holland, K. T.

    2017-12-01

    We developed an inversion model for river bathymetry estimation using measurements of surface currents, water surface elevation slope and shoreline position. The inversion scheme is based on explicit velocity-depth and velocity-slope relationships derived from the along-channel momentum balance and mass conservation. The velocity-depth relationship requires the discharge value to quantitatively relate the depth to the measured velocity field. The ratio of the discharge and the bottom friction enter as a coefficient in the velocity-slope relationship and is determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. Completing the inversion requires an estimate of the bulk friction, which in the case of sand bed rivers is a strong function of the size of dune bedforms. We explored the accuracy of existing and new empirical closures that relate the bulk roughness to parameters such as the median grain size diameter, ratio of shear velocity to sediment fall velocity or the Froude number. For given roughness parameterization, the inversion solution is determined iteratively since the hydraulic roughness depends on the unknown depth. We first test the new hydraulic roughness parameterization using estimates of the Manning roughness in sand bed rivers based on field measurements. The coupled inversion and roughness model is then tested using in situ and remote sensing measurements of the Kootenai River east of Bonners Ferry, ID.
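
    The core of the inversion can be sketched as below: mass conservation gives depth from velocity and per-width discharge, and Manning's relation links velocity, depth, slope, and roughness, with a fixed-point iteration because roughness depends on the unknown depth. The surface-to-depth-averaged velocity ratio, the dune-roughness closure, and all numbers are assumptions for illustration, not the paper's parameterization.

    ```python
    # Depth-from-velocity inversion for a wide channel: invert Manning's
    # relation u = h^(2/3) S^(1/2) / n, iterating a depth-dependent roughness.
    import numpy as np

    u_surf = np.array([1.1, 1.4, 1.6, 1.3])   # measured surface velocities, m/s
    S = 2.0e-4                                 # measured water-surface slope
    alpha = 0.85                               # depth-averaged/surface ratio
    B = 120.0                                  # channel width, m

    u = alpha * u_surf
    h = np.full_like(u, 2.0)                   # initial depth guess, m
    for _ in range(50):                        # fixed-point iteration
        # Hypothetical dune-roughness closure: roughness grows with depth.
        n = 0.025 + 0.01 * np.tanh(h / 5.0)
        h = (n * u / np.sqrt(S)) ** 1.5        # Manning inverted for depth

    q = h * u                                  # per-width discharge, m^2/s
    print("depths (m):", np.round(h, 2))
    print("section discharge (m^3/s):", round(float(np.mean(q) * B), 1))
    ```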

  7. Estimating millet production for famine early warning: An application of crop simulation modelling using satellite and ground-based data in Burkina Faso

    USGS Publications Warehouse

    Thornton, P. K.; Bowen, W. T.; Ravelo, A.C.; Wilkens, P. W.; Farmer, G.; Brock, J.; Brink, J. E.

    1997-01-01

    Early warning of impending poor crop harvests in highly variable environments can allow policy makers the time they need to take appropriate action to ameliorate the effects of regional food shortages on vulnerable rural and urban populations. Crop production estimates for the current season can be obtained using crop simulation models and remotely sensed estimates of rainfall in real time, embedded in a geographic information system that allows simple analysis of simulation results. A prototype yield estimation system was developed for the thirty provinces of Burkina Faso. It is based on CERES-Millet, a crop simulation model of the growth and development of millet (Pennisetum spp.). The prototype was used to estimate millet production in contrasting seasons and to derive production anomaly estimates for the 1986 season. Provincial yields simulated halfway through the growing season were generally within 15% of their final (end-of-season) values. Although more work is required to produce an operational early warning system of reasonable credibility, the methodology has considerable potential for providing timely estimates of regional production of the major food crops in countries of sub-Saharan Africa.

  8. A Spatial Method to Calculate Small-Scale Fisheries Extent

    NASA Astrophysics Data System (ADS)

    Johnson, A. F.; Moreno-Báez, M.; Giron-Nava, A.; Corominas, J.; Erisman, B.; Ezcurra, E.; Aburto-Oropeza, O.

    2016-02-01

    Despite global catch per unit effort having redoubled since the 1950s, the global fishing fleet is estimated to be twice the size that the oceans can sustainably support. In order to gauge the collateral impacts of fishing intensity, we must be able to estimate the spatial extent and amount of fishing vessels in the oceans. Methods that currently exist are built around electronic tracking and log book systems and generally focus on industrial fisheries. Spatial extent for small-scale fisheries therefore remains elusive for many small-scale fishing fleets, even though these fisheries land the same biomass for human consumption as industrial fisheries. Current methods are data-intensive and require extensive extrapolation when estimated across large spatial scales. We present an accessible, spatial method of calculating the extent of small-scale fisheries based on two simple measures that are available, or at least easily estimable, in even the most data-poor fisheries: the number of boats and the local coastal human population. We demonstrate this method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This method provides an important first step towards estimating the fishing extent of the small-scale fleet, globally.

  9. Thermal noise calculation method for precise estimation of the signal-to-noise ratio of ultra-low-field MRI with an atomic magnetometer.

    PubMed

    Yamashita, Tatsuya; Oida, Takenori; Hamada, Shoji; Kobayashi, Tetsuo

    2012-02-01

    In recent years, there has been considerable interest in developing an ultra-low-field magnetic resonance imaging (ULF-MRI) system using an optically pumped atomic magnetometer (OPAM). However, a precise estimation of the signal-to-noise ratio (SNR) of ULF-MRI has not been carried out. Conventionally, to calculate the SNR of an MR image, thermal noise, also called Nyquist noise, has been estimated by considering a resistor that is electrically equivalent to a biological-conductive sample and is connected in series to a pickup coil. However, this method has major limitations in that the receiver has to be a coil and that it cannot be applied directly to a system using OPAM. In this paper, we propose a method to estimate the thermal noise of an MRI system using OPAM. We calculate the thermal noise from the variance of the magnetic sensor output produced by current-dipole moments that simulate thermally fluctuating current sources in a biological sample. We assume that the random magnitude of the current dipole in each volume element of the biological sample is described by the Maxwell-Boltzmann distribution. The sensor output produced by each current-dipole moment is calculated either by an analytical formula or a numerical method based on the boundary element method. We validate the proposed method by comparing our results with those obtained by conventional methods that consider resistors connected in series to a pickup coil using single-layered sphere, multi-layered sphere, and realistic head models. Finally, we apply the proposed method to the ULF-MRI model using OPAM as the receiver with multi-layered sphere and realistic head models and estimate their SNR. Copyright © 2011 Elsevier Inc. All rights reserved.
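
    A heavily simplified Monte Carlo sketch of this idea follows: random current dipoles inside a conductive sphere produce a fluctuating field at a sensor, and the spread of the sensor output estimates the thermal noise. Unlike the paper, the sketch uses the unbounded-medium dipole field rather than a boundary-element solution, and the dipole-strength scale is an arbitrary stand-in for the Maxwell-Boltzmann magnitude.

    ```python
    # Monte Carlo thermal-noise sketch: random current dipoles in a sphere.
    import numpy as np

    rng = np.random.default_rng(0)
    MU0 = 4e-7 * np.pi                        # vacuum permeability, T*m/A
    N = 20000                                 # Monte Carlo dipole samples
    R_SPHERE = 0.09                           # conductor radius, m
    SENSOR = np.array([0.0, 0.0, 0.12])       # sensor position, m
    AXIS = np.array([0.0, 0.0, 1.0])          # sensitive axis (z)
    Q_RMS = 1e-9                              # per-axis dipole scale, A*m (arbitrary)

    # Uniform random dipole locations inside the sphere (rejection sampling).
    pts = rng.uniform(-R_SPHERE, R_SPHERE, (4 * N, 3))
    pts = pts[np.einsum("ij,ij->i", pts, pts) < R_SPHERE**2][:N]

    # Gaussian per-axis components give Maxwell-Boltzmann distributed magnitudes.
    Q = rng.normal(0.0, Q_RMS, (N, 3))

    R = SENSOR - pts                          # vectors from dipole to sensor
    r3 = np.linalg.norm(R, axis=1) ** 3
    B = MU0 / (4 * np.pi) * np.cross(Q, R) / r3[:, None]

    # Std of the axial field over samples: the per-dipole noise scale. Summing
    # variances over all volume elements would give the total thermal noise.
    print(f"per-dipole field std at sensor: {np.std(B @ AXIS):.3e} T")
    ```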

  10. Risk Estimation Modeling and Feasibility Testing for a Mobile eHealth Intervention for Binge Drinking Among Young People: The D-ARIANNA (Digital-Alcohol RIsk Alertness Notifying Network for Adolescents and young adults) Project.

    PubMed

    Carrà, Giuseppe; Crocamo, Cristina; Schivalocchi, Alessandro; Bartoli, Francesco; Carretta, Daniele; Brambilla, Giulia; Clerici, Massimo

    2015-01-01

    Binge drinking is common among young people, but relevant risk factors are often not recognized. eHealth apps, attractive for young people, may be useful to enhance awareness of this problem. We aimed at developing a current risk estimation model for binge drinking, incorporated into an eHealth app--D-ARIANNA (Digital-Alcohol RIsk Alertness Notifying Network for Adolescents and young adults)--for young people. A longitudinal approach with phase 1 (risk estimation), phase 2 (design), and phase 3 (feasibility) was followed. Risk/protective factors identified from the literature were used to develop a current risk estimation model for binge drinking. Relevant odds ratios were subsequently pooled through meta-analytic techniques with a random-effects model, deriving weighted estimates to be introduced in a final model. A set of questions, matching the identified risk factors, was nested in a questionnaire and assessed for wording, content, and acceptability in focus groups involving 110 adolescents and young adults. Ten risk factors (5 modifiable) and 2 protective factors showed significant associations with binge drinking and were included in the model. Their weighted coefficients ranged between -0.71 (school proficiency) and 1.90 (cannabis use). The model, nested in an eHealth app questionnaire, provides an overall current risk score in percent, accompanied by appropriate images. Factors that contribute most are shown in summary messages. Minor changes were made after the focus group reviews. Most of the subjects (74%) regarded the eHealth app as helpful for assessing binge drinking risk. We were able to produce an evidence-based eHealth app for young people that evaluates current risk for binge drinking. Its effectiveness will be tested in a large trial.
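
    Purely for illustration, a weighted risk model of this kind can be turned into a percent score as sketched below. The factor names, weights, intercept, and logistic mapping are all hypothetical; the actual D-ARIANNA coefficients and scoring rule are not given in this summary.

    ```python
    # Hypothetical weighted-factor risk score mapped to a percent via a logistic.
    import math

    # (weight, present?) pairs for a hypothetical respondent; weights mimic
    # the reported range, e.g. cannabis use 1.90, school proficiency -0.71.
    answers = {
        "cannabis_use":         (1.90, True),
        "peer_drinking":        (0.95, True),
        "school_proficiency":   (-0.71, False),   # protective factor
        "sports_participation": (-0.40, True),    # protective factor
    }
    intercept = -1.0                               # hypothetical baseline

    score = intercept + sum(w for w, present in answers.values() if present)
    risk_pct = 100.0 / (1.0 + math.exp(-score))    # logistic -> percent
    print(f"current binge-drinking risk: {risk_pct:.0f}%")
    ```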

  11. Data-based hybrid tension estimation and fault diagnosis of cold rolling continuous annealing processes.

    PubMed

    Liu, Qiang; Chai, Tianyou; Wang, Hong; Qin, Si-Zhao Joe

    2011-12-01

    The continuous annealing process line (CAPL) of cold rolling is an important unit to improve the mechanical properties of steel strips in steel making. In continuous annealing processes, strip tension is an important factor, which indicates whether the line operates steadily. Abnormal tension profile distribution along the production line can lead to strip break and roll slippage. Therefore, it is essential to estimate the whole tension profile in order to prevent the occurrence of faults. However, in real annealing processes, only a limited number of strip tension sensors are installed along the machine direction. Since the effects of strip temperature, gas flow, bearing friction, strip inertia, and roll eccentricity can lead to nonlinear tension dynamics, it is difficult to apply the first-principles induced model to estimate the tension profile distribution. In this paper, a novel data-based hybrid tension estimation and fault diagnosis method is proposed to estimate the unmeasured tension between two neighboring rolls. The main model is established by an observer-based method using a limited number of measured tensions, speeds, and currents of each roll, where the tension error compensation model is designed by applying neural networks principal component regression. The corresponding tension fault diagnosis method is designed using the estimated tensions. Finally, the proposed tension estimation and fault diagnosis method was applied to a real CAPL in a steel-making company, demonstrating the effectiveness of the proposed method.

  12. A NATIONAL COASTAL ASSESSMENT OF COASTAL SEDIMENT CONDITION

    EPA Science Inventory

    One element of the Environmental Monitoring and Assessment Program's National Coastal Assessment is to estimate the current status, extent, changes and trends in the condition of the Nation's coastal sediments on a national basis. Based on NCA monitoring activities from 1999-2001...

  13. Innovative methods for calculation of freeway travel time using limited data : executive summary report.

    DOT National Transportation Integrated Search

    2008-08-01

    ODOT's policy for Dynamic Message Sign utilization requires travel time(s) to be displayed as a default message. The current method of calculating travel time involves a workstation operator estimating the travel time based upon observati...

  14. Model-Based, Noninvasive Monitoring of Intracranial Pressure

    DTIC Science & Technology

    2012-10-01

    A noninvasive intracranial pressure (nICP) estimate requires simultaneous measurement of the waveforms of arterial blood pressure (ABP), obtained via radial artery catheter or finger... The initial database comprises subarachnoid hemorrhage patients in neuro-intensive care at our partner hospital, for whom ICP, ABP and CBFV are currently

  15. Head movement compensation in real-time magnetoencephalographic recordings.

    PubMed

    Little, Graham; Boe, Shaun; Bardouille, Timothy

    2014-01-01

    Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real-time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source-level in real-time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps:

    • Data acquisition
    • Head position estimation
    • Source localization
    • Real-time source estimation

    This work explains the technical details and validates each of these steps.

  16. Time-to-impact estimation in passive missile warning systems

    NASA Astrophysics Data System (ADS)

    Şahıngıl, Mehmet Cihan

    2017-05-01

    A missile warning system can detect the incoming missile threat(s) and automatically cue the other Electronic Attack (EA) systems in the suit, such as Directed Infrared Counter Measure (DIRCM) system and/or Counter Measure Dispensing System (CMDS). Most missile warning systems are currently based on passive sensor technology operating in either Solar Blind Ultraviolet (SBUV) or Midwave Infrared (MWIR) bands on which there is an intensive emission from the exhaust plume of the threatening missile. Although passive missile warning systems have some clear advantages over pulse-Doppler radar (PDR) based active missile warning systems, they show poorer performance in terms of time-to-impact (TTI) estimation which is critical for optimizing the countermeasures and also "passive kill assessment". In this paper, we consider this problem, namely, TTI estimation from passive measurements and present a TTI estimation scheme which can be used in passive missile warning systems. Our problem formulation is based on Extended Kalman Filter (EKF). The algorithm uses the area parameter of the threat plume which is derived from the used image frame.
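
    The geometry behind an area-based TTI estimate can be sketched as follows: for a constant-speed approach, apparent plume area grows as A ~ 1/range^2, which implies the process model Adot' = 1.5*Adot^2/A and the readout TTI = 2*A/Adot. The EKF below uses this illustrative formulation with toy noise levels and ground truth; it is not the paper's exact filter.

    ```python
    # EKF on plume area A and its rate Adot; TTI read out as 2*A/Adot.
    import numpy as np

    dt = 0.05                                 # frame interval, s
    R = np.array([[25.0]])                    # area measurement noise, px^2
    Qp = np.diag([1.0, 10.0])                 # process noise
    H = np.array([[1.0, 0.0]])                # we observe area only

    x = np.array([100.0, 10.0])               # state: [area, area rate]
    P = np.diag([100.0, 100.0])

    def simulate_area(k, tti0=4.0, a0=100.0):
        """True area for an impact tti0 seconds after frame 0 (toy truth)."""
        return a0 / (1.0 - k * dt / tti0) ** 2

    rng = np.random.default_rng(1)
    for k in range(40):
        # --- predict (nonlinear process model and its Jacobian) ---
        a, ad = x
        x = np.array([a + ad * dt, ad + 1.5 * ad**2 / a * dt])
        F = np.array([[1.0, dt],
                      [-1.5 * ad**2 / a**2 * dt, 1.0 + 3.0 * ad / a * dt]])
        P = F @ P @ F.T + Qp
        # --- update with a noisy area measurement ---
        z = simulate_area(k) + rng.normal(0.0, 5.0)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P

    print(f"estimated time-to-impact: {2.0 * x[0] / x[1]:.2f} s")
    ```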

  17. Geochemical Evidence for Calcification from the Drake Passage Time-series

    NASA Astrophysics Data System (ADS)

    Munro, D. R.; Lovenduski, N. S.; Takahashi, T.; Stephens, B. B.; Newberger, T.; Dierssen, H. M.; Randolph, K. L.; Freeman, N. M.; Bushinsky, S. M.; Key, R. M.; Sarmiento, J. L.; Sweeney, C.

    2016-12-01

    Satellite imagery suggests high particulate inorganic carbon within a circumpolar region north of the Antarctic Polar Front (APF), but in situ evidence for calcification in this region is sparse. Given the geochemical relationship between calcification and total alkalinity (TA), seasonal changes in surface concentrations of potential alkalinity (PA), which accounts for changes in TA due to variability in salinity and nitrate, can be used as a means to evaluate satellite-based calcification algorithms. Here, we use surface carbonate system measurements collected from 2002 to 2016 for the Drake Passage Time-series (DPT) to quantify rates of calcification across the Antarctic Circumpolar Current. We also use vertical PA profiles collected during two cruises across the Drake Passage in March 2006 and September 2009 to estimate the calcium carbonate to organic carbon export ratio. We find geochemical evidence for calcification both north and south of the APF with the highest rates observed north of the APF. Calcification estimates from the DPT are compared to satellite-based estimates and estimates based on hydrographic data from other regions around the Southern Ocean.
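
    The geochemical bookkeeping behind this approach can be sketched briefly: potential alkalinity (salinity-normalized total alkalinity plus nitrate) drops by 2 mol per mol of CaCO3 precipitated, so a seasonal PA drawdown implies a calcification rate. The mixed-layer depth, season length, and carbonate-system values below are assumptions for illustration, not DPT data.

    ```python
    # Seasonal potential-alkalinity drawdown converted to a calcification rate.
    def potential_alkalinity(ta, salinity, nitrate, s_ref=35.0):
        """PA in umol/kg: normalize TA to a reference salinity, add nitrate."""
        return (ta + nitrate) * s_ref / salinity

    pa_spring = potential_alkalinity(ta=2290.0, salinity=33.9, nitrate=24.0)
    pa_autumn = potential_alkalinity(ta=2275.0, salinity=34.1, nitrate=21.0)

    mld_m = 60.0                    # assumed mixed-layer depth, m
    rho = 1027.0                    # seawater density, kg/m^3
    days = 180.0                    # assumed growing-season length

    # CaCO3 production: half the PA drawdown, integrated over the mixed layer.
    dpa = pa_spring - pa_autumn                       # umol/kg over the season
    calc = 0.5 * dpa * 1e-6 * rho * mld_m / days      # mol CaCO3 m^-2 day^-1
    print(f"seasonal PA drawdown: {dpa:.1f} umol/kg")
    print(f"implied calcification: {calc*1e3:.2f} mmol CaCO3 m^-2 d^-1")
    ```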

  18. Availability and Distribution of Base Flow in Lower Honokohau Stream, Island of Maui

    USGS Publications Warehouse

    Fontaine, Richard A.

    2003-01-01

    Honokohau Stream is one of the few perennial streams in the Lahaina District of West Maui. Current Honokohau water-use practices often lead to conflicts among water users, which are most evident during periods of base flow. To better manage the resource, data are needed that describe the availability and distribution of base flow in lower Honokohau Stream and how base flow is affected by streamflow diversion and return-flow practices. Flow-duration discharges for percentiles ranging from 50 to 95 percent were estimated at 13 locations on lower Honokohau Stream using data from a variety of sources. These sources included (1) available U.S. Geological Survey discharge data, (2) published summaries of Maui Land & Pineapple Company, Inc. diversion and water development-tunnel data, (3) seepage run and low-flow partial-record discharge measurements made for this study, and (4) current (2003) water diversion and return-flow practices. These flow-duration estimates provide a detailed characterization of the distribution and availability of base flow in lower Honokohau Stream. Estimates of base-flow statistics indicate the significant effect of Honokohau Ditch diversions on flow in the stream. Eighty-six percent of the total flow upstream from the ditch is diverted from the stream. Immediately downstream from the diversion dam there is no flow in the stream 91.2 percent of the time, except for minor leakage through the dam. Flow releases at the Taro Gate, from Honokohau Ditch back into the stream, are inconsistent and were found to be less than the target release of 1.55 cubic feet per second on 9 of the 10 days on which measurements were made. Previous estimates of base-flow availability downstream from the Taro Gate release range from 2.32 to 4.6 cubic feet per second (1.5 to 3.0 million gallons per day). At the two principal sites where water is currently being diverted for agricultural use in the valley (MacDonald's and Chun's Dams), base flows of 2.32 cubic feet per second (1.5 million gallons per day) are available more than 95 percent of the time at MacDonald's Dam and 80 percent of the time at Chun's Dam. Base flows of 4.6 cubic feet per second (3.0 million gallons per day) are available 65 and 56 percent of the time, respectively. A base-flow water-accounting model was developed to estimate how flow-duration discharges for 13 sites on Honokohau Stream would change in response to a variety of flow release and diversion practices. A sample application of the model indicates that there is a 1 to 1 relation between changes in flow release rates at the Taro Gate and base flow upstream from MacDonald's Dam. At Chun's Dam the relation between Taro Gate releases and base flow varies with flow-duration percentiles. At the 95th and 60th percentiles, differences in base flow at Chun's Dam would equal about 50 and 90 percent of the change at the Taro Gate.

  19. Revisiting Boundary Perturbation Theory for Inhomogeneous Transport Problems

    DOE PAGES

    Favorite, Jeffrey A.; Gonzalez, Esteban

    2017-03-10

    Adjoint-based first-order perturbation theory is applied again to boundary perturbation problems. Rahnema developed a perturbation estimate that gives an accurate first-order approximation of a flux or reaction rate within a radioactive system when the boundary is perturbed. When the response of interest is the flux or leakage current on the boundary, the Roussopoulos perturbation estimate has long been used. The Rahnema and Roussopoulos estimates differ in one term. Our paper shows that the Rahnema and Roussopoulos estimates can be derived consistently, using different responses, from a single variational functional (due to Gheorghiu and Rahnema), resolving any apparent contradiction. In analytic test problems, Rahnema's estimate and the Roussopoulos estimate produce exact first derivatives of the response of interest when appropriately applied. We also present a realistic, nonanalytic test problem.

  20. Improved Satellite Estimation of Near-Surface Humidity Using Vertical Water Vapor Profile Information

    NASA Astrophysics Data System (ADS)

    Tomita, H.; Hihara, T.; Kubota, M.

    2018-01-01

    Near-surface air-specific humidity is a key variable in the estimation of air-sea latent heat flux and evaporation from the ocean surface. An accurate estimation over the global ocean is required for studies on global climate, air-sea interactions, and water cycles. Current remote sensing techniques are problematic and a major source of errors for flux and evaporation. Here we propose a new method to estimate surface humidity using satellite microwave radiometer instruments, based on a new finding about the relationship between multichannel brightness temperatures measured by satellite sensors, surface humidity, and vertical moisture structure. Satellite estimations using the new method were compared with in situ observations to evaluate this method, confirming that it could significantly improve satellite estimations with high impact on satellite estimation of latent heat flux. We recommend the adoption of this method for any satellite microwave radiometer observations.

  1. Data-Rate Estimation for Autonomous Receiver Operation

    NASA Technical Reports Server (NTRS)

    Tkacenko, A.; Simon, M. K.

    2005-01-01

    In this article, we present a series of algorithms for estimating the data rate of a signal whose admissible data rates are integer base, integer powered multiples of a known basic data rate. These algorithms can be applied to the Electra radio currently used in the Deep Space Network (DSN), which employs data rates having the above relationship. The estimation is carried out in an autonomous setting in which very little a priori information is assumed. It is done by exploiting an elegant property of the split symbol moments estimator (SSME), which is traditionally used to estimate the signal-to-noise ratio (SNR) of the received signal. By quantizing the assumed symbol-timing error or jitter, we present an all-digital implementation of the SSME which can be used to jointly estimate the data rate, SNR, and jitter. Simulation results presented show that these joint estimation algorithms perform well, even in the low SNR regions typically encountered in the DSN.
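
    A toy version of the split-symbol idea is sketched below: for each candidate symbol length, half-symbol sums are correlated; the correlation collapses once the candidate length spans symbol transitions, so the candidate maximizing the split-symbol SNR statistic indicates the data rate. This assumes perfect symbol timing (zero jitter), whereas the article jointly quantizes and estimates the jitter, and the actual Electra/SSME formulation is more elaborate.

    ```python
    # Data-rate classification via a split-symbol SNR statistic.
    import numpy as np

    rng = np.random.default_rng(7)
    BASE = 8                        # basic symbol length, samples
    TRUE_L = BASE * 2**2            # true symbol length: 32 samples
    N_SYM, SNR_LIN = 4000, 0.5      # low per-sample SNR, as in the DSN

    symbols = rng.choice([-1.0, 1.0], N_SYM)
    signal = np.repeat(symbols, TRUE_L)
    x = signal + rng.normal(0.0, 1.0 / np.sqrt(SNR_LIN), signal.size)

    def half_symbol_snr(x, L):
        """Split-symbol SNR estimate assuming symbol length L (samples)."""
        sym = x[: (x.size // L) * L].reshape(-1, L)
        y_plus = sym[:, : L // 2].sum(1)              # first-half sums
        y_minus = sym[:, L // 2 :].sum(1)             # second-half sums
        sig = np.mean(y_plus * y_minus)               # ~ half-symbol energy^2
        noise = 0.5 * np.mean((y_plus - y_minus) ** 2)
        return sig / noise

    candidates = [BASE * 2**k for k in range(6)]      # 8 ... 256 samples
    snrs = {L: half_symbol_snr(x, L) for L in candidates}
    best = max(snrs, key=snrs.get)
    print({L: round(s, 3) for L, s in snrs.items()})
    print(f"estimated symbol length: {best} samples (true: {TRUE_L})")
    ```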

  2. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
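
    A minimal sketch of the estimator's two ingredients, under simplifying assumptions (Gaussian kernel, no tied event times); all names are illustrative and this is not the paper's exact construction.

```python
import numpy as np

def kernel_nonmissing_prob(t_obs, nonmissing, bandwidth):
    """Nadaraya-Watson kernel estimate of P(censoring indicator non-missing | time),
    with a Gaussian kernel (a stand-in for the paper's kernel estimator)."""
    def pi_hat(t):
        w = np.exp(-0.5 * ((t - t_obs) / bandwidth) ** 2)
        return np.sum(w * nonmissing) / np.sum(w)
    return pi_hat

def ipw_cumulative_hazard(times, deltas, missing, bandwidth=1.0):
    """Inverse-probability-of-non-missingness weighted Nelson-Aalen estimator.
    times: event/censoring times; deltas: censoring indicators (1 = event), only
    meaningful where missing == False; assumes no tied times."""
    nonmissing = (~missing).astype(float)
    pi_hat = kernel_nonmissing_prob(times, nonmissing, bandwidth)
    order = np.argsort(times)
    n = len(times)
    hazard, curve = 0.0, []
    for rank, idx in enumerate(order):
        at_risk = n - rank
        if not missing[idx]:
            # observed indicators are up-weighted by 1 / estimated non-missingness
            hazard += (deltas[idx] / pi_hat(times[idx])) / at_risk
        curve.append((times[idx], hazard))
    return curve
```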

  3. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  4. Estimating Contraceptive Prevalence Using Logistics Data for Short-Acting Methods: Analysis Across 30 Countries.

    PubMed

    Cunningham, Marc; Bock, Ariella; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana

    2015-09-01

    Contraceptive prevalence rate (CPR) is a vital indicator used by country governments, international donors, and other stakeholders for measuring progress in family planning programs against country targets and global initiatives as well as for estimating health outcomes. Because of the need for more frequent CPR estimates than population-based surveys currently provide, alternative approaches for estimating CPRs are being explored, including using contraceptive logistics data. Using data from the Demographic and Health Surveys (DHS) in 30 countries, population data from the United States Census Bureau International Database, and logistics data from the Procurement Planning and Monitoring Report (PPMR) and the Pipeline Monitoring and Procurement Planning System (PipeLine), we developed and evaluated 3 models to generate country-level, public-sector contraceptive prevalence estimates for injectable contraceptives, oral contraceptives, and male condoms. Models included: direct estimation through existing couple-years of protection (CYP) conversion factors, bivariate linear regression, and multivariate linear regression. Model evaluation consisted of comparing the referent DHS prevalence rates for each short-acting method with the model-generated prevalence rate using multiple metrics, including mean absolute error and proportion of countries where the modeled prevalence rate for each method was within 1, 2, or 5 percentage points of the DHS referent value. For the methods studied, family planning use estimates from public-sector logistics data were correlated with those from the DHS, validating the quality and accuracy of current public-sector logistics data. Logistics data for oral and injectable contraceptives were significantly associated (P<.05) with the referent DHS values for both bivariate and multivariate models. For condoms, however, that association was only significant for the bivariate model. With the exception of the CYP-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. © Cunningham et al.
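
    The direct CYP-based model can be sketched as follows; the conversion factors shown are the commonly cited values (for example, 120 condoms or 15 pill cycles per couple-year of protection) and are used here purely for illustration.

```python
# Minimal sketch of the direct CYP-based model: convert quantities distributed
# (from logistics data) into couple-years of protection (CYP), then into a
# public-sector prevalence estimate. Factor values are illustrative conventions.

CYP_FACTORS = {"condoms": 120, "pill_cycles": 15, "injections": 4}

def cyp_prevalence(quantity_distributed, method, women_reproductive_age):
    """Public-sector prevalence estimate (percentage points) for one method."""
    cyp = quantity_distributed / CYP_FACTORS[method]
    return 100.0 * cyp / women_reproductive_age

# Example: 3 million pill cycles distributed, 5 million women of reproductive age.
print(round(cyp_prevalence(3_000_000, "pill_cycles", 5_000_000), 2), "%")  # -> 4.0 %
```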

  5. Estimating Contraceptive Prevalence Using Logistics Data for Short-Acting Methods: Analysis Across 30 Countries

    PubMed Central

    Cunningham, Marc; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana

    2015-01-01

    Background: Contraceptive prevalence rate (CPR) is a vital indicator used by country governments, international donors, and other stakeholders for measuring progress in family planning programs against country targets and global initiatives as well as for estimating health outcomes. Because of the need for more frequent CPR estimates than population-based surveys currently provide, alternative approaches for estimating CPRs are being explored, including using contraceptive logistics data. Methods: Using data from the Demographic and Health Surveys (DHS) in 30 countries, population data from the United States Census Bureau International Database, and logistics data from the Procurement Planning and Monitoring Report (PPMR) and the Pipeline Monitoring and Procurement Planning System (PipeLine), we developed and evaluated 3 models to generate country-level, public-sector contraceptive prevalence estimates for injectable contraceptives, oral contraceptives, and male condoms. Models included: direct estimation through existing couple-years of protection (CYP) conversion factors, bivariate linear regression, and multivariate linear regression. Model evaluation consisted of comparing the referent DHS prevalence rates for each short-acting method with the model-generated prevalence rate using multiple metrics, including mean absolute error and proportion of countries where the modeled prevalence rate for each method was within 1, 2, or 5 percentage points of the DHS referent value. Results: For the methods studied, family planning use estimates from public-sector logistics data were correlated with those from the DHS, validating the quality and accuracy of current public-sector logistics data. Logistics data for oral and injectable contraceptives were significantly associated (P<.05) with the referent DHS values for both bivariate and multivariate models. For condoms, however, that association was only significant for the bivariate model. With the exception of the CYP-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Conclusions: Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. PMID:26374805

  6. Simple estimation of induced electric fields in nervous system tissues for human exposure to non-uniform electric fields at power frequency

    NASA Astrophysics Data System (ADS)

    Tarao, Hiroo; Miyamoto, Hironobu; Korpinen, Leena; Hayashi, Noriyuki; Isaka, Katsuo

    2016-06-01

    Most results regarding induced current in the human body related to electric field dosimetry have been calculated under uniform field conditions. We have found in previous work that a contact current is a more suitable way to evaluate induced electric fields, even in the case of exposure to non-uniform fields. If the relationship between induced currents and external non-uniform fields can be understood, induced electric fields in nervous system tissues could be estimated from measurements of ambient non-uniform fields. In the present paper, we numerically calculated the induced electric fields and currents in a human model by considering non-uniform fields based on distortion by a cubic conductor under an unperturbed electric field of 1 kV m⁻¹ at 60 Hz. We investigated the relationship between a non-uniform external electric field with no human present and the induced current through the neck, and the relationship between the current through the neck and the induced electric fields in nervous system tissues such as the brain, heart, and spinal cord. The results showed that the current through the neck can be formulated as a function of the external electric field at the central position of the human head and the distance between the conductor and the human model. As expected, there is a strong correlation between the current through the neck and the induced electric fields in the nervous system tissues. The combination of these relationships indicates that induced electric fields in these tissues can be estimated solely by measurements of the external field at a point and the distance from the conductor.

  7. Low-noise current amplifier based on mesoscopic Josephson junction.

    PubMed

    Delahaye, J; Hassel, J; Lindell, R; Sillanpää, M; Paalanen, M; Seppä, H; Hakonen, P

    2003-02-14

    We used the band structure of a mesoscopic Josephson junction to construct low-noise amplifiers. By taking advantage of the quantum dynamics of a Josephson junction, i.e., the interplay of interlevel transitions and the Coulomb blockade of Cooper pairs, we created transistor-like devices, Bloch oscillating transistors, with considerable current gain and high input impedance. In these transistors, the correlated supercurrent of Cooper pairs is controlled by a small base current made up of single electrons. Our devices reached current and power gains on the order of 30 and 5, respectively. The noise temperature was estimated to be around 1 kelvin, but noise temperatures of less than 0.1 kelvin can be realistically achieved. These devices provide quantum-electronic building blocks that will be useful at low temperatures in low-noise circuit applications with an intermediate impedance level.

  8. Development of Real Time Implementation of 5/5 Rule based Fuzzy Logic Controller Shunt Active Power Filter for Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Puhan, Pratap Sekhar; Ray, Pravat Kumar; Panda, Gayadhar

    2016-12-01

    This paper presents the effectiveness of a 5/5 fuzzy rule implementation in a fuzzy logic controller, used in conjunction with an indirect control technique, to enhance power quality in a single-phase system. An indirect current controller in conjunction with the fuzzy logic controller is applied to the proposed shunt active power filter to estimate the peak reference current and capacitor voltage. Current-controller-based pulse width modulation (CCPWM) is used to generate the switching signals of the voltage source inverter. Various simulation results are presented to verify the good behaviour of the shunt active power filter (SAPF) with the proposed two-level hysteresis current controller (HCC). For real-time verification of the shunt active power filter, the proposed control algorithm has been implemented on a laboratory setup in the dSPACE platform.
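
    A minimal sketch of a two-level hysteresis current controller of the kind described, assuming a fixed band; names and values are illustrative.

```python
# Minimal sketch of two-level hysteresis current control for one inverter leg.
# The band width and all numbers are illustrative.

def hysteresis_switch(i_ref, i_meas, state, band=0.5):
    """Return the inverter leg state (+1 or -1) for one sampling instant.
    Switch high when the current falls below the band, low when it rises above."""
    error = i_ref - i_meas
    if error > band:        # actual current too low -> apply positive voltage
        return +1
    if error < -band:       # actual current too high -> apply negative voltage
        return -1
    return state            # inside the band: keep the previous switching state

# Example: track a 5 A reference from a measured 4.2 A with the last state low.
print(hysteresis_switch(5.0, 4.2, state=-1))   # -> +1 (error 0.8 A exceeds the band)
```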

  9. Attention as an effect not a cause

    PubMed Central

    Krauzlis, Richard J.; Bollimunta, Anil; Arcizet, Fabrice; Wang, Lupeng

    2014-01-01

    Attention is commonly thought to be important for managing the limited resources available in sensory areas of neocortex. Here we present an alternative view that attention arises as a byproduct of circuits centered on the basal ganglia involved in value-based decision-making. The central idea is that decision-making depends on properly estimating the current state of the animal and its environment, and that the weighted inputs to the currently prevailing estimate give rise to the filter-like properties of attention. After outlining this new framework, we describe findings from physiology, anatomy, computational and clinical work that support this point of view. We conclude that the brain mechanisms responsible for attention employ a conserved circuit motif that predates the emergence of the neocortex. PMID:24953964

  10. Gone to the Beach — Using GIS to infer how people value ...

    EPA Pesticide Factsheets

    Estimating the non-market value of beaches for saltwater recreation is complex. An individual’s preference for a beach depends on their perception of beach characteristics. When choosing one beach over another, an individual balances these personal preferences with any additional costs including travel time and/or fees to access the beach. This trade-off can be used to infer how people value different beach characteristics; especially when beaches are free to the public, beach value estimates rely heavily on accurate travel times. A current case study focused on public access on Cape Cod, MA will be used to demonstrate how travel costs can be used to determine the service area of different beaches, and model expected use of those beaches based on demographics. We will describe several of the transportation networks and route services available and compare a few based on their ability to meet our specific requirements of scale and seasonal travel time accuracy. We are currently developing a recreational demand model, based on visitation data and beach characteristics, that will allow decision makers to predict the benefits of different levels of water quality improvement. An important part of that model is the time required for potential recreation participants to get to different beaches. This presentation will describe different ways to estimate travel times and the advantages/disadvantages for our particular application. It will go on to outline how freely a

  11. Present status of astronomical constants

    NASA Astrophysics Data System (ADS)

    Fukushima, T.

    Additional information was provided to update the previous report on recent progress in the determination of astronomical constants (Fukushima 2000). First noted was the revision of LG to 6.969290134×10⁻¹⁰, based on the proposal to shift its status from a primary to a defining constant (Petit 2000). Next came a significant update of the correction to the current precession constant, Δp, based on the recent LLR-based determination (Chapront et al. 2000) of -0.3164 ± 0.0030"/cy. By combining this with the equally weighted average of the VLBI determinations (Mathews et al. 2000; Petrov 2000; Shirai and Fukushima 2000; Vondrak and Ron 2000), -0.2968 ± 0.0043"/cy, we derived the best estimate of the precession constant as p = 5028.790 ± 0.005"/cy. Some other quantities related to the precession formula were also redetermined, namely the offsets of the Celestial Ephemeris Pole of the International Celestial Reference System: Δψ0 sin ɛ0 = (-17.0 ± 0.3) mas and Δɛ0 = (-5.1 ± 0.3) mas. As a result, the obliquity of the ecliptic at epoch J2000.0 was estimated as ɛ0 = 23°26'21."4059 ± 0."0003. In summary, the (revised) IAU 2000 File of Current Best Estimates of astronomical constants was presented, which is to replace the former 1994 version (Standish 1995).

  12. Impacts of phenology on estimation of actual evapotranspiration with VegET model

    NASA Astrophysics Data System (ADS)

    Kovalskyy, V.; Henebry, G. M.

    2009-12-01

    The VegET model provides spatially explicit estimation of actual evapotranspiration (AET). Currently, it uses a climatology based on AVHRR NDVI image time series to modulate fluxes during growing seasons (Senay 2008). This step simplifies the model formulation, but it also introduces errors by ignoring interannual variation in phenology. We report on a study to evaluate the effects of using an NDVI climatology in VegET rather than current-season values. Using flux tower data from three sites across the US Corn Belt, we found that the model currently overestimates the duration of the growing season. With a standard deviation of more than one week in season length, the model produces an additional 50 to 70 mm of AET per season, which can account for about 10% of seasonal AET at the drier western sites. The model showed only modest sensitivity to variation in growing-season weather. This lack of sensitivity greatly decreased model accuracy during drought years: Pearson correlation coefficients between model estimates and observed values dropped from about 0.7 to 0.5, depending on vegetation type. We also evaluated an alternative approach to driving the canopy component of evapotranspiration, the Event Driven Phenology Model (EDPM). Parameterizing VegET with EDPM-simulated canopy dynamics improved the correlation by 0.1 or more and reduced the RMSE of daily AET estimates by 0.3 mm. By accounting for the progress of phenology during a particular growing season, the EDPM improves AET estimation over an NDVI climatology.

  13. Using satellite laser ranging to measure ice mass change in Greenland and Antarctica

    NASA Astrophysics Data System (ADS)

    Bonin, Jennifer A.; Chambers, Don P.; Cheng, Minkang

    2018-01-01

    A least squares inversion of satellite laser ranging (SLR) data over Greenland and Antarctica could extend gravimetry-based estimates of mass loss back to the early 1990s and fill any future gap between the current Gravity Recovery and Climate Experiment (GRACE) and the future GRACE Follow-On mission. The results of a simulation suggest that, while separating the mass change between Greenland and Antarctica is not possible at the limited spatial resolution of the SLR data, estimating the total combined mass change of the two areas is feasible. When the method is applied to real SLR and GRACE gravity series, we find significantly different estimates of inverted mass loss. There are large, unpredictable, interannual differences between the two inverted data types, making us conclude that the current 5×5 spherical harmonic SLR series cannot be used to stand in for GRACE. However, a comparison with the longer IMBIE time series suggests that on a 20-year time frame, the inverted SLR series' interannual excursions may average out, and the long-term mass loss estimate may be reasonable.

  14. Development of Neuromorphic Sift Operator with Application to High Speed Image Matching

    NASA Astrophysics Data System (ADS)

    Shankayi, M.; Saadatseresht, M.; Bitetto, M. A. V.

    2015-12-01

    There has always been a speed/accuracy trade-off in the photogrammetric mapping process, including feature detection and matching. Most research has improved algorithm speed through simplifications or software modifications that reduce the accuracy of the image matching process. This research instead tries to improve speed without sacrificing the accuracy of the same algorithm, using neuromorphic techniques. We have developed a general design for a neuromorphic ASIC to handle algorithms such as SIFT, and we have investigated the neural assignment in each step of the SIFT algorithm. Based on a rough estimate of the delays of the elements used, including the MAC and the comparator, we estimated the resulting chip's performance for three scenarios: Full HD video (videogrammetry), 24 MP imagery (UAV photogrammetry), and an 88 MP image sequence. Our estimates came to approximately 3000 fps for Full HD video, 250 fps for the 24 MP image sequence, and 68 fps for the 88 MP Ultracam image sequence, which would be a huge improvement over current photogrammetric processing systems. We also estimated a power consumption of less than 10 watts, far below that of current workflows.

  15. Burden of typhoid fever in low-income and middle-income countries: a systematic, literature-based update with risk-factor adjustment.

    PubMed

    Mogasale, Vittal; Maskery, Brian; Ochiai, R Leon; Lee, Jung Seok; Mogasale, Vijayalaxmi V; Ramani, Enusa; Kim, Young Eun; Park, Jin Kyung; Wierzba, Thomas F

    2014-10-01

    Lack of access to safe water is an important risk factor for typhoid fever, yet risk-level heterogeneity is unaccounted for in previous global burden estimates. Since WHO has recommended risk-based use of typhoid polysaccharide vaccine, we revisited the burden of typhoid fever in low-income and middle-income countries (LMICs) after adjusting for water-related risk. We estimated the typhoid disease burden from studies done in LMICs based on blood-culture-confirmed incidence rates applied to the 2010 population, after correcting for operational issues related to surveillance, limitations of diagnostic tests, and water-related risk. We derived incidence estimates, correction factors, and mortality estimates from systematic literature reviews. We did scenario analyses for risk factors, diagnostic sensitivity, and case fatality rates, accounting for the uncertainty in these estimates, and we compared them with previous disease burden estimates. The estimated number of typhoid fever cases in LMICs in 2010 after adjusting for water-related risk was 11·9 million (95% CI 9·9-14·7) cases with 129 000 (75 000-208 000) deaths. By comparison, the estimated risk-unadjusted burden was 20·6 million (17·5-24·2) cases and 223 000 (131 000-344 000) deaths. Scenario analyses indicated that the risk-factor adjustment and the updated diagnostic test correction factor derived from systematic literature reviews were the drivers of differences between the current estimate and past estimates. The risk-adjusted typhoid fever burden estimate was more conservative than previous estimates. However, by distinguishing the risk differences, it will allow assessment of the effect at the population level and will facilitate cost-effectiveness calculations for risk-based vaccination strategies for future typhoid conjugate vaccines. Copyright © 2014 Mogasale et al. Open Access article distributed under the terms of CC BY-NC-SA.

  16. New insights of the Northern Current in the Western Mediterranean Sea from Gliders data: Mean structure, Transport, and Seasonal Variability

    NASA Astrophysics Data System (ADS)

    Bosse, Anthony; Testor, Pierre; Mortier, Laurent; Beguery, Laurent; Bernardet, Karim; Taillandier, Vincent; d'Ortenzio, Fabrizio; Prieur, Louis; Coppola, Laurent; Bourrin, François

    2013-04-01

    In the last 5 years, an unprecedented sampling effort in the Northern Current (NC) has been carried out using gliders, which collected more than 50,000 profiles down to a maximum depth of 1000 m along a few repeated sections perpendicular to the French coast. Based on this dataset, this study presents a first quantitative picture of the NC over the 0-1000 m depth range. We show its mean temperature and salinity structure, characterized by the different water masses of the basin (Atlantic Water, Winter Intermediate Water, Levantine Intermediate Water, and Western Mediterranean Deep Water), for each season and at different locations. Geostrophic currents are derived by integrating the thermal-wind balance, using the mean glider estimate of the current during each dive as a reference. Estimates of the heat, salt, and volume transport are then computed in order to draw a heat and salt budget of the NC. The results show a strong seasonal variability due to intense surface buoyancy loss in winter, which drives vertical mixing offshore that deepens the mixed layer to several hundred meters across the basin and, in one very particular area, down to the sea floor (the deep convection area). The horizontal density gradient intensifies in winter, leading to geostrophic currents that are more intense and more confined to the continental slope, and thus to enhanced mesoscale activity (meandering, formation of eddies through baroclinic instability, and so on). The mean transport of the NC is estimated to be about 2-3 Sv greater than previous estimates. The heat budget of the NC also provides an estimate of the mean across-shore heat/salt flux directly affecting the Gulf of Lion, where deep ocean convection, a key process in the thermohaline circulation of the Mediterranean Sea, can occur in winter.
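
    A minimal sketch of the referencing step described above, assuming a known cross-track density gradient profile and using the glider depth-averaged current as the reference; constants and array conventions are illustrative.

```python
import numpy as np

# Integrate thermal-wind shear upward and reference the profile to the
# glider-derived depth-averaged current. Values are illustrative.

g, rho0, f = 9.81, 1025.0, 1.0e-4   # gravity, reference density, Coriolis parameter

def geostrophic_velocity(drho_dx, z, v_depth_avg):
    """drho_dx: cross-track density gradient (kg m^-4) at depths z (m, increasing
    upward); v_depth_avg: glider depth-averaged current used as the reference.
    Returns the referenced along-track geostrophic velocity profile."""
    dvdz = -(g / (rho0 * f)) * drho_dx            # thermal-wind balance
    # trapezoidal integration of the shear from the deepest level upward
    v_rel = np.concatenate(([0.0], np.cumsum(0.5 * (dvdz[1:] + dvdz[:-1]) * np.diff(z))))
    # shift so the profile mean matches the reference (assumes uniform z spacing)
    return v_rel + (v_depth_avg - v_rel.mean())
```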

  17. Extracting Prior Distributions from a Large Dataset of In-Situ Measurements to Support SWOT-based Estimation of River Discharge

    NASA Astrophysics Data System (ADS)

    Hagemann, M.; Gleason, C. J.

    2017-12-01

    The upcoming (2021) Surface Water and Ocean Topography (SWOT) NASA satellite mission aims, in part, to estimate discharge on major rivers worldwide using reach-scale measurements of stream width, slope, and height. Current formalizations of channel and floodplain hydraulics are insufficient to fully constrain this problem mathematically, resulting in an infinitely large solution set for any set of satellite observations. Recent work has reformulated this problem in a Bayesian statistical setting, in which the likelihood distributions derive directly from hydraulic flow-law equations. When coupled with prior distributions on unknown flow-law parameters, this formulation probabilistically constrains the parameter space, and results in a computationally tractable description of discharge. Using a curated dataset of over 200,000 in-situ acoustic Doppler current profiler (ADCP) discharge measurements from over 10,000 USGS gaging stations throughout the United States, we developed empirical prior distributions for flow-law parameters that are not observable by SWOT, but that are required in order to estimate discharge. This analysis quantified prior uncertainties on quantities including cross-sectional area, at-a-station hydraulic geometry width exponent, and discharge variability, that are dependent on SWOT-observable variables including reach-scale statistics of width and height. When compared against discharge estimation approaches that do not use this prior information, the Bayesian approach using ADCP-derived priors demonstrated consistently improved performance across a range of performance metrics. This Bayesian approach formally transfers information from in-situ gaging stations to remote-sensed estimation of discharge, in which the desired quantities are not directly observable. Further investigation using large in-situ datasets is therefore a promising way forward in improving satellite-based estimates of river discharge.
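
    The flow laws used in this line of work are typically Manning-type relations; the sketch below shows how prior draws on the unobservable parameters turn a single SWOT-like observation into a discharge distribution. The lognormal prior forms and all numbers are assumptions for illustration, not the paper's empirically derived priors.

```python
import numpy as np

rng = np.random.default_rng(0)

def manning_discharge(A0, dA, W, S, n):
    """Manning-type flow law often used in SWOT discharge work:
    Q = (1/n) * A**(5/3) * W**(-2/3) * sqrt(S), with A = A0 + dA
    (unobserved baseline area plus observable area change)."""
    A = A0 + dA
    return (1.0 / n) * A ** (5.0 / 3.0) * W ** (-2.0 / 3.0) * np.sqrt(S)

# Prior draws for the unobservable parameters (hypothetical lognormal priors;
# the study instead derives empirical priors from ~200,000 ADCP measurements).
A0 = rng.lognormal(mean=np.log(300.0), sigma=0.5, size=10_000)   # baseline area, m^2
n = rng.lognormal(mean=np.log(0.035), sigma=0.3, size=10_000)    # Manning roughness

# One SWOT-like observation of reach-averaged width, slope, and area change.
W, S, dA = 150.0, 1e-4, 20.0
Q = manning_discharge(A0, dA, W, S, n)
print(f"prior predictive discharge: median {np.median(Q):.0f} m^3/s")
```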

  18. A Bayesian Hierarchical Modeling Scheme for Estimating Erosion Rates Under Current Climate Conditions

    NASA Astrophysics Data System (ADS)

    Lowman, L.; Barros, A. P.

    2014-12-01

    Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large areal extents and periods with intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over the 14-year period 1998-2011. We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high-resolution (3 arc-second) digital elevation model (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations and 1-2 orders of magnitude larger than most millennial and million-year timescale estimates from thermochronology and cosmogenic nuclides.
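
    At the core of such a model is the stream power erosion law; a minimal sketch, with discharge proxied by precipitation times contributing area and purely illustrative coefficients, is:

```python
# Stream power erosion law (SPEL): E = K * Q^m * S^n, with discharge Q proxied
# here by precipitation times contributing area. K, m, n values are illustrative.

def stream_power_erosion(precip_m_per_yr, area_m2, slope, K=2e-5, m=0.5, n=1.0):
    """Erosion rate from the stream power law, using an effective discharge P*A."""
    discharge_proxy = precip_m_per_yr * area_m2
    return K * discharge_proxy ** m * slope ** n

# Example: 1 m/yr precipitation over a 10 km^2 sub-basin with a 10% channel slope.
print(stream_power_erosion(1.0, 10e6, 0.10))
```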

  19. Simulated effects of projected pumping on the availability of freshwater in the Evangeline Aquifer in an area southwest of Corpus Christi, Texas

    USGS Publications Warehouse

    Groschen, George E.

    1985-01-01

    Two simulations of the projected pumping (a low estimate of as much as 46.2 cubic feet per second during 2011-20, and a high estimate of as much as 60.0 cubic feet per second during the same period) indicate that no further regional water-quality deterioration is likely to occur. Many important properties and conditions are estimated from poor or insufficient field data, and possible ranges of these properties and conditions are tested. In spite of the errors and data deficiencies, the results are based on the best estimates currently available. The reliability of the conclusions rests on the adequacy of the data and the demonstrated sensitivity of the model results to errors in estimates of these properties.

  20. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods that utilize data other than the segment data used for LACIE should be examined. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  1. State of charge estimation in Ni-MH rechargeable batteries

    NASA Astrophysics Data System (ADS)

    Milocco, R. H.; Castro, B. E.

    In this work we estimate the state of charge (SOC) of Ni-MH rechargeable batteries using a Kalman filter based on a simplified electrochemical model. First, we derive the complete electrochemical model of the battery, which includes diffusional processes and kinetic reactions in both the Ni and MH electrodes. The full model is then reduced to a cascade of two parts, a linear time-invariant dynamical sub-model followed by a static nonlinearity. Both parts are identified from the current and potential measured at the terminals of the battery with a simple 1-D minimization procedure. The inverse of the static nonlinearity together with a Kalman filter provides the SOC estimation as a linear estimation problem. Experimental results with commercial batteries are provided to illustrate the estimation procedure and to show its performance.
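
    A minimal sketch of the resulting linear estimation problem, assuming the static nonlinearity has already been identified and inverted so the measurement arrives directly in SOC units; the coulomb-counting state equation and all tuning values are illustrative.

```python
import numpy as np

def kalman_soc(currents, soc_meas, dt, capacity_As, q=1e-7, r=1e-3):
    """One-state Kalman filter for SOC. currents: applied current (A, discharge
    positive); soc_meas: terminal voltage mapped through the inverted static
    nonlinearity, i.e. a noisy measurement already in SOC units."""
    soc, P = 1.0, 1.0                     # initial state and its variance
    estimates = []
    for cur, z in zip(currents, soc_meas):
        soc -= cur * dt / capacity_As     # predict: coulomb counting
        P += q                            # grow the state uncertainty
        K = P / (P + r)                   # Kalman gain for the linear measurement
        soc += K * (z - soc)              # update with the linearized measurement
        P *= (1.0 - K)
        estimates.append(soc)
    return np.array(estimates)
```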

  2. Estimating Real-Time Zenith Tropospheric Delay over Africa Using IGS-RTS Products

    NASA Astrophysics Data System (ADS)

    Abdelazeem, M.

    2017-12-01

    Zenith Tropospheric Delay (ZTD) is a crucial parameter for atmospheric modeling, severe weather monitoring, and forecasting applications. Currently, the international global navigation satellite system (GNSS) real-time service (IGS-RTS) products are used extensively in real-time atmospheric modeling applications. The objective of this study is to develop a real-time zenith tropospheric delay estimation model over Africa using the IGS-RTS products. The real-time ZTDs are estimated based on the real-time precise point positioning (PPP) solution. GNSS observations from a number of reference stations are processed over a period of 7 days. The estimated real-time ZTDs are then compared with their IGS tropospheric product counterparts. The findings indicate that the estimated real-time ZTDs have millimeter-level accuracy in comparison with the IGS counterparts.

  3. Prognostics and Health Monitoring: Application to Electric Vehicles

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.

    2017-01-01

    As more and more autonomous electric vehicles progressively enter daily operation, a very critical challenge lies in accurate prediction of the remaining useful life of their systems and subsystems, specifically the electrical powertrain. In the case of electric aircraft, computing the remaining flying time is safety-critical, since an aircraft that runs out of power (battery charge) while in the air will eventually lose control, leading to catastrophe. In order to tackle and solve the prediction problem, it is essential to have awareness of the current state and health of the system, especially since it is necessary to perform condition-based predictions. To be able to predict the future state of the system, it is also necessary to possess knowledge of the current and future operations of the vehicle. Our research approach is to develop a system-level health monitoring safety indicator for the pilot or autopilot of electric vehicles, which runs estimation and prediction algorithms to estimate the remaining useful life of the vehicle, e.g., to determine the state of charge of the batteries. Given models of the current and future system behavior, a general approach of model-based prognostics can be employed as a solution to the prediction problem and, further, for decision making.

  4. Alumina Concentration Detection Based on the Kernel Extreme Learning Machine.

    PubMed

    Zhang, Sen; Zhang, Tao; Yin, Yixin; Xiao, Wendong

    2017-09-01

    The concentration of alumina in the electrolyte is of great significance during the production of aluminum. An unbalanced alumina concentration may lead to poor material distribution and low production efficiency, and it affects the stability of the aluminum reduction cell and the current efficiency. Existing methods cannot meet the needs for online measurement because industrial aluminum electrolysis is characterized by high temperature, strong magnetic fields, coupled parameters, and high nonlinearity. Currently, there are no sensors or equipment that can detect the alumina concentration online. Most companies obtain the alumina concentration from electrolyte samples that are analyzed with an X-ray fluorescence spectrometer. To solve this problem, the paper proposes a soft sensing model based on a kernel extreme learning machine algorithm that incorporates a kernel function into the extreme learning machine. K-fold cross validation is used to estimate the generalization error. The proposed soft sensing algorithm can detect the alumina concentration from electrical signals such as the voltages and currents of the anode rods. The predicted results show that the proposed approach gives more accurate estimates of alumina concentration with faster learning speed than other methods such as the basic ELM, BP, and SVM.
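
    A minimal sketch of a kernel extreme learning machine (KELM) regressor of the kind described, where the output weights solve (K + I/C)β = T; the RBF kernel choice, variable names, and toy data are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    def fit(self, X, y, C=100.0, gamma=0.1):
        self.X, self.gamma = X, gamma
        K = rbf_kernel(X, X, gamma)
        # output weights: beta = (K + I/C)^-1 * targets
        self.beta = np.linalg.solve(K + np.eye(len(X)) / C, y)
        return self

    def predict(self, Xnew):
        return rbf_kernel(Xnew, self.X, self.gamma) @ self.beta

# Example: learn a toy mapping from two electrical features to a concentration.
X = np.random.rand(50, 2)
y = 2.0 + X @ np.array([1.5, -0.7])
print(KELM().fit(X, y).predict(X[:3]))
```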

  5. Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bandyopadhyay, M., E-mail: mainak@iter-india.org; Sudhir, Dass, E-mail: dass.sudhir@iter-india.org; Chakraborty, A., E-mail: arunkc@iter-india.org

    2015-04-08

    To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive probes, like optical emission spectroscopic diagnostics, are not included in the present ITER NB design due to overall system design and interface issues. As a result, the negative ion beam current through the extraction system in the ITER NB negative ion source is the only measurement which indicates the plasma condition inside the ion source. However, the beam current depends not only on the plasma condition near the extraction region but also on the perveance condition of the ion extractor system and on negative ion stripping. Moreover, the inductively coupled plasma production region (RF driver region) is placed at a distance (~30 cm) from the extraction region. Because of that, some uncertainties are expected if one tries to link the beam current with plasma properties inside the RF driver. Plasma characterization in the source RF driver region is essential to maintain the optimum condition for source operation. In this paper, a method of plasma density estimation is described, based on a density-dependent plasma load calculation.

  6. The international food unit: a new measurement aid that can improve portion size estimation.

    PubMed

    Bucher, T; Weltert, M; Rollo, M E; Smith, S P; Jia, W; Collins, C E; Sun, M

    2017-09-12

    Portion size education tools, aids and interventions can be effective in helping prevent weight gain. However, consumers have difficulty estimating food portion sizes and are confused by inconsistencies in the measurement units and terminologies currently used. Visual cues are an important mediator of portion size estimation, but standardized measurement units are required. In the current study, we present a new food volume estimation tool and test the ability of young adults to accurately quantify food volumes. The International Food Unit™ (IFU™) is a 4×4×4 cm cube (64 cm³), subdivided into eight 2 cm sub-cubes for estimating smaller food volumes. Compared with currently used measures such as cups and spoons, the IFU™ standardizes estimation of food volumes with metric measures. The IFU™ design is based on binary dimensional increments and the cubic shape facilitates portion size education and training, memory and recall, and computer processing which is binary in nature. The performance of the IFU™ was tested in a randomized between-subject experiment (n = 128 adults, 66 men) that estimated volumes of 17 foods using four methods: the IFU™ cube, a deformable modelling clay cube, a household measuring cup, or no aid (weight estimation). Estimation errors were compared between groups using Kruskal-Wallis tests and post-hoc comparisons. Estimation errors differed significantly between groups (H(3) = 28.48, p < .001). The volume estimations were most accurate in the group using the IFU™ cube (Mdn = 18.9%, IQR = 50.2) and least accurate using the measuring cup (Mdn = 87.7%, IQR = 56.1). The modelling clay cube led to a median error of 44.8% (IQR = 41.9). Compared with the measuring cup, the estimation errors using the IFU™ were significantly smaller for 12 food portions and similar for 5 food portions. Weight estimation was associated with a median error of 23.5% (IQR = 79.8). The IFU™ improves volume estimation accuracy compared to other methods. The cubic shape was perceived as favourable, with subdivision and multiplication facilitating volume estimation. Further studies should investigate whether the IFU™ can facilitate portion size training and whether portion size education using the IFU™ is effective and sustainable without the aid. A 3-dimensional IFU™ could serve as a reference object for estimating food volume.

  7. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
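
    A minimal sketch of the STAPLE expectation-maximization iteration for binary segmentations, with a global foreground prior; this is a simplified form for illustration, not the authors' implementation.

```python
import numpy as np

def staple(D, n_iter=50):
    """Fuse rater segmentations into a ground-truth estimate via EM.
    D: (n_raters, n_voxels) binary masks. Returns voxelwise consensus
    probabilities W and per-rater (sensitivity p, specificity q)."""
    n_raters, n_vox = D.shape
    W = D.mean(axis=0)                 # initialize consensus with the voting average
    p = np.full(n_raters, 0.9)         # sensitivities
    q = np.full(n_raters, 0.9)         # specificities
    prior = W.mean()                   # global foreground prior (simplification)
    for _ in range(n_iter):
        # E-step: probability each voxel is truly foreground given all raters
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's performance against the consensus
        p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W, p, q
```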

  8. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging.

    PubMed

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L; Beauchemin, Steven S; Rodrigues, George; Gaede, Stewart

    2015-02-21

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.

  9. Human Body 3D Posture Estimation Using Significant Points and Two Cameras

    PubMed Central

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also the included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method performs better than other gray-level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422

  10. A Comparison of Vertical Deformations Derived from Space-based Gravimetry, Ground-based Sensors, and Model-based Hydrologic Loading over the Western United States

    NASA Astrophysics Data System (ADS)

    Yin, G.; Forman, B. A.; Loomis, B. D.; Luthcke, S. B.

    2017-12-01

    Vertical deformation of the Earth's crust due to the movement and redistribution of terrestrial freshwater can be studied using satellite measurements, ground-based sensors, hydrologic models, or a combination thereof. The current study explores the relationship between vertical deformation estimates derived from mass concentrations (mascons) from the Gravity Recovery and Climate Experiment (GRACE), vertical deformation from ground-based Global Positioning System (GPS) observations collected from the Plate Boundary Observatory (PBO), and hydrologic loading estimates based on model output from the NASA Catchment Land Surface Model (Catchment). A particular focus is placed on snow-dominated basins where mass accumulates during the snow season and subsequently runs off during the ablation season. The mean seasonal cycle and the effects of atmospheric loading, non-tidal ocean loading, and glacial isostatic adjustment (GIA) are removed from the GPS observations in order to derive the vertical displacement caused predominantly by hydrological processes. A low-pass filter is applied to the GPS observations to remove high-frequency noise. Correlation coefficients between GRACE- and GPS-based estimates at all PBO sites are calculated. GRACE-derived and Catchment-derived displacements are subtracted from the GPS height variations, respectively, in order to compute the root mean square (RMS) reduction as a means of studying the consistency between the three different methods. Results show that at most sites the three methods exhibit good agreement. Exceptions to this generalization include the Central Valley of California, where extensive groundwater pumping is witnessed in the GRACE- and GPS-based estimates but not in the Catchment-based estimates, because anthropogenic groundwater pumping activities are not included in the Catchment model. The relatively good agreement between GPS- and GRACE-derived vertical crustal displacements suggests that ground-based GPS has tremendous potential for a Bayesian merger with GRACE-based estimates in order to provide a higher resolution (in space and time) of terrestrial water storage.
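
    The RMS-reduction consistency metric can be sketched as follows; the synthetic seasonal example is illustrative.

```python
import numpy as np

def rms_reduction(gps, model):
    """Fractional RMS reduction of a GPS height series after removing a model
    series; positive values mean the model explains part of the variability."""
    rms = lambda x: np.sqrt(np.mean(np.square(x - np.mean(x))))
    return 1.0 - rms(gps - model) / rms(gps)

# Example: a seasonal loading signal partially captured by the model.
t = np.arange(365)
gps = 5.0 * np.sin(2 * np.pi * t / 365) + np.random.normal(0, 1.0, t.size)
model = 4.0 * np.sin(2 * np.pi * t / 365)
print(f"RMS reduction: {rms_reduction(gps, model):.2f}")
```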

  11. Evaluation of the HF-Radar network system around Taiwan using normalized cumulative Lagrangian separation.

    NASA Astrophysics Data System (ADS)

    Fredj, Erick; Kohut, Josh; Roarty, Hugh; Lai, Jian-Wu

    2017-04-01

    The Lagrangian separation distance between the endpoints of simulated and observed drifter trajectories is often used to assess the performance of numerical particle trajectory models. However, the separation distance fails to indicate relative model performance in weak and strong current regions, such as over continental shelves and the adjacent deep ocean. A skill score described in detail by Liu and Weisberg (2011) was applied to estimate the cumulative Lagrangian separation distances normalized by the associated cumulative trajectory lengths; in contrast, the Lagrangian separation distance alone gives a misleading result. The proposed dimensionless skill score is particularly useful when the number of drifter trajectories is limited and neither a conventional Eulerian-based velocity nor a Lagrangian-based probability density function can be estimated. The skill score was used to assess the performance of the Taiwan Ocean Radar Observing System (TOROS). TOROS consists of 17 SeaSonde-type radars around the island of Taiwan. The currents off Taiwan are significantly influenced by the nearby Kuroshio current. The main stream of the Kuroshio flows northward along the east coast of Taiwan throughout the year; at times a branch also rounds the southern end of Taiwan and flows north along the west coast. The Kuroshio is also subject to seasonal changes in its flow speed, transport, width, and depth. The evaluation of the Taiwanese national HF-radar network using Lagrangian drifter records demonstrated the high quality and robustness of the TOROS HF-radar data using a purely trajectory-based, non-dimensional index. Reference: Liu, Y., and R. H. Weisberg (2011), Evaluation of trajectory modeling in different dynamic regions using normalized cumulative Lagrangian separation, Journal of Geophysical Research, 116, C09013, doi:10.1029/2010JC006837.
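
    A minimal sketch of the normalized cumulative Lagrangian separation skill score, following the Liu and Weisberg (2011) formulation as summarized here (tolerance threshold n = 1 in the original); inputs and example numbers are illustrative.

```python
import numpy as np

def skill_score(sep_dist, obs_traj_lengths, n=1.0):
    """sep_dist: separation between modeled and observed positions at each step;
    obs_traj_lengths: cumulative observed trajectory length at the same steps.
    Returns 1 for a perfect model, 0 for no skill."""
    c = np.sum(sep_dist) / np.sum(obs_traj_lengths)   # normalized cumulative separation
    return max(1.0 - c / n, 0.0)

# Example: 10 steps of 1 km drifter displacement with ~300 m model separations.
steps = np.arange(1, 11)
print(skill_score(sep_dist=0.3 * steps, obs_traj_lengths=1.0 * steps))  # -> 0.7
```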

  12. How Do Land-Use and Climate Change Affect Watershed Health? A Scenario-Based Analysis

    EPA Science Inventory

    With the growing emphasis on biofuel crops and potential impacts of climate variability and change, there is a need to quantify their effects on hydrological processes for developing watershed management plans. Environmental consequences are currently estimated by utilizing comp...

  13. Green Infrastructure and Stormwater Utility Credit Design for Sustainability

    EPA Science Inventory

    A current trend in funding urban stormwater programs relies on the issuance of stormwater utilities (i.e., fees) based on some measure of impervious surface (e.g., actual, estimated, average), and local programs vary greatly, dependent upon state law, municipal ordinances, and co...

  14. Risk and the physics of clinical prediction.

    PubMed

    McEvoy, John W; Diamond, George A; Detrano, Robert C; Kaul, Sanjay; Blaha, Michael J; Blumenthal, Roger S; Jones, Steven R

    2014-04-15

    The current paradigm of primary prevention in cardiology uses traditional risk factors to estimate future cardiovascular risk. These risk estimates are based on prediction models derived from prospective cohort studies and are incorporated into guideline-based initiation algorithms for commonly used preventive pharmacologic treatments, such as aspirin and statins. However, risk estimates are more accurate for populations of similar patients than they are for any individual patient. It may be hazardous to presume that the point estimate of risk derived from a population model represents the most accurate estimate for a given patient. In this review, we exploit principles derived from physics as a metaphor for the distinction between predictions regarding populations versus patients. We identify the following: (1) predictions of risk are accurate at the level of populations but do not translate directly to patients, (2) perfect accuracy of individual risk estimation is unobtainable even with the addition of multiple novel risk factors, and (3) direct measurement of subclinical disease (screening) affords far greater certainty regarding the personalized treatment of patients, whereas risk estimates often remain uncertain for patients. In conclusion, shifting our focus from prediction of events to detection of disease could improve personalized decision-making and outcomes. We also discuss innovative future strategies for risk estimation and treatment allocation in preventive cardiology. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. A case study of the Thunderstorm Research International Project storm of July 11, 1978. I - Analysis of the data base

    NASA Technical Reports Server (NTRS)

    Nisbet, John S.; Barnard, Theresa A.; Forbes, Gregory S.; Krider, E. Philip; Lhermitte, Roger

    1990-01-01

    The data obtained at the time of the Thunderstorm Research International Project storm at the Kennedy Space Center on July 11, 1978 are analyzed in a model-independent manner. The data base included data from three Doppler radars, a lightning detection and ranging system and a network of 25 electric field mills, and rain gages. Electric field measurements were used to analyze the charge moments transferred by lightning flashes, and the data were fitted to Weibull distributions; these were used to estimate statistical parameters of the lightning for both intracloud and cloud-to-ground flashes and to estimate the fraction of the flashes which were below the observation threshold. The displacement and the conduction current densities were calculated from electric field measurements between flashes. These values were used to derive the magnitudes and the locations of dipole and monopole generators by least squares fitting the measured Maxwell current densities to the displacement-dominated equations.
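
    A minimal sketch of the Weibull-fitting step, using SciPy on synthetic charge-moment magnitudes; the threshold value and all numbers are illustrative, not the storm data.

```python
import numpy as np
from scipy import stats

# Fit a Weibull distribution to a sample of lightning charge-moment magnitudes
# and estimate the fraction of flashes below an observation threshold.
rng = np.random.default_rng(1)
charge_moments = stats.weibull_min.rvs(c=1.3, scale=60.0, size=500,
                                       random_state=rng)   # C km, synthetic

# Fit shape and scale with the location fixed at zero.
shape, loc, scale = stats.weibull_min.fit(charge_moments, floc=0.0)

threshold = 10.0   # hypothetical detection threshold (C km)
frac_missed = stats.weibull_min.cdf(threshold, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, scale={scale:.1f}, fraction below threshold={frac_missed:.2%}")
```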

  16. Carbon Ion Radiotherapy At Gunma University: Currently Indicated Cancer And Estimation Of Need

    NASA Astrophysics Data System (ADS)

    Ohno, Tatsuya; Nakano, Takashi; Kanai, Tatsuaki; Yamada, Satoru

    2011-06-01

    Carbon ion radiotherapy for the first patient at Gunma University Heavy Ion Medical Center (GHMC) was initiated in March of 2010. The major specifications of the facility were determined based on the experience of clinical treatments at National Institute of Radiological Sciences (NIRS). The currently indicated sites of cancer treatment at GHMC are lung, prostate, head and neck, liver, rectum, bone and soft tissue. In order to evaluate the potential need for treatment in the region including Gunma prefecture and the adjacent 4 prefectures, an estimation model was constructed based on the Japanese cancer registration system, regular structure surveys by the Cancer Societies, and published articles on each cancer type. Carbon ion RT was potentially indicated for 8,085 patients and realistically for 1,527 patients, corresponding to 10% and 2% of the newly diagnosed cancer patients in the region. Prostate cancer (541 patients) followed by lung cancer (436 patients), and liver cancer (313 patients) were the most commonly diagnosed cancers.

  17. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins

    PubMed Central

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-01-01

    Predicting a protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Therefore, pocket druggability investigations represent a key step of compound clinical progression projects. Currently, computational druggability prediction models are tied to a single pocket estimation method, despite pocket estimation uncertainties. In this paper, we propose ‘PockDrug-Server’ to predict pocket druggability, efficient on both (i) estimated pockets guided by ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) estimated pockets based solely on protein structure information (based on amino acid atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results using different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, and thus efficient on apo pockets, which are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be run on one protein or a set of apo/holo proteins, using the different pocket estimation methods proposed by our web server or any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. PMID:25956651

  19. An inventory of irrigated lands for selected counties within the state of California based on LANDSAT and supporting aircraft data

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results: (1) Goals of the irrigated lands project were addressed by the design and implementation of a multiphase sampling scheme founded on the utilization of a LANDSAT-based remote sensing system. (2) The synoptic coverage of LANDSAT and its eighteen-day orbit cycle allowed the project to study agricultural test sites in a variety of environmental regions and to monitor the development of crops throughout the major growing season. (3) The capability to utilize multidate imagery is crucial to the reliable estimation of irrigated acreage in California, where multiple cropping is widespread and current estimation systems must rely on single-date survey techniques. (4) In addition, the magnitude of agricultural acreage in California makes estimation by conventional methods impossible.

  20. Assessing fire emissions from tropical savanna and forests of central Brazil

    NASA Technical Reports Server (NTRS)

    Riggan, Philip J.; Brass, James A.; Lockwood, Robert N.

    1993-01-01

    Wildfires in tropical forest and savanna are a strong source of trace gas and particulate emissions to the atmosphere, but estimates of the continental-scale impacts are limited by large uncertainties in the rates of fire occurrence and biomass combustion. Satellite-based remote sensing offers promise for characterizing fire physical properties and impacts on the environment, but currently available sensors saturate over high-radiance targets and provide only indications of the regions and times at which fires are extensive and of their areal rate of growth. Here we describe an approach combining satellite- and aircraft-based remote sensing with in situ measurements of smoke to estimate emissions from central Brazil. These estimates will improve global accounting of radiation-absorbing gases and particulates that may be contributing to climate change and will provide strategic data for fire management.

  1. Gyrotron-driven high current ECR ion source for boron-neutron capture therapy neutron generator

    NASA Astrophysics Data System (ADS)

    Skalyga, V.; Izotov, I.; Golubev, S.; Razin, S.; Sidorov, A.; Maslennikova, A.; Volovecky, A.; Kalvas, T.; Koivisto, H.; Tarvainen, O.

    2014-12-01

    Boron-neutron capture therapy (BNCT) is a promising treatment method for radiation-resistant tumors. Unfortunately, its development is strongly held back by several physical and medical problems. Neutron sources for BNCT are currently limited to nuclear reactors and accelerators; for BNCT investigations to spread widely, a more compact and inexpensive neutron source would be preferable. In the present paper, an approach to building a compact D-D neutron generator based on a high-current ECR ion source is suggested. Results on the production of dense proton beams are presented, and the formation of ion beams with current densities up to 600 mA/cm² is demonstrated. Estimates based on the experimental results show that a neutron target bombarded by such deuteron beams would theoretically yield a neutron flux density of up to 6×10¹⁰ cm⁻² s⁻¹. Thus, a neutron generator based on a high-current deuteron ECR source with powerful gyrotron plasma heating could fulfill the BNCT requirements at a significantly lower price, smaller size and greater ease of operation than existing reactors and accelerators.
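    The quoted flux is easy to sanity-check: a current density J delivers J/e deuterons per cm² per second, and multiplying by a neutron yield per incident deuteron gives the target flux. The yield value below is an illustrative assumption chosen to reproduce the paper's order of magnitude, not a measured number.

```python
# Back-of-the-envelope neutron flux from a deuteron beam current density.
E_CHARGE = 1.602e-19           # C
j = 0.6                        # A/cm^2 (600 mA/cm^2, from the abstract)
yield_per_deuteron = 1.6e-8    # assumed thick-target D-D yield (illustrative)

deuteron_flux = j / E_CHARGE                      # deuterons cm^-2 s^-1
neutron_flux = deuteron_flux * yield_per_deuteron
print(f"{neutron_flux:.1e} neutrons cm^-2 s^-1")  # ~6e10, the quoted order
```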

  2. Precise Image-Based Motion Estimation for Autonomous Small Body Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew Edie; Matthies, Larry H.

    2000-01-01

    We have developed and tested a software algorithm that enables onboard autonomous motion estimation near small bodies using descent camera imagery and laser altimetry. Through simulation and testing, we have shown that visual feature tracking can decrease uncertainty in spacecraft motion to a level that makes landing on small, irregularly shaped bodies feasible. Possible future work will include qualification of the algorithm as a flight experiment for the Deep Space 4/Champollion comet lander mission currently under study at the Jet Propulsion Laboratory.

  3. ULTIMA: Array of ground-based magnetometer arrays for monitoring magnetospheric and ionospheric perturbations on a global scale

    NASA Astrophysics Data System (ADS)

    Yumoto, K.; Chi, P. J.; Angelopoulos, V.; Connors, M. G.; Engebretson, M. J.; Fraser, B. J.; Mann, I. R.; Milling, D. K.; Moldwin, M. B.; Russell, C. T.; Stolle, C.; Tanskanen, E.; Vallante, M.; Yizengaw, E.; Zesta, E.

    2012-12-01

    ULTIMA (Ultra Large Terrestrial International Magnetic Array) is an international consortium that aims at promoting collaborative research on the magnetosphere, ionosphere, and upper atmosphere through the use of ground-based magnetic field observatories. ULTIMA comprises individual magnetometer arrays in different countries/regions, and the current regular-member arrays are Australian, AUTUMN, CARISMA, DTU Space, Falcon, IGPP-LANL, IMAGE, MACCS, MAGDAS, McMAC, MEASURE, THEMIS, and SAMBA. The Chair of ULTIMA has been K. Yumoto (MAGDAS), and its Secretary has been P. Chi (McMAC, Falcon). In this paper we perform case studies in which we estimate the global patterns of (1) near-Earth currents and (2) magnetic pulsations; these phenomena are observed over wide areas on the ground and are thus suitable for the aims of ULTIMA. We analyze these two phenomena during (a) a quiet period and (b) a magnetic storm period, and compare the two by drawing global maps of the ionospheric equivalent currents (which include the effects of all the near-Earth currents) and pulsation amplitudes. For ionospheric Sq currents at low latitudes during quiet periods, MAGDAS data covering an entire solar cycle have yielded a detailed statistical model, which we use as a reference for this comparison. We also estimate the azimuthal wave numbers of pulsations and compare the amplitude distribution of pulsations with the distribution of highly energetic (MeV-range) particles simultaneously observed at geosynchronous satellites.

  4. Estimation of phytoplankton production from space: current status and future potential of satellite remote sensing.

    PubMed

    Joint; Groom

    2000-07-30

    A new generation of ocean colour satellites is now operational, with frequent observation of the global ocean. This paper reviews the potential to estimate marine primary production from satellite images. The procedures involved in retrieving estimates of phytoplankton biomass, as pigment concentrations, are discussed. Algorithms are applied to SeaWiFS ocean colour data to indicate seasonal variations in phytoplankton biomass in the Celtic Sea, on the continental shelf to the south west of the UK. Algorithms to estimate primary production rates from chlorophyll concentration are compared and their advantages and disadvantages discussed. The simplest algorithms utilise correlations between chlorophyll concentration and production rate, and one such equation is used to estimate daily primary production rates for the western English Channel and Celtic Sea; these estimates compare favourably with published values. Primary production for the central Celtic Sea in the period April to September inclusive is estimated from SeaWiFS data to be 102 gC m⁻² in 1998 and 93 gC m⁻² in 1999; published estimates, based on in situ incubations, are ca. 80 gC m⁻². The satellite data demonstrate large variations in primary production between 1998 and 1999, with a significant increase in late summer in 1998 which did not occur in 1999. Errors are quantified for the estimation of primary production from simple algorithms based on satellite-derived chlorophyll concentration. These data show the potential to obtain better estimates of marine primary production than are possible with ship-based methods, with the ability to detect short-lived phytoplankton blooms. In addition, the potential to estimate new production from satellite data is discussed.
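    The "simplest algorithms" mentioned above amount to a regression from surface chlorophyll to daily production, integrated over the season. The sketch below shows the shape of such a calculation; the power-law coefficients are placeholders, not the values used in the paper.

```python
# Hypothetical chlorophyll-to-production regression (placeholder coefficients).
def daily_primary_production(chl_mg_m3, a=500.0, b=0.7):
    """Return production in mgC m^-2 d^-1 from chlorophyll in mg m^-3."""
    return a * chl_mg_m3 ** b

# April-September season (183 days), converted to gC m^-2.
season_gc_m2 = daily_primary_production(1.2) * 183 / 1000.0
print(f"{season_gc_m2:.0f} gC m^-2 per season")   # ~100, the order reported
```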

  5. Neural Network-Based Retrieval of Surface and Root Zone Soil Moisture using Multi-Frequency Remotely-Sensed Observations

    NASA Astrophysics Data System (ADS)

    Hamed Alemohammad, Seyed; Kolassa, Jana; Prigent, Catherine; Aires, Filipe; Gentine, Pierre

    2017-04-01

    Knowledge of root zone soil moisture is essential in studying plants' response to different stress conditions, since photosynthetic activity and transpiration rate are constrained by the water available to the roots. Current global root zone soil moisture estimates are based either on outputs from physical models constrained by observations, or on assimilation of remotely-sensed microwave-based surface soil moisture estimates into physical model outputs. However, the quality of these estimates is limited by the accuracy of the model representations of physical processes (such as radiative transfer, infiltration, percolation, and evapotranspiration) as well as by errors in the estimates of the surface parameters. Statistical approaches provide an alternative, efficient platform for developing root zone soil moisture retrieval algorithms from remotely-sensed observations. In this study, we present a new neural-network-based retrieval algorithm to estimate surface and root zone soil moisture from passive microwave observations of the SMAP satellite (L-band) and the AMSR2 instrument (X-band). SMAP early-morning observations are ideal for surface soil moisture retrieval. AMSR2 midnight observations are used here as an indicator of plant hydraulic properties that are related to root zone soil moisture. The combined observations from SMAP and AMSR2, together with other ancillary observations including the Solar-Induced Fluorescence (SIF) estimates from the GOME-2 instrument, provide the necessary information to estimate surface and root zone soil moisture. The algorithm is applied to observations from the first 18 months of the SMAP mission, and retrievals are validated against in-situ observations and other global datasets.
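    As a rough sketch of the statistical alternative described above, the example below trains a small neural network to map multi-frequency brightness temperatures (plus an SIF-like ancillary input) to surface and root zone soil moisture. All data are synthetic stand-ins; the real algorithm's architecture and inputs differ.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 2000
# Columns stand in for SMAP L-band TB (H, V), AMSR2 X-band TB (H, V), SIF.
X = rng.normal(size=(n, 5))
true_w = np.array([[0.4, -0.2], [0.3, -0.1], [0.2, 0.5],
                   [0.1, 0.4], [0.05, 0.3]])
y = X @ true_w + 0.05 * rng.normal(size=(n, 2))  # [surface SM, root zone SM]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```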

  6. Estimating the Impact of Adding C-Reactive Protein as a Criterion for Lipid Lowering Treatment in the United States

    PubMed Central

    Schwartz, Lisa M.; Kerin, Kevin; Welch, H. Gilbert

    2007-01-01

    Background There is growing interest in using C-reactive protein (CRP) levels to help select patients for lipid lowering therapy, although this practice is not yet supported by evidence of benefit in a randomized trial. Objective To estimate the number of Americans potentially affected if a CRP criterion were adopted as an additional indication for lipid lowering therapy. To provide context, we also determined how well current lipid lowering guidelines are being implemented. Methods We analyzed nationally representative data to determine how many Americans age 35 and older meet current National Cholesterol Education Program (NCEP) treatment criteria (a combination of risk factors and their Framingham risk score). We then determined how many of the remaining individuals would meet criteria for treatment using 2 different CRP-based strategies: (1) narrow: treat individuals at intermediate risk (i.e., 2 or more risk factors and an estimated 10–20% risk of coronary artery disease over the next 10 years) with CRP > 3 mg/L and (2) broad: treat all individuals with CRP > 3 mg/L. Data source Analyses are based on the 2,778 individuals participating in the 1999–2002 National Health and Nutrition Examination Survey with complete data on cardiac risk factors, fasting lipid levels, CRP, and use of lipid lowering agents. Main measures The estimated number and proportion of American adults meeting NCEP criteria who take lipid-lowering drugs, and the additional number who would be eligible based on CRP testing. Results About 53 of the 153 million Americans aged 35 and older meet current NCEP criteria (which do not involve CRP) for lipid-lowering treatment. Sixty-five percent of them, however, are not currently being treated; even among those at highest risk (i.e., patients with established heart disease or its risk equivalent), 62% are untreated. Adopting the narrow and broad CRP strategies would make an additional 2.1 and 25.3 million Americans eligible for treatment, respectively. The latter strategy would make over half the adults age 35 and older eligible for lipid-lowering therapy, with most of the additionally eligible (57%) coming from the lowest NCEP heart risk category (i.e., 0–1 risk factors). Conclusion There is substantial underuse of lipid lowering therapy for American adults at high risk for coronary disease. Rather than adopting CRP-based strategies, which would make millions more lower-risk patients eligible for treatment (and for whom treatment benefit has not yet been demonstrated in a randomized trial), we should ensure the treatment of currently defined high-risk patients for whom the benefit of therapy is established. PMID:17356986

  7. Comparative analysis for various redox flow batteries chemistries using a cost performance model

    NASA Astrophysics Data System (ADS)

    Crawford, Alasdair; Viswanathan, Vilayanur; Stephenson, David; Wang, Wei; Thomsen, Edwin; Reed, David; Li, Bin; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2015-10-01

    The total energy storage system cost is determined by means of a robust performance-based cost model for multiple flow battery chemistries. Systems aspects such as shunt current losses, pumping losses and various flow patterns through electrodes are accounted for. The system-cost-minimizing objective function determines stack design by optimizing the state-of-charge operating range along with current density and current-normalized flow. The model cost estimates are validated using 2-kW stack performance data for the same size electrodes and operating conditions. Using our validated tool, it has been demonstrated that an optimized all-vanadium system has an estimated system cost of < $350 kWh⁻¹ for a 4-h application. With an anticipated decrease in component costs facilitated by economies of scale from larger production volumes, coupled with performance improvements enabled by technology development, the system cost is expected to decrease to $160 kWh⁻¹ for a 4-h application, and to $100 kWh⁻¹ for a 10-h application. This tool has been shared with the redox flow battery community to enable cost estimation using their stack data and to guide future directions.

  8. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    PubMed

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are fundamental to many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measurement methods, but they are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measurement method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation uses the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image, even a noisy one. The imaging depth information is used as an auxiliary input to help our model make better decisions.

  9. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with those estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to the reference methods, the relationship between the different patient groups was similar.

  10. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹; cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features, including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow, by 47.5% on average, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods across the range of techniques evaluated. This suggests that there is no particular advantage of one quantitative estimation method over another, nor of performing dose reduction via tube current reduction rather than via temporal sampling reduction. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
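    For readers unfamiliar with the quantitative models being compared, the sketch below fits the two-compartment form, in which the tissue curve is the arterial input convolved with K1·exp(-k2·t), to a synthetic noisy time-attenuation curve; the input function, noise level and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 60.0, 1.0)                    # 1 s sampling
ca = 100.0 * (t / 8.0) * np.exp(1 - t / 8.0)     # gamma-variate arterial input

def tissue_curve(t, K1, k2):
    """Tissue enhancement: arterial input convolved with K1*exp(-k2*t)."""
    dt = t[1] - t[0]
    return dt * np.convolve(ca, K1 * np.exp(-k2 * t))[: len(t)]

rng = np.random.default_rng(2)
measured = tissue_curve(t, 0.017, 0.02) + rng.normal(0, 0.5, len(t))

(K1, k2), _ = curve_fit(tissue_curve, t, measured, p0=[0.01, 0.01])
print(f"K1 = {K1 * 60:.2f} ml/(min g)")          # per-second -> per-minute
```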

  11. Estimating infertility prevalence in low-to-middle-income countries: an application of a current duration approach to Demographic and Health Survey data

    PubMed Central

    Cox, Carie M.; Tunçalp, Özge; McLain, Alexander C.; Thoma, Marie E.

    2017-01-01

    Abstract STUDY QUESTION Can infertility prevalence be estimated using a current duration (CD) approach when applied to nationally representative Demographic and Health Survey (DHS) data collected routinely in low- or middle-income countries? SUMMARY ANSWER Our analysis suggests that a CD approach applied to DHS data from Nigeria provides infertility prevalence estimates comparable to other smaller studies in the same region. WHAT IS KNOWN ALREADY Despite associations with serious negative health, social and economic outcomes, infertility in developing countries is a marginalized issue in sexual and reproductive health. Obtaining reliable, nationally representative prevalence estimates is critical to address the issue, but methodological and resource challenges have impeded this goal. STUDY DESIGN, SIZE, DURATION This cross-sectional study was based on standard information available in the DHS core questionnaire and data sets, which are collected routinely among participating low-to-middle-income countries. Our research question was examined among women participating in the 2013 Nigeria DHS (n = 38 948). Among women eligible for the study, 98% were interviewed. PARTICIPANTS/MATERIALS, SETTING, METHODS We applied a CD approach (i.e. current length of time-at-risk of pregnancy) to estimate time-to-pregnancy (TTP) and 12-month infertility prevalence among women ‘at risk’ of pregnancy at the time of interview (n = 7063). Women who were 18–44 years old, married or cohabitating, sexually active within the past 4 weeks and not currently using contraception (and had not been sterilized) were included in the analysis. Estimates were based on parametric survival methods using bootstrap methods (500 bootstrap replicates) to obtain 95% CIs. MAIN RESULTS AND THE ROLE OF CHANCE The estimated median TTP among couples at risk of pregnancy was 5.1 months (95% CI: 4.2–6.3). The estimated percentage of infertile couples was 31.1% (95% CI: 27.9–34.7%)—consistent with other smaller studies from Nigeria. Primary infertility (17.4%, 95% CI: 12.9–23.8%) was substantially lower than secondary infertility (34.1%, 95% CI: 30.3–39.3%) in this population. Overall estimates for TTP >24 or >36 months dropped to 17.7% (95% CI: 15.7–20%) and 11.5% (95% CI: 10.2–13%), respectively. Subgroup analyses showed that estimates varied by age, coital frequency and fertility intentions, while being in a polygynous relationship showed minimal impact. LIMITATIONS, REASONS FOR CAUTION The CD approach may be limited by assumptions on when exposure to risk of pregnancy began and methodologic assumptions required for estimation, which may be less accurate for particular subgroups or populations. Unrecognized pregnancies may have also biased our findings; however, we attempted to address this in our exclusion criteria. Limiting to married/cohabiting couples may have excluded women who are no longer in a relationship after being blamed for infertility. Although probably rare in this setting, we lack information on couples undergoing infertility treatment. Like other TTP measurement approaches, pregnancies resulting from contraceptive failure are not included, which may bias estimates. WIDER IMPLICATIONS OF THE FINDINGS Nationally representative estimates of TTP and infertility based on a clinical definition of 12 months have been limited within developing countries. 
This approach represents a pragmatic advance in our ability to measure and monitor infertility in the developing world, with potentially far-reaching implications for policies and programs intended to address reproductive health. STUDY FUNDING/COMPETING INTERESTS There are no competing interests and no financial support was provided for this study. Financial support for Open Access publication was provided by the World Health Organization. PMID:28204493

  12. Cross-shift changes in FEV1 in relation to wood dust exposure: the implications of different exposure assessment methods

    PubMed Central

    Schlunssen, V; Sigsgaard, T; Schaumburg, I; Kromhout, H

    2004-01-01

    Background: Exposure-response analyses in occupational studies rely on the ability to distinguish workers with regard to exposures of interest. Aims: To evaluate different estimates of current average exposure in an exposure-response analysis of dust exposure and cross-shift decline in FEV1 among woodworkers. Methods: Personal dust samples (n = 2181) as well as data on lung function parameters were available for 1560 woodworkers from 54 furniture industries. The exposure to wood dust for each worker was calculated in eight different ways using individual measurements, group based exposure estimates, a weighted estimate of individual and group based exposure estimates, and predicted values from mixed models. Exposure-response relations between cross-shift changes in FEV1 and the exposure estimates were explored. Results: A positive exposure-response relation between average dust exposure and cross-shift FEV1 was shown for non-smokers only and appeared to be most pronounced among pine workers. In general, the highest slope and standard error (SE) were found for grouping by a combination of task and factory size, and the lowest slope and SE for estimates based on individual measurements, with the weighted estimate and the predicted values in between. Grouping by quintiles of average exposure for task and factory combinations revealed low slopes and high SEs, despite a high contrast. Conclusion: For non-smokers, average dust exposure and cross-shift FEV1 were associated in an exposure dependent manner, especially among pine workers. This study confirms the consequences of using different exposure assessment strategies when studying exposure-response relations. It is possible to optimise exposure assessment by combining information from individual and group based exposure estimates, for instance by applying predicted values from mixed effects models. PMID:15377768

  13. Estimating Snow Water Storage in North America Using CLM4, DART, and Snow Radiance Data Assimilation

    NASA Technical Reports Server (NTRS)

    Kwon, Yonghwan; Yang, Zong-Liang; Zhao, Long; Hoar, Timothy J.; Toure, Ally M.; Rodell, Matthew

    2016-01-01

    This paper addresses continental-scale snow estimates in North America using a recently developed snow radiance assimilation (RA) system. A series of RA experiments with the ensemble adjustment Kalman filter are conducted by assimilating the Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E) brightness temperature T(sub B) at the 18.7- and 36.5-GHz vertical polarization channels. The overall RA performance in estimating snow depth for North America is improved by simultaneously updating the Community Land Model, version 4 (CLM4), snow/soil states and the radiative transfer model (RTM) parameters involved in predicting T(sub B) based on their correlations with the prior T(sub B) (i.e., rule-based RA), although degradations are also observed. The RA system exhibits a more mixed performance for snow cover fraction estimates. Compared to the open-loop run (0.171 m RMSE), the overall snow depth estimates are improved by 1.6% (0.168 m RMSE) in the rule-based RA, whereas the default RA (without a rule) results in a degradation of 3.6% (0.177 m RMSE). Significant improvement of the snow depth estimates in the rule-based RA was observed for the tundra snow class (11.5%, p < 0.05) and the bare soil land-cover type (13.5%, p < 0.05). However, the overall improvement is not significant (p = 0.135) because snow estimates are degraded or only marginally improved for other snow classes and land covers, especially the taiga snow class and forest land cover (7.1% and 7.3% degradations, respectively). The current RA system needs to be further refined to enhance snow estimates for various snow types and forested regions.

  14. Sensorless position estimator applied to nonlinear IPMC model

    NASA Astrophysics Data System (ADS)

    Bernat, Jakub; Kolota, Jakub

    2016-11-01

    This paper addresses the issue of estimating position for an ionic polymer metal composite (IPMC), a type of electroactive polymer (EAP). The key step is the construction of a sensorless mode of operation that relies on current feedback only. This work takes into account nonlinearities caused by electrochemical effects in the material. Drawing on recent observer design techniques, the authors obtained both a Lyapunov-function-based estimation law and a sliding mode observer. To accomplish the observer design, the IPMC model was identified through a series of experiments comprising time-domain measurements. The identification process was completed by means of geometric scaling of three test samples. In the proposed design, the estimated position accurately tracks the polymer position, as illustrated by the experiments.

  15. SQUID-based current sensing noise thermometry for quantum resistors at dilution refrigerator temperatures

    NASA Astrophysics Data System (ADS)

    Kleinbaum, Ethan; Shingla, Vidhi; Csáthy, G. A.

    2017-03-01

    We present a dc Superconducting QUantum Interference Device (SQUID)-based current amplifier with an estimated input referred noise of only 2.3 fA/√Hz. Because of such a low amplifier noise, the circuit is useful for Johnson noise thermometry of quantum resistors in the kΩ range down to mK temperatures. In particular, we demonstrate that our circuit does not contribute appreciable noise to the Johnson noise of a 3.25 kΩ resistor down to 16 mK. Our circuit is a useful alternative to the commonly used High Electron Mobility Transistor-based amplifiers, but in contrast to the latter, it offers a much reduced 1/f noise. In comparison to SQUIDs interfaced with cryogenic current comparators, our circuit has similarly low noise levels, but it is easier to build and to shield from magnetic pickup.
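    The claim that the amplifier adds little noise even at 16 mK can be checked directly: the thermal current noise of a resistor is sqrt(4·kB·T/R), and the amplifier's contribution adds in quadrature.

```python
import math

K_B = 1.380649e-23                     # J/K
R, T = 3250.0, 0.016                   # 3.25 kOhm at 16 mK
i_thermal = math.sqrt(4 * K_B * T / R)              # A/sqrt(Hz)
i_amp = 2.3e-15                                     # amplifier input noise
i_total = math.hypot(i_thermal, i_amp)              # quadrature sum
print(f"thermal: {i_thermal * 1e15:.1f} fA/rtHz, "
      f"with amplifier: {i_total * 1e15:.1f} fA/rtHz")  # ~1% increase
```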

  16. Crowdsourcing-Assisted Radio Environment Database for V2V Communication.

    PubMed

    Katagiri, Keita; Sato, Koya; Fujii, Takeo

    2018-04-12

    In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, characterizing radio propagation becomes an important technology. However, in current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of radio environment estimation in V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI), tagged with the transmission/reception locations, from V2V systems. From these datasets, average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in a real environment to observe RSSI for the database construction. Our results show that the proposed method achieves higher accuracy of radio propagation estimation than conventional path-loss-model-based estimation.
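    The database construction step reduces, at its core, to binning crowdsourced RSSI reports by transmitter/receiver location and averaging per bin. A minimal sketch is below; the column layout and the 100 m bin size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Rows of (tx_x, tx_y, rx_x, rx_y, rssi_dBm); synthetic measurements.
meas = np.column_stack([rng.uniform(0, 1000, (500, 4)),
                        rng.normal(-70, 6, 500)])

BIN = 100.0                                       # m, bin size (assumed)
bin_idx = np.floor(meas[:, :4] / BIN).astype(int)
sums, counts = {}, {}
for row, rssi in zip(bin_idx, meas[:, 4]):
    key = tuple(row)
    sums[key] = sums.get(key, 0.0) + rssi
    counts[key] = counts.get(key, 0) + 1

avg_power_map = {k: sums[k] / counts[k] for k in sums}   # dBm per bin pair
print(len(avg_power_map), "populated tx/rx bins")
```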

  17. Estimating PM2.5-associated mortality increase in California due to the Volkswagen emission control defeat device

    NASA Astrophysics Data System (ADS)

    Wang, Tianyang; Jerrett, Michael; Sinsheimer, Peter; Zhu, Yifang

    2016-11-01

    The Volkswagen Group of America (VW) was found by the US Environmental Protection Agency (EPA) and the California Air Resources Board (CARB) to have installed "defeat devices" and to emit more oxides of nitrogen (NOx) than permitted under current EPA standards. In this paper, we quantify the hidden NOx emissions from this so-called VW scandal and the resulting public health impacts in California. The NOx emissions are calculated based on VW road test data and the CARB Emission Factors (EMFAC) model. Cumulative hidden NOx emissions from 2009 to 2015 were estimated to be over 3500 tons. Changes in adult mortality were estimated based on the ambient fine particulate matter (PM2.5) change due to secondary nitrate formation and the related concentration-response functions. We estimated that the hidden NOx emissions from 2009 to 2015 resulted in a total of 12 additional PM2.5-associated adult deaths in California. Most of the mortality increase occurred in metropolitan areas, owing to their high population and vehicle density.

  18. Non-invasive pressure difference estimation from PC-MRI using the work-energy equation

    PubMed Central

    Donati, Fabrizio; Figueroa, C. Alberto; Smith, Nicolas P.; Lamata, Pablo; Nordsletten, David A.

    2015-01-01

    Pressure difference is an accepted clinical biomarker for cardiovascular disease conditions such as aortic coarctation. Currently, measurements of pressure differences in the clinic rely on invasive techniques (catheterization), prompting development of non-invasive estimates based on blood flow. In this work, we propose a non-invasive estimation procedure deriving pressure difference from the work-energy equation for a Newtonian fluid. Spatial and temporal convergence is demonstrated on in silico Phase Contrast Magnetic Resonance Image (PC-MRI) phantoms with steady and transient flow fields. The method is also tested on an image dataset generated in silico from a 3D patient-specific Computational Fluid Dynamics (CFD) simulation and finally evaluated on a cohort of 9 subjects. The performance is compared to existing approaches based on steady and unsteady Bernoulli formulations as well as the pressure Poisson equation. The new technique shows good accuracy, robustness to noise, and robustness to the image segmentation process, illustrating the potential of this approach for non-invasive pressure difference estimation. PMID:26409245
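    For context, the steady Bernoulli estimate that the work-energy method is compared against reduces to a one-line formula between two points on a streamline. The velocities below are illustrative, as if read from PC-MRI voxels.

```python
RHO_BLOOD = 1060.0                     # kg/m^3

def bernoulli_dp(v1, v2, rho=RHO_BLOOD):
    """Steady Bernoulli pressure difference p1 - p2, returned in mmHg."""
    dp_pa = 0.5 * rho * (v2 ** 2 - v1 ** 2)
    return dp_pa / 133.322             # Pa -> mmHg

print(f"{bernoulli_dp(1.0, 3.5):.1f} mmHg across an assumed stenotic jet")
```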

  20. Event-Based Stereo Depth Estimation Using Belief Propagation.

    PubMed

    Xie, Zhen; Chen, Shengyong; Orchard, Garrick

    2017-01-01

    Compared to standard frame-based cameras, biologically-inspired event-based sensors capture visual information with low latency and minimal redundancy. These event-based sensors are also far less prone to motion blur than traditional cameras, and still operate effectively in high dynamic range scenes. However, classical frame-based algorithms are not typically suitable for event-based data, and new processing algorithms are required. This paper focuses on the problem of depth estimation from a stereo pair of event-based sensors. A fully event-based stereo depth estimation algorithm which relies on message passing is proposed. The algorithm not only considers the properties of a single event but also uses a Markov Random Field (MRF) to consider the constraints between nearby events, such as disparity uniqueness and depth continuity. The method is tested on five different scenes and compared to other state-of-the-art event-based stereo matching methods. The results show that the method detects more stereo matches than other methods, with each match having a higher accuracy. The method can operate in an event-driven manner where depths are reported for individual events as they are received, or the network can be queried at any time to generate a sparse depth frame which represents the current state of the network.

  1. Estimation of the potential leakage of the chemical munitions based on two hydrodynamical models implemented for the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Jakacki, Jaromir; Golenko, Mariya

    2014-05-01

    Two hydrodynamical models, the Princeton Ocean Model (POM) and the Parallel Ocean Program (POP), have been implemented for the Baltic Sea area that contains the locations of the chemical munitions dumped during World War II. The models were configured from similar data sources: bathymetry, initial conditions and external forcings were implemented based on identical data, and the horizontal resolutions of the models are also very similar. Several simulations with different initial conditions have been performed, and the bottom currents from both models have been compared and analyzed. Based on this analysis, estimates of the endangered area and the critical time were obtained. Lagrangian particle tracking and a passive tracer were also implemented, and based on these results the probability of dangerous doses appearing, and its time evolution, is presented. This work has been performed in the frame of the MODUM project financially supported by NATO.
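    A minimal sketch of the Lagrangian particle tracking step is below: particles are advected through a bottom-current field with simple Euler steps. The velocity field here is a synthetic stand-in for the POM/POP model output.

```python
import numpy as np

def advect(positions, velocity_fn, dt=3600.0, n_steps=240):
    """Advance particles (N, 2) through a velocity field; returns trajectory."""
    traj = [positions.copy()]
    for _ in range(n_steps):
        positions = positions + dt * velocity_fn(positions)
        traj.append(positions.copy())
    return np.array(traj)

def bottom_current(p):                 # synthetic stand-in, m/s
    return np.column_stack([0.02 * np.sin(p[:, 1] / 5e3),
                            0.01 * np.cos(p[:, 0] / 5e3)])

start = np.random.default_rng(4).normal(0, 500, (100, 2))  # release cloud, m
traj = advect(start, bottom_current)                       # 10 days of drift
print("spread after 10 days (m):", traj[-1].std(axis=0))
```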

  2. [Comparison of Flu Outbreak Reporting Standards Based on Transmission Dynamics Model].

    PubMed

    Yang, Guo-jing; Yi, Qing-jie; Li, Qin; Zeng, Qing

    2016-05-01

    To compare the two current flu outbreak reporting standards for the purpose of better prevention and control of flu outbreaks, a susceptible-exposed-infectious/asymptomatic-removed (SEIAR) model without interventions was set up first, followed by a model with interventions based on the real situation. Simulated interventions were developed based on the two reporting standards and evaluated by the estimated duration of outbreaks, cumulative new cases, cumulative morbidity rates, percentage decline in morbidity rates, and cumulative secondary cases. The basic reproductive number of the outbreak was estimated as 8.2. The simulation produced results similar to the real situation. The effect of interventions based on reporting standard one (10 accumulated new cases in a week) was better than that of interventions based on reporting standard two (30 accumulated new cases in a week). Reporting standard one (10 accumulated new cases in a week) is therefore more effective for the prevention and control of flu outbreaks.
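    A minimal SEIAR model of the kind described above can be integrated in a few lines. The split between symptomatic and asymptomatic cases (p, kappa) and the period lengths are illustrative assumptions; beta is chosen so the implied basic reproductive number matches the reported 8.2.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, kappa = 3.42, 0.5        # transmission rate; relative infectiousness of A
sigma, gamma, p = 1 / 1.5, 1 / 3.0, 0.6   # 1/latent, 1/infectious, symptomatic
# R0 = beta * (p + kappa * (1 - p)) / gamma ~ 8.2 with these assumed values.

def seiar(t, y):
    S, E, I, A, R = y
    new_inf = beta * S * (I + kappa * A)
    return [-new_inf,
            new_inf - sigma * E,
            p * sigma * E - gamma * I,
            (1 - p) * sigma * E - gamma * A,
            gamma * (I + A)]

y0 = [0.999, 0.001, 0.0, 0.0, 0.0]    # population fractions
sol = solve_ivp(seiar, (0, 60), y0, max_step=0.5)
print(f"final attack rate: {sol.y[4, -1]:.1%}")
```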

  3. Micro CT based truth estimation of nodule volume

    NASA Astrophysics Data System (ADS)

    Kinnard, L. M.; Gavrielides, M. A.; Myers, K. J.; Zeng, R.; Whiting, B.; Lin-Gibson, S.; Petrick, N.

    2010-03-01

    With the advent of high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than the currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that there is variability associated with the patient, the software tool and the CT system. A primary goal of our current research efforts is to quantify the various sources of measurement error and, when possible, minimize their effects. In order to assess the bias of an estimate, the actual value, or "truth," must be known. In this work we investigate the reliability of micro CT for determining the "true" volume of synthetic nodules. The advantage of micro CT over other truthing methods is that it can provide both absolute volume and shape information in a single measurement. In the current study we compare micro CT volume truth to weight-density truth for spherical, elliptical, spiculated and lobulated nodules with diameters from 5 to 40 mm and densities of -630 and +100 HU. The percent differences between micro CT and weight-density volume range from [-21.7%, -0.6%] (mean = -11.9%) for the -630 HU nodules and from [-0.9%, 3.0%] (mean = 1.7%) for the +100 HU nodules.

  4. Analysis of ProSEDS Test of Bare-Tether Collection

    NASA Technical Reports Server (NTRS)

    Sanmartin, J. R.; Lorenzini, E. C.; Estes, R. D.; Charro, M.; Cosmo, M. L.

    2003-01-01

    NASA's tether experiment ProSEDS will be placed in orbit on board a Delta-II rocket to test bare-tether electron collection, deorbiting of the rocket second stage, and the system dynamic stability. ProSEDS performance will vary because ambient conditions change along the orbit and tether-circuit bulk elements at the cathodic end follow the step-by-step sequence for the current cycles of operating modes (open-circuit, shunt and resistor modes for primary cycles; shunt and battery modes for secondary cycles). In this work we discuss expected ProSEDS values of the ratio Lt/L*, which jointly with cathodic bulk elements determines the bias and current profiles along the tether; Lt is the tether length, and L* (which changes with tether temperature and with ionospheric plasma density and magnetic field) is a characteristic length gauging ohmic versus bare-tether collection impedances. We discuss how to test bare-tether electron collection during primary cycles, using probe measurements of plasma density, measurements of cathodic current in resistor and shunt modes, and an estimate of tether temperature based on the ProSEDS orbital position at the particular cycle concerned. We discuss how a temperature misestimate might occasionally affect the test of bare-tether collection, and how introducing the battery mode in some primary cycles, for an additional current measurement, could obviate the need for a temperature estimate. We also show how to test bare-tether collection by estimating the orbit-decay rate from measurements of cathodic current for the shunt and battery modes of secondary cycles.

  5. Counting Pakistanis in the Middle East: problems and policy implications.

    PubMed

    Stahl, C W; Farooq-i-azam

    1990-01-01

    "Using Pakistan as a case study, this article focuses on the difficulties of measuring both the outflow of workers over time and the stock abroad at any particular time. The various estimates of the number of Pakistanis in the Middle East are evaluated and an alternative estimate is provided based on hitherto unused data from two major surveys of returning workers. The alternative estimate differs substantially from the others, the difference being attributed principally to clandestine worker immigration. The concluding section discusses the policy implications of inaccurate information about the numbers of workers abroad and the likely effects of the current Persian Gulf crisis on Pakistan's economy." excerpt

  6. Revisiting the social cost of carbon

    PubMed Central

    Nordhaus, William D.

    2017-01-01

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study provides updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares these estimates with those from other sources. PMID:28143934
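    The central-case growth path implies a simple compounding rule for the SCC in any year, sketched below.

```python
def scc(year, base=31.0, base_year=2015, growth=0.03):
    """SCC in 2010 US$ per ton CO2, growing 3%/yr from the 2015 base."""
    return base * (1 + growth) ** (year - base_year)

print(f"SCC in 2050: ${scc(2050):.0f} per ton CO2")
```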

  7. Counterbalance of cutting force for advanced milling operations

    NASA Astrophysics Data System (ADS)

    Tsai, Nan-Chyuan; Shih, Li-Wen; Lee, Rong-Mao

    2010-05-01

    The goal of this work is to concurrently counterbalance the dynamic cutting force and regulate the spindle position deviation under various milling conditions by integrating an active magnetic bearing (AMB), a fuzzy logic algorithm and an adaptive self-tuning feedback loop. Since the dynamics of the milling system are largely determined by a few operating conditions, such as spindle speed, cut depth and feedrate, the dynamic model of the cutting process is more appropriately constructed from experiments than from a theoretical approach. The experimental data, for both idle running and cutting, are used to establish a database of milling dynamics so that the system parameters can be estimated on-line with the proposed fuzzy logic algorithm once a cutting operation is engaged. Based on the estimated milling system model and the preset operating conditions, i.e., spindle speed, cut depth and feedrate, the current cutting force can be numerically estimated. Once the current cutting force can be estimated in real time, the corresponding compensation force can be exerted by the equipped AMB to counterbalance the cutting force, in addition to the spindle position regulation by feedback of the spindle position. Because the magnetic force is nonlinear with respect to the applied electric current and the air gap, the characteristics of the employed AMB are also investigated experimentally, and a nonlinear mathematical model, in terms of the air gap between spindle and electromagnetic pole and the coil current, is developed. Finally, experimental simulations of realistic milling are presented to verify the efficacy of the fuzzy controller for spindle position regulation and the capability of dynamic cutting force counterbalancing.
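    A common form for the nonlinear AMB force model mentioned above is F = k(i/g)² per electromagnet, with a differential coil pair giving the net force; the constants below are invented for illustration, not the identified values.

```python
def amb_net_force(i_control, g_displacement, k=2.0e-5, i0=2.0, g0=5.0e-4):
    """Net force (N) from a differential pair of electromagnets."""
    upper = k * ((i0 + i_control) / (g0 - g_displacement)) ** 2
    lower = k * ((i0 - i_control) / (g0 + g_displacement)) ** 2
    return upper - lower

print(f"{amb_net_force(0.5, 1e-5):.0f} N of compensation force")
```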

  8. Assessment of Energy Production Potential from Ocean Currents along the United States Coastline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haas, Kevin

    Increasing energy consumption and depleting reserves of fossil fuels have resulted in growing interest in alternative renewable energy from the ocean. Ocean currents are an alternative source of clean energy due to their inherent reliability, persistence and sustainability. General ocean circulations exist in the form of large rotating ocean gyres, and feature extremely rapid current flow in the western boundaries due to the Coriolis Effect. The Gulf Stream system is formed by the western boundary current of the North Atlantic Ocean that flows along the east coastline of the United States, and therefore is of particular interest as a potential energy resource for the United States. This project created a national database of ocean current energy resources to help advance awareness and market penetration in ocean current energy resource assessment. The database, consisting of joint velocity magnitude and direction probability histograms, was created from data produced by seven years of numerical model simulations. The accuracy of the database was evaluated by ORNL's independent validation effort, documented in a separate report. Estimates of the total theoretical power resource contained in the ocean currents were calculated utilizing two separate approaches. Firstly, the theoretical energy balance in the Gulf Stream system was examined using the two-dimensional ocean circulation equations based on the assumptions of the Stommel model for subtropical gyres, with the quasi-geostrophic balance between pressure gradient, Coriolis force, wind stress and friction driving the circulation. Parameters including water depth, natural dissipation rate and wind stress are calibrated in the model so that it reproduces reasonable flow properties, including volume flux and energy flux. To represent flow dissipation due to turbines, an additional turbine drag coefficient is formulated and included in the model. Secondly, to determine the reasonableness of the total power estimates from the Stommel model and to help determine the size and capacity of arrays necessary to extract the maximum theoretical power, further estimates of the available power, based on the distribution of the kinetic power density in the undisturbed flow, were completed. These used estimates of device spacing and scaling to sum up the total power that the devices would produce. The analysis has shown that, considering extraction over a region comprising the Florida Current portion of the Gulf Stream system, the average power dissipated ranges between 4-6 GW with a mean around 5.1 GW, corresponding to an average of approximately 45 TWh/yr. However, if the extraction area comprises the entire portion of the Gulf Stream within 200 miles of the US coastline from Florida to North Carolina, the average power dissipated becomes 18.6 GW or 163 TWh/yr. A web based GIS interface, http://www.oceancurrentpower.gatech.edu/, was developed for dissemination of the data. The website includes GIS layers of monthly and yearly mean ocean current velocity and power density for ocean currents along the entire coastline of the United States, as well as joint and marginal probability histograms for current velocities at a horizontal resolution of 4-7 km with 10-25 bins over depth. Various tools are provided for viewing, identifying, filtering and downloading the data.
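    The second estimation approach rests on the kinetic power density of the undisturbed flow, 0.5·ρ·|v|³, summed over assumed device cross-sections. A sketch under invented device parameters:

```python
import numpy as np

RHO_SEA = 1025.0                                  # kg/m^3

def power_density(speed_m_s):
    """Kinetic power density in W/m^2 of an undisturbed current."""
    return 0.5 * RHO_SEA * speed_m_s ** 3

speeds = np.array([1.0, 1.5, 2.0, 2.5])           # m/s, strong-current range
swept_area = 400.0                                # m^2 per device (assumed)
print(power_density(speeds) * swept_area / 1e6, "MW per device")
```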

  9. Adult current smoking: differences in definitions and prevalence estimates--NHIS and NSDUH, 2008.

    PubMed

    Ryan, Heather; Trosclair, Angela; Gfroerer, Joe

    2012-01-01

    To compare prevalence estimates and assess issues related to the measurement of adult cigarette smoking in the National Health Interview Survey (NHIS) and the National Survey on Drug Use and Health (NSDUH). 2008 data on current cigarette smoking and current daily cigarette smoking among adults ≥18 years were compared. The standard NHIS current smoking definition, which screens for lifetime smoking ≥100 cigarettes, was used. For NSDUH, both the standard current smoking definition, which does not screen, and a modified definition applying the NHIS current smoking definition (i.e., with screen) were used. NSDUH consistently yielded higher current cigarette smoking estimates than NHIS and lower daily smoking estimates. However, with use of the modified NSDUH current smoking definition, a notable number of subpopulation estimates became comparable between surveys. Younger adults and racial/ethnic minorities were most impacted by the lifetime smoking screen, with Hispanics being the most sensitive to differences in smoking variable definitions among all subgroups. Differences in current cigarette smoking definitions appear to have a greater impact on smoking estimates in some sub-populations than others. Survey mode differences may also limit intersurvey comparisons and trend analyses. Investigators are cautioned to use data most appropriate for their specific research questions.

  10. A Novel Strain-Based Method to Estimate Tire Conditions Using Fuzzy Logic for Intelligent Tires.

    PubMed

    Garcia-Pozuelo, Daniel; Olatunbosun, Oluremi; Yunta, Jorge; Yang, Xiaoguang; Diaz, Vicente

    2017-02-10

    The so-called intelligent tires are one of the most promising research fields for automotive engineers. These tires are equipped with sensors which provide information about vehicle dynamics. Up to now, commercial intelligent tires only provide information about inflation pressure, and their contribution to stability control systems is currently very limited. Nowadays, one of the major problems for intelligent tire development is how to embed feasible and low-cost sensors to obtain reliable information such as inflation pressure, vertical load or rolling speed. These parameters provide key information for characterizing vehicle dynamics. In this paper, we propose a novel algorithm based on fuzzy logic to estimate the mentioned parameters by means of a single strain-based system. Experimental tests have been carried out in order to prove the suitability and durability of the proposed on-board strain sensor system, as well as its low-cost advantages, and the accuracy of the obtained estimations by means of fuzzy logic.
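    As a toy illustration of the fuzzy-logic idea, the sketch below uses triangular membership functions over a normalized strain feature to vote for load classes, then defuzzifies by a weighted centroid. The breakpoints and load levels are invented, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_load(strain_feature):
    rules = [  # (membership, load class centre in N) -- illustrative values
        (tri(strain_feature, 0.0, 0.2, 0.5), 2000.0),   # "low load"
        (tri(strain_feature, 0.2, 0.5, 0.8), 4000.0),   # "medium load"
        (tri(strain_feature, 0.5, 0.8, 1.0), 6000.0),   # "high load"
    ]
    total = sum(w for w, _ in rules)
    return sum(w * v for w, v in rules) / total if total else None

print(f"{estimate_load(0.62):.0f} N")              # centroid defuzzification
```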

  11. Ventilation potential during the emissions survey in Toluca Valley, Mexico

    NASA Astrophysics Data System (ADS)

    Ruiz Angulo, A.; Peralta, O.; Jurado, O. E.; Ortinez, A.; Grutter de la Mora, M.; Rivera, C.; Gutierrez, W.; Gonzalez, E.

    2017-12-01

    During late spring and early summer, measurements of emissions and pollutants were carried out during a survey campaign at four different locations within the Toluca Valley. The current emissions inventory typically estimates the generation of pollutants from pre-estimated values representing an entire sector as a function of its activities; however, those factors are not always based on direct measurements. The emissions from the Toluca Valley are rather large, and they could affect the air quality of the Mexico City Valley. The exchange of air masses between these two valleys is not well understood; however, based on the measurements obtained during the three-month campaign, we examined the daily variability of the wind and found a clear mountain-valley breeze signal. The ventilation coefficient is estimated, and its correlations with the concentrations at the four locations and at a distant station in Mexico City are addressed in this work. Finally, we discuss the implications of the ventilation capacity for air quality in the system of valleys that includes Mexico City.
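    The ventilation coefficient referred to above is conventionally the product of mixing-layer height and the mean transport wind through that layer, so a one-line function suffices; the sample values are illustrative.

```python
def ventilation_coefficient(mixing_height_m, mean_wind_m_s):
    """Ventilation coefficient in m^2/s; higher means better pollutant export."""
    return mixing_height_m * mean_wind_m_s

print(ventilation_coefficient(2500.0, 4.0))   # 10000 m^2/s: ventilated afternoon
print(ventilation_coefficient(400.0, 1.0))    # 400 m^2/s: stagnant morning
```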

  13. Assessment of Solid Sorbent Systems for Post-Combustion Carbon Dioxide Capture at Coal-Fired Power Plants

    NASA Astrophysics Data System (ADS)

    Glier, Justin C.

    In an effort to lower future CO2 emissions, a wide range of technologies are being developed to scrub CO2 from the flue gases of fossil fuel-based electric power and industrial plants. This thesis models one of several early-stage post-combustion CO2 capture technologies, a solid sorbent-based CO2 capture process, and presents performance and cost estimates for this system on pulverized coal power plants. Microsoft Excel was used in conjunction with AspenPlus modeling results and the Integrated Environmental Control Model to develop performance and cost estimates for the solid sorbent-based CO2 capture technology. A reduced-order model was also created to facilitate comparisons among multiple design scenarios. Assumptions about plant financing and utilization, as well as uncertainties in heat transfer and material design that affect heat exchanger and reactor design, were found to produce a wide range of cost estimates for solid sorbent-based systems. With uncertainties included, costs for a supercritical power plant with solid sorbent-based CO2 capture ranged from $167 to $533 per megawatt-hour for a first-of-a-kind installation (all costs in constant 2011 US dollars) based on a 90% confidence interval, with a median cost of $209/MWh. Post-combustion solid sorbent-based CO2 capture technology is then evaluated in terms of its potential cost as a mature system, based on the historic experience that technologies improve with sequential iterations of the currently available system. Given the expected range of technological improvement in the capital and operating costs and efficiency of the power plant after 100 GW of cumulative worldwide experience, costs for a supercritical power plant with solid sorbent-based CO2 capture were found to range from $118 to $189 per megawatt-hour, with a nominal value of $163 per megawatt-hour. These results suggest that the solid sorbent-based system will not be competitive with currently available liquid amine systems in the absence of significant new improvements in solid sorbent properties and in process system design to reduce the heat exchange surface area in the regenerator and cross-flow heat exchanger. Finally, the importance of these estimates for policy makers is discussed.
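
    The mature-technology figures above come from an experience-curve style analysis. A generic sketch of that idea is below; the 1 GW starting capacity and 5% learning rate are illustrative assumptions, not values from the thesis:

      import math

      def experience_curve_cost(c0, x0_gw, x_gw, learning_rate):
          # Each doubling of cumulative capacity cuts cost by learning_rate.
          b = -math.log2(1.0 - learning_rate)  # experience-curve exponent
          return c0 * (x_gw / x0_gw) ** (-b)

      # $209/MWh first-of-a-kind median from the thesis; the 1 GW starting
      # point and 5% learning rate are illustrative assumptions only.
      print("cost after 100 GW: $%.0f/MWh"
            % experience_curve_cost(209.0, 1.0, 100.0, 0.05))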

  14. Relationship and variation of qPCR and culturable enterococci estimates in ambient surface waters are predictable

    USGS Publications Warehouse

    Whitman, Richard L.; Ge, Zhongfu; Nevers, Meredith B.; Boehm, Alexandria B.; Chern, Eunice C.; Haugland, Richard A.; Lukasik, Ashley M.; Molina, Marirosa; Przybyla-Kelly, Kasia; Shively, Dawn A.; White, Emily M.; Zepp, Richard G.; Byappanahalli, Muruleedhara N.

    2010-01-01

    The quantitative polymerase chain reaction (qPCR) method provides rapid estimates of fecal indicator bacteria densities that can be useful in assessing water quality. Primarily because this method provides faster results than standard culture-based methods, the U.S. Environmental Protection Agency is currently considering its use as a basis for revised ambient water quality criteria. In anticipation of this possibility, we sought to examine the relationship between qPCR-based and culture-based estimates of enterococci in surface waters. Using data from several research groups, we compared enterococci estimates by the two methods in water samples collected from 37 sites across the United States. A consistent linear relationship between cell equivalents (CCE), based on the qPCR method, and colony-forming units (CFU), based on the traditional culture method, was statistically significant; qPCR estimates are most reliable at higher densities (log10 CFU > 2.0/100 mL), while uncertainty increases at lower CFU values. It was further noted that the relative error in replicated qPCR estimates was generally higher than that in replicated culture counts, even at relatively high target levels, suggesting a greater need for replicated analyses in the qPCR method to reduce relative error. Further studies evaluating the relationship between culture and qPCR should take into account analytical uncertainty as well as potential differences in results of these methods that may arise from sample variability, different sources of pollution, and environmental factors.
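
    A minimal sketch of the kind of log-log comparison described, on synthetic data (the slope, intercept, and noise model are invented for illustration, not the study's values):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Hypothetical paired measurements (not the study data): log10 CFU/100 mL
      # from culture, log10 CCE/100 mL from qPCR, noisier at low densities.
      log_cfu = rng.uniform(0.5, 4.0, 200)
      noise = rng.normal(0.0, 1.0, 200) * (0.8 - 0.15 * log_cfu)
      log_cce = 0.6 + 0.85 * log_cfu + noise

      res = stats.linregress(log_cfu, log_cce)
      print("slope=%.2f intercept=%.2f r=%.2f p=%.3g"
            % (res.slope, res.intercept, res.rvalue, res.pvalue))

      # Residual spread shrinks at higher densities, mirroring the finding
      # that qPCR estimates are most reliable when log10 CFU > 2.0/100 mL.
      resid = log_cce - (res.intercept + res.slope * log_cfu)
      for label, mask in (("log10 CFU > 2.0", log_cfu > 2.0),
                          ("log10 CFU <= 2.0", log_cfu <= 2.0)):
          print("residual SD, %s: %.2f" % (label, resid[mask].std()))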

  15. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.

    PubMed

    Chung, SungWon; Lu, Ying; Henry, Roland G

    2006-11-01

    Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method used for DTI analysis (repetition bootstrap) performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, the wild bootstrap was proposed, which can be applied without multiple acquisitions. In this paper, two new approaches are introduced, called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and therefore better estimates the standard errors. Like the wild bootstrap, residual bootstrap is applicable to a single-acquisition scheme; both are based on regression residuals (so-called model-based resampling). Residual bootstrap rests on the assumption that the non-constant variance of the measured diffusion-attenuated signals can be modeled, which is the same assumption behind the widely used weighted least squares solution of the diffusion tensor. The performance of these bootstrap approaches was compared in terms of bias, variance, and overall error of the bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, enabling estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help in choosing the optimal approach for estimating uncertainties, which can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimization of DTI methods.
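
    As a rough illustration of the residual bootstrap idea (the homoscedastic textbook version, not the variance-modeled scheme the paper develops for DTI), here are standard errors for an ordinary least squares fit via refitting to fitted values plus resampled residuals:

      import numpy as np

      def residual_bootstrap_se(X, y, n_boot=1000, seed=0):
          # Standard errors of OLS coefficients via residual bootstrap:
          # refit the model to fitted values plus resampled residuals.
          rng = np.random.default_rng(seed)
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          y_hat = X @ beta
          resid = y - y_hat
          boots = np.empty((n_boot, X.shape[1]))
          for i in range(n_boot):
              y_star = y_hat + rng.choice(resid, size=y.size, replace=True)
              boots[i], *_ = np.linalg.lstsq(X, y_star, rcond=None)
          return boots.std(axis=0, ddof=1)

      # Toy straight-line fit; a DTI fit would instead regress log-signal
      # on the diffusion-gradient b-matrix terms.
      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 1.0, 30)
      X = np.column_stack([np.ones_like(x), x])
      y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)
      print("bootstrap standard errors:", residual_bootstrap_se(X, y))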

  16. An evaluation of risk estimation procedures for mixtures of carcinogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, J.S.; Chen, J.J.

    1999-12-01

    The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on combining information from several available single-compound studies. The current practice of directly summing the upper-bound risk estimates of the individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of the individual carcinogens. The Gaylor-Chen procedure was derived under the assumption that the distributions of the individual risk estimates are normal. In this paper the authors evaluate the Gaylor-Chen approach in terms of the coverage of the upper confidence limits on the true risks of the individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, the Gaylor-Chen approach should perform well. However, it can be conservative or anti-conservative if some or all of the individual upper confidence limit estimates are conservative or anti-conservative.
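
    A sketch of the procedure as summarized above, on my reading of the abstract (under normality, the per-chemical margins between upper limits and central estimates combine in quadrature rather than additively; all risk numbers below are hypothetical):

      import math

      def gaylor_chen_upper_bound(central, upper):
          # Upper bound on the total risk of a mixture: sum the central
          # estimates, then add the per-chemical margins (upper limit minus
          # central estimate) combined in quadrature, as the normality
          # assumption permits.
          margins = [u - c for c, u in zip(central, upper)]
          return sum(central) + math.sqrt(sum(m * m for m in margins))

      central = [1e-6, 4e-6, 2e-6]  # hypothetical central risk estimates
      upper = [5e-6, 9e-6, 6e-6]    # hypothetical matching upper confidence limits

      print("naive sum of upper limits: %.2e" % sum(upper))
      print("Gaylor-Chen upper bound:   %.2e" % gaylor_chen_upper_bound(central, upper))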

  17. Estimating wildfire behavior and effects

    Treesearch

    Frank A. Albini

    1976-01-01

    This paper presents a brief survey of the research literature on wildfire behavior and effects and assembles formulae and graphical computation aids based on selected theoretical and empirical models. The uses of mathematical fire behavior models are discussed, and the general capabilities and limitations of currently available models are outlined.

  18. Individual differences in transcranial electrical stimulation current density

    PubMed Central

    Russell, Michael J; Goodman, Theodore; Pierson, Ronald; Shepherd, Shane; Wang, Qiang; Groshong, Bennett; Wiley, David F

    2013-01-01

    Transcranial electrical stimulation (TCES) is effective in treating many conditions, but it has not been possible to accurately forecast current density within the complex anatomy of a given subject's head. We sought to predict and verify TCES current densities and determine the variability of these current distributions in patient-specific models based on magnetic resonance imaging (MRI) data. Two experiments were performed. The first estimated conductivity from MRIs and compared the resulting current density estimates against actual measurements from the scalp surface of 3 subjects. In the second, virtual electrodes were placed on the scalps of 18 subjects to model the current densities produced by 2 mA of virtually applied stimulation; this procedure was repeated for 4 electrode locations, and current densities were then calculated for 75 brain regions. Comparison of modeled and measured external current in experiment 1 yielded a correlation of r = .93. In experiment 2, modeled individual differences were greatest near the electrodes (ten-fold differences were common), but simulated current was found in all regions of the brain. Sites distant from the electrodes (e.g., the hypothalamus) typically showed two-fold individual differences. MRI-based modeling can effectively predict current densities in individual brains, and significant variation occurs between subjects with the same applied electrode configuration. Individualized MRI-based modeling should be considered in place of the 10-20 system when accurate TCES is needed. PMID:24285948
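
    The paper's patient-specific MRI modeling is far more elaborate, but the basic physics it exploits can be illustrated with a one-dimensional layered-tissue sketch in which the current density J = I/A follows from series tissue resistances; all thicknesses and conductivities below are generic placeholder values, not the paper's:

      def current_density(layers, area_m2, applied_v):
          # One-dimensional series-resistance model: R = sum(t / (sigma * A)),
          # I = V / R, and J = I / A is uniform through the stack.
          resistance = sum(t / (sigma * area_m2) for t, sigma in layers)
          return applied_v / resistance / area_m2  # A/m^2

      # Placeholder tissue stacks (thickness m, conductivity S/m):
      # scalp, skull, brain -- subject B differs only in skull thickness,
      # which alone changes the delivered current density noticeably.
      subject_a = [(0.004, 0.43), (0.005, 0.010), (0.060, 0.33)]
      subject_b = [(0.004, 0.43), (0.009, 0.010), (0.060, 0.33)]

      for name, stack in (("A", subject_a), ("B", subject_b)):
          print("subject %s: J = %.2f A/m^2"
                % (name, current_density(stack, 25e-4, 2.0)))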

  19. Direct estimates of low-level radiation risks of lung cancer at two NRC-compliant nuclear installations: why are the new risk estimates 20 to 200 times the old official estimates?

    PubMed

    Bross, I D; Driscoll, D L

    1981-01-01

    An official report on the health hazards to nuclear submarine workers at the Portsmouth Naval Shipyard (PNS), who were exposed to low-level ionizing radiation, was based on a casual inspection of the data and not on statistical analyses of the dosage-response relationships. When these analyses are done, serious hazards from lung cancer and other causes of death are shown. As a result of the recent studies on nuclear workers, the new risk estimates have been found to be much higher than the official estimates currently used in setting NRC permissible levels. The official BEIR estimates are about one lung cancer death per year per million persons per rem; the PNS data show 189 lung cancer deaths per year per million persons per rem, nearly 200 times the official figure.

  20. Adaptive torque estimation of robot joint with harmonic drive transmission

    NASA Astrophysics Data System (ADS)

    Shi, Zhiguo; Li, Yuankai; Liu, Guangjun

    2017-11-01

    Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by load variation at the joint. In this paper, a torque estimation method whose robustness and optimality adjust to load variation is proposed for a robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach adapts the optimality and robustness of the torque estimation filter to load variation by self-tuning the filter gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current, to tolerate modeling error and to drive load-dependent switching of the filtering mode. The proposed joint torque estimation method has been studied experimentally in comparison with a commercial torque sensor and two representative filtering methods, and the results demonstrate the effectiveness of the proposed technique.
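
    The abstract gives only the outline of RARKF; the sketch below shows the general flavor of an adaptive robust Kalman filter that inflates its measurement noise when the innovation fails a consistency test. The scalar model, thresholds, and the use of an innovation gate rather than a motor-current-driven redundant factor are all simplifications, not the authors' algorithm:

      import numpy as np

      def adaptive_kf(z, q=1e-4, r=1e-2, gate=3.0, inflate=10.0):
          # Scalar random-walk Kalman filter that inflates its measurement
          # noise when the innovation fails a chi-square-style gate test --
          # a crude stand-in for RARKF's optimal/robust mode switch.
          x, p = z[0], 1.0
          out = []
          for zk in z:
              p += q                       # predict step (random-walk state)
              nu = zk - x                  # innovation
              s = p + r                    # innovation variance, optimal mode
              rk = r * inflate if nu * nu > gate ** 2 * s else r  # robust mode
              k = p / (p + rk)             # Kalman gain
              x += k * nu
              p *= 1.0 - k
              out.append(x)
          return np.array(out)

      # Hypothetical torque signal with a load step plus sensor noise.
      rng = np.random.default_rng(2)
      true = np.where(np.arange(400) < 200, 1.0, 3.0)
      z = true + rng.normal(0.0, 0.1, true.size)
      est = adaptive_kf(z)
      print("final estimate: %.2f N*m (true value 3.00)" % est[-1])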
