Sample records for previous estimates based

  1. A revised load estimation procedure for the Susquehanna, Potomac, Patuxent, and Choptank rivers

    USGS Publications Warehouse

    Yochum, Steven E.

    2000-01-01

    The U.S. Geological Survey's Chesapeake Bay River Input Program has updated the nutrient and suspended-sediment load database for the Susquehanna, Potomac, Patuxent, and Choptank Rivers using a multiple-window, center-estimate regression methodology. The revised method optimizes the seven-parameter regression approach that the program has used historically. It estimates load using the fifth, or center, year of a sliding 9-year window. Each year a new model is run for each site and constituent, the most recent year is added, and the previous 4 years of estimates are updated. The fifth year in the 9-year window is considered the best estimate and is kept in the database. The last year of estimation shows the most change from the previous year's estimate, and this change approaches a minimum at the fifth year. Differences between loads computed using this revised methodology and the loads populating the historical database have been noted, but the load estimates do not typically change drastically. The database resulting from this revised methodology contains annual and monthly load estimates that are known with greater certainty than those in the previous load database.
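
A minimal Python sketch of the multiple-window, center-estimate scheme described above. The window mechanics follow the abstract; the per-window "model" is a deliberately trivial stand-in (a windowed mean), not the program's seven-parameter load regression.

```python
# Sketch: fit one model per sliding 9-year window, keep only the center-year
# estimate. The "model" here is a placeholder windowed mean.
def center_estimates(loads_by_year, window=9):
    """Keep only the center-year estimate of each sliding window."""
    years = sorted(loads_by_year)
    center = window // 2                      # 5th year of a 9-year window
    kept = {}
    for i in range(len(years) - window + 1):
        win = years[i:i + window]
        # Placeholder for "fit model on this window, predict the center year":
        estimate = sum(loads_by_year[y] for y in win) / window
        kept[win[center]] = estimate
    return kept

# Example: 12 years of annual loads -> center estimates for 4 interior years.
demo = {1990 + k: 100.0 + 3 * k for k in range(12)}
print(center_estimates(demo))
```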

  2. Pricing Medicare's diagnosis-related groups: Charges versus estimated costs

    PubMed Central

    Price, Kurt F.

    1989-01-01

    Hospital payments under Medicare's prospective payment system (PPS) are based on prices established for 474 diagnosis-related groups (DRGs). Previous analyses using 1981 data demonstrated that DRG prices based on charges alone differed little from prices calculated from estimated costs. Data for 1986 were used in this study to show that the differences between the two sets of DRG prices are now much larger than previously reported. If DRG prices were once again based on estimated costs instead of the current charge-based prices, payments would be significantly redistributed. PMID:10313356

  3. Can Nonexperimental Estimates Replicate Estimates Based on Random Assignment in Evaluations of School Choice? A Within-Study Comparison

    ERIC Educational Resources Information Center

    Bifulco, Robert

    2012-01-01

    The ability of nonexperimental estimators to match impact estimates derived from random assignment is examined using data from the evaluation of two interdistrict magnet schools. As in previous within-study comparisons, nonexperimental estimates differ from estimates based on random assignment when nonexperimental estimators are implemented…

  4. Breast and ovarian cancer risks to carriers of the BRCA1 5382insC and 185delAG and BRCA2 6174delT mutations: a combined analysis of 22 population based studies

    PubMed Central

    Antoniou, A; Pharoah, P; Narod, S; Risch, H; Eyfjord, J; Hopper, J; Olsson, H; Johannsson, O; Borg, A; Pasini, B; Radice, P; Manoukian, S; Eccles, D; Tang, N; Olah, E; Anton-Culver, H; Warner, E; Lubinski, J; Gronwald, J; Gorski, B; Tulinius, H; Thorlacius, S; Eerola, H; Nevanlinna, H; Syrjakoski, K; Kallioniemi, O; Thompson, D; Evans, C; Peto, J; Lalloo, F; Evans, D; Easton, D

    2005-01-01

    A recent report estimated the breast cancer risks in carriers of the three Ashkenazi founder mutations to be higher than previously published estimates derived from population based studies. In an attempt to confirm this, the breast and ovarian cancer risks associated with the three Ashkenazi founder mutations were estimated using families included in a previous meta-analysis of population based studies. The estimated breast cancer risks for each of the founder BRCA1 and BRCA2 mutations were similar to the corresponding estimates based on all BRCA1 or BRCA2 mutations in the meta-analysis. These estimates appear to be consistent with the observed prevalence of the mutations in the Ashkenazi Jewish population. PMID:15994883

  5. Dynamic estimator for determining operating conditions in an internal combustion engine

    DOEpatents

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-01-05

    Methods and systems are provided for estimating engine performance information for a combustion cycle of an internal combustion engine. Estimated performance information for a previous combustion cycle is retrieved from memory. The estimated performance information includes an estimated value of at least one engine performance variable. Actuator settings applied to engine actuators are also received. The performance information for the current combustion cycle is then estimated based, at least in part, on the estimated performance information for the previous combustion cycle and the actuator settings applied during the previous combustion cycle. The estimated performance information for the current combustion cycle is then stored to the memory to be used in estimating performance information for a subsequent combustion cycle.
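
A sketch of the cycle-to-cycle loop the patent abstract describes: retrieve the previous cycle's stored estimate, combine it with the previous cycle's actuator settings, and store the result for the next cycle. The update rule and actuator names here are invented placeholders, not the patented method.

```python
# Hypothetical one-variable update: blend the previous estimate with a value
# implied by the previous cycle's actuator settings.
def next_cycle_estimate(prev_estimate, prev_actuators, gain=0.5):
    implied = prev_actuators["fuel_mass"] * prev_actuators["spark_advance"]
    return prev_estimate + gain * (implied - prev_estimate)

memory = 10.0  # stored estimate from the previous combustion cycle
for cycle in range(5):
    actuators = {"fuel_mass": 1.0 + 0.1 * cycle, "spark_advance": 12.0}
    memory = next_cycle_estimate(memory, actuators)  # estimate, then store
    print(cycle, round(memory, 3))
```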

  6. Projected 1981 exposure estimates using iterative proportional fitting

    DOT National Transportation Integrated Search

    1985-10-01

    1981 VMT estimates categorized by eight driver, vehicle, and environmental variables are produced. These 1981 estimates are produced using analytical methods developed in a previous report. The estimates are based on 1977 NPTS data (the latest ...

  7. ESTIMATION OF THE RATE OF VOC EMISSIONS FROM SOLVENT-BASED INDOOR COATING MATERIALS BASED ON PRODUCT FORMULATION

    EPA Science Inventory

    Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...

  8. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE because they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and that calculated from marginal distributions. This study provides, for the first time, a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation worldwide.
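
To make the copula ingredient concrete, here is a minimal sketch of a joint exceedance probability under a time-varying dependence parameter. The Gumbel copula and the linear trend in its parameter are illustrative assumptions, not the fitted NSCOBE model.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula CDF C(u, v); theta >= 1 controls dependence strength."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) = 1 - u - v + C(u, v)."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)

for year in (0, 27, 54):                 # 54-year record, as in the case study
    theta = 1.5 + 0.01 * year            # assumed linear trend in dependence
    # Joint exceedance of the marginal 99th percentiles of peak and volume:
    print(year, round(joint_exceedance(0.99, 0.99, theta), 5))
```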

  9. Covariance-based direction-of-arrival estimation of wideband coherent chirp signals via sparse representation.

    PubMed

    Sha, Zhichao; Liu, Zhengmeng; Huang, Zhitao; Zhou, Yiyu

    2013-08-29

    This paper addresses the problem of direction-of-arrival (DOA) estimation of multiple wideband coherent chirp signals, and a new method is proposed. The new method is based on signal component analysis of the array output covariance, instead of the complicated time-frequency analysis used in the previous literature, and is thus more compact and effectively avoids possible signal energy loss during the hyper-processes. Moreover, a priori knowledge of the number of signals is no longer a necessity for DOA estimation in the new method. Simulation results demonstrate the performance superiority of the new method over previous ones.

  10. Asteroid mass estimation with Markov-chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Siltala, L.; Granvik, M.

    2017-09-01

    We have developed a new Markov-chain Monte Carlo-based algorithm for asteroid mass estimation based on mutual encounters and tested it for several different asteroids. Our results are in line with previous literature values but suggest that uncertainties of prior estimates may be misleading as a consequence of using linearized methods.
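
At its core, a sampler of the kind the abstract describes is a random-walk Metropolis loop over the unknown mass. A minimal sketch follows; the Gaussian "likelihood" is a stand-in for the real one, which scores how well perturbed orbits reproduce the observed astrometry of mutual encounters.

```python
import math
import random

def log_likelihood(mass, obs=3.0e-12, sigma=0.5e-12):
    # Toy stand-in for the orbit-fitting likelihood (masses in solar masses).
    return -0.5 * ((mass - obs) / sigma) ** 2

def metropolis(n_steps=10000, step=0.2e-12, m0=1.0e-12):
    chain, m, ll = [], m0, log_likelihood(m0)
    for _ in range(n_steps):
        proposal = m + random.gauss(0.0, step)
        ll_new = log_likelihood(proposal)
        if proposal > 0 and math.log(random.random()) < ll_new - ll:
            m, ll = proposal, ll_new           # accept the proposed mass
        chain.append(m)
    return chain

samples = metropolis()[2000:]                  # drop burn-in
print(sum(samples) / len(samples))             # posterior-mean mass
```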

  11. A Coalescent-Based Estimator of Admixture From DNA Sequences

    PubMed Central

    Wang, Jinliang

    2006-01-01

    A variety of estimators have been developed to use genetic marker information in inferring the admixture proportions (parental contributions) of a hybrid population. The majority of these estimators used allele frequency data, ignored molecular information that is available in markers such as microsatellites and DNA sequences, and assumed that no mutations have occurred since the admixture event. As a result, these estimators may fail to deliver an estimate or give rather poor estimates when admixture is ancient and thus mutations are not negligible. A previous molecular estimator based its inference of admixture proportions on the average coalescent times between pairs of genes taken from within and between populations. In this article I propose an estimator that considers the entire genealogy of all of the sampled genes and infers admixture proportions from the numbers of segregating sites in DNA sequence samples. By considering the genealogy of all sequences rather than pairs of sequences, this new estimator also allows the joint estimation of other interesting parameters in the admixture model, such as admixture time, divergence time, population size, and mutation rate. Comparative analyses of simulated data indicate that the new coalescent estimator generally yields better estimates of admixture proportions than the previous molecular estimator, especially when the parental populations are not highly differentiated. It also gives reasonably accurate estimates of other admixture parameters. A human mtDNA sequence data set was analyzed to demonstrate the method, and the analysis results are discussed and compared with those from previous studies. PMID:16624918

  12. Burden of typhoid fever in low-income and middle-income countries: a systematic, literature-based update with risk-factor adjustment.

    PubMed

    Mogasale, Vittal; Maskery, Brian; Ochiai, R Leon; Lee, Jung Seok; Mogasale, Vijayalaxmi V; Ramani, Enusa; Kim, Young Eun; Park, Jin Kyung; Wierzba, Thomas F

    2014-10-01

    Lack of access to safe water is an important risk factor for typhoid fever, yet risk-level heterogeneity is unaccounted for in previous global burden estimates. Since WHO has recommended risk-based use of typhoid polysaccharide vaccine, we revisited the burden of typhoid fever in low-income and middle-income countries (LMICs) after adjusting for water-related risk. We estimated the typhoid disease burden from studies done in LMICs based on blood-culture-confirmed incidence rates applied to the 2010 population, after correcting for operational issues related to surveillance, limitations of diagnostic tests, and water-related risk. We derived incidence estimates, correction factors, and mortality estimates from systematic literature reviews. We did scenario analyses for risk factors, diagnostic sensitivity, and case fatality rates, accounting for the uncertainty in these estimates, and compared them with previous disease burden estimates. The estimated number of typhoid fever cases in LMICs in 2010 after adjusting for water-related risk was 11.9 million (95% CI 9.9-14.7 million) cases with 129,000 (75,000-208,000) deaths. By comparison, the estimated risk-unadjusted burden was 20.6 million (17.5-24.2 million) cases and 223,000 (131,000-344,000) deaths. Scenario analyses indicated that the risk-factor adjustment and the updated diagnostic test correction factor derived from systematic literature reviews were the drivers of differences between the current estimate and past estimates. The risk-adjusted typhoid fever burden estimate was more conservative than previous estimates. However, by distinguishing the risk differences, it will allow assessment of effects at the population level and will facilitate cost-effectiveness calculations for risk-based vaccination strategies for a future typhoid conjugate vaccine. Copyright © 2014 Mogasale et al. Open Access article distributed under the terms of CC BY-NC-SA.
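
The adjustment logic reduces to multiplying a crude incidence by correction factors before applying it to the population. A toy sketch with entirely illustrative numbers (none are the paper's):

```python
# Back-of-envelope burden arithmetic; every value below is illustrative.
population_lmic = 5.0e9        # LMIC population at risk, 2010 (toy)
crude_incidence = 2.0e-3       # blood-culture-confirmed cases/person-year (toy)
surveillance_cf = 1.3          # correction for surveillance gaps (toy)
sensitivity_cf = 1.6           # correction for blood-culture sensitivity (toy)
risk_adjustment = 0.55         # fraction retained after water-risk adjustment (toy)

cases = (population_lmic * crude_incidence * surveillance_cf
         * sensitivity_cf * risk_adjustment)
print(f"{cases / 1e6:.1f} million risk-adjusted cases")
```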

  13. Reevaluation of mid-Pliocene North Atlantic sea surface temperatures

    USGS Publications Warehouse

    Robinson, Marci M.; Dowsett, Harry J.; Dwyer, Gary S.; Lawrence, Kira T.

    2008-01-01

    Multiproxy temperature estimation requires careful attention to biological, chemical, physical, temporal, and calibration differences of each proxy and paleothermometry method. We evaluated mid-Pliocene sea surface temperature (SST) estimates from multiple proxies at Deep Sea Drilling Project Holes 552A, 609B, 607, and 606, transecting the North Atlantic Drift. SST estimates derived from faunal assemblages, foraminifer Mg/Ca, and alkenone unsaturation indices showed strong agreement at Holes 552A, 607, and 606 once differences in calibration, depth, and seasonality were addressed. Abundant extinct species and/or an unrecognized productivity signal in the faunal assemblage at Hole 609B resulted in exaggerated faunal-based SST estimates but did not affect alkenone-derived or Mg/Ca–derived estimates. Multiproxy mid-Pliocene North Atlantic SST estimates corroborate previous studies documenting high-latitude mid-Pliocene warmth and refine previous faunal-based estimates affected by environmental factors other than temperature. Multiproxy investigations will aid SST estimation in high-latitude areas sensitive to climate change and currently underrepresented in SST reconstructions.

  14. Revisiting the global surface energy budgets with maximum-entropy-production model of surface heat fluxes

    NASA Astrophysics Data System (ADS)

    Huang, Shih-Yu; Deng, Yi; Wang, Jingfeng

    2017-09-01

    The maximum-entropy-production (MEP) model of surface heat fluxes, based on contemporary non-equilibrium thermodynamics, information theory, and atmospheric turbulence theory, is used to re-estimate the global surface heat fluxes. The surface fluxes predicted by the MEP model automatically balance the surface energy budgets at all time and space scales without explicit use of near-surface temperature and moisture gradients, wind speed, or surface roughness data. The new MEP-based global annual mean fluxes over the land surface, computed from surface radiation and temperature data from the National Aeronautics and Space Administration Clouds and the Earth's Radiant Energy System (NASA CERES), supplemented by surface specific humidity data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), agree closely with previous estimates. The new estimate of ocean evaporation, which does not use the MERRA reanalysis data as model input, is lower than previous estimates, while the new estimate of ocean sensible heat flux is higher than previously reported. The MEP model also produces the first global map of ocean surface heat flux, which is not available from existing global reanalysis products.

  15. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman filter (OTKF) to produce estimates of both unmeasured engine parameters and the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real time; this reduces the estimation errors associated with linearization, and a Kalman filter built around such a nonlinear model is referred to as an extended Kalman filter. The EKF approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
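
A minimal numpy sketch of one EKF predict/update step, with the nonlinear model and its Jacobians passed in. This is the textbook filter applied to a toy one-state system, not the C-MAPSS40k implementation.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    # Predict through the nonlinear model f with Jacobian F.
    x_pred = f(x, u)
    P_pred = F(x, u) @ P @ F(x, u).T + Q
    # Update with measurement z through h and its Jacobian H.
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new

# Toy 1-state example: x' = x + u - 0.1*x**2, measured directly.
f = lambda x, u: x + u - 0.1 * x**2
F = lambda x, u: np.array([[1.0 - 0.2 * x[0]]])
h = lambda x: x
H = lambda x: np.array([[1.0]])
x, P = np.array([0.5]), np.array([[1.0]])
x, P = ekf_step(x, P, 0.1, np.array([0.62]), f, h, F, H,
                Q=np.array([[0.01]]), R=np.array([[0.04]]))
print(x, P)
```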

  16. Robust Tracking of Small Displacements with a Bayesian Estimator

    PubMed Central

    Dumont, Douglas M.; Byram, Brett C.

    2016-01-01

    Radiation-force-based elasticity imaging describes a group of techniques that use acoustic radiation force (ARF) to displace tissue in order to obtain qualitative or quantitative measurements of tissue properties. Because ARF-induced displacements are on the order of micrometers, tracking these displacements in vivo can be challenging. Previously, it has been shown that Bayesian-based estimation can overcome some of the limitations of a traditional displacement estimator like normalized cross-correlation (NCC). In this work, we describe a Bayesian framework that combines a generalized Gaussian-Markov random field (GGMRF) prior with an automated method for selecting the prior’s width. We then evaluate its performance in the context of tracking the micrometer-order displacements encountered in an ARF-based method like acoustic radiation force impulse (ARFI) imaging. The results show that bias, variance, and mean-square error performance vary with prior shape and width, and that an almost one order-of-magnitude reduction in mean-square error can be achieved by the estimator at the automatically-selected prior width. Lesion simulations show that the proposed estimator has a higher contrast-to-noise ratio but lower contrast than NCC, median-filtered NCC, and the previous Bayesian estimator, with a non-Gaussian prior shape having better lesion-edge resolution than a Gaussian prior. In vivo results from a cardiac, radiofrequency ablation ARFI imaging dataset show quantitative improvements in lesion contrast-to-noise ratio over NCC as well as the previous Bayesian estimator. PMID:26529761
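
For reference, the baseline the Bayesian estimator is compared against looks like this: a normalized-cross-correlation (NCC) search for the lag that best aligns a reference kernel with a search window. This sketch is integer-lag only; real trackers add subsample interpolation.

```python
import numpy as np

def ncc_displacement(ref, search):
    """Integer-sample lag of `ref` within the longer `search` signal."""
    n, best_lag, best_rho = len(ref), 0, -np.inf
    r = (ref - ref.mean()) / ref.std()
    for lag in range(len(search) - n + 1):
        seg = search[lag:lag + n]
        s = (seg - seg.mean()) / seg.std()
        rho = float(np.mean(r * s))            # normalized correlation
        if rho > best_rho:
            best_lag, best_rho = lag, rho
    return best_lag, best_rho

rng = np.random.default_rng(0)
a = rng.standard_normal(64)
shifted = np.concatenate([rng.standard_normal(5), a]) \
          + 0.05 * rng.standard_normal(69)     # a displaced, noisy copy
print(ncc_displacement(a, shifted))            # expect lag 5
```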

  17. DARK MATTER MASS FRACTION IN LENS GALAXIES: NEW ESTIMATES FROM MICROLENSING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiménez-Vicente, J.; Mediavilla, E.; Kochanek, C. S.

    2015-02-01

    We present a joint estimate of the stellar/dark matter mass fraction in lens galaxies and the average size of the accretion disk of lensed quasars based on microlensing measurements of 27 quasar image pairs seen through 19 lens galaxies. The Bayesian estimate for the fraction of the surface mass density in the form of stars is α = 0.21 ± 0.14 near the Einstein radius of the lenses (∼1-2 effective radii). The estimate for the average accretion disk size is R_1/2 = 7.9 (+3.8/−2.6) √(M/0.3 M_⊙) light days. The fraction of mass in stars at these radii is significantly larger than previous estimates from microlensing studies assuming quasars were point-like. The corresponding local dark matter fraction of 79% is in good agreement with other estimates based on strong lensing or kinematics. The size of the accretion disk inferred in the present study is slightly larger than previous estimates.

  18. Effect of previous history of cancer on survival of patients with a second cancer of the head and neck.

    PubMed

    Jégu, Jérémie; Belot, Aurélien; Borel, Christian; Daubisse-Marliac, Laetitia; Trétarre, Brigitte; Ganry, Olivier; Guizard, Anne-Valérie; Bara, Simona; Troussard, Xavier; Bouvier, Véronique; Woronoff, Anne-Sophie; Colonna, Marc; Velten, Michel

    2015-05-01

    To provide head and neck squamous cell carcinoma (HNSCC) survival estimates with respect to patients' previous history of cancer, data from ten French population-based cancer registries were used to establish a cohort of all male patients presenting with a HNSCC diagnosed between 1989 and 2004. Vital status was updated until December 31, 2007. The 5-year overall and net survival estimates were assessed using the Kaplan-Meier and Pohar-Perme estimators, respectively. Multivariate Cox regression models were used to assess the effect of cancer history adjusted for age and year of HNSCC diagnosis. Among the cases of HNSCC, 5553 were localized in the oral cavity, 3646 in the oropharynx, 3793 in the hypopharynx, and 4550 in the larynx. Depending on the HNSCC site, 11.0% to 16.8% of patients presented with a previous history of cancer. Overall and net survival were closely tied to the presence or absence of a previous cancer. For example, for carcinoma of the oral cavity, the five-year overall survival was 14.0%, 5.9%, and 36.7% in cases of previous lung cancer, previous oesophageal cancer, or no cancer history, respectively. Multivariate analyses showed that previous history of cancer was a prognostic factor independent of age and year of diagnosis (p<.001). Previous history of cancer is strongly associated with survival among HNSCC patients. Survival estimates based on patients' previous history of cancer will enable clinicians to assess more precisely the prognosis of their patients with respect to this major comorbid condition. Copyright © 2015 Elsevier Ltd. All rights reserved.
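
The overall-survival figures quoted above come from Kaplan-Meier estimation; a minimal sketch follows (the Pohar-Perme net-survival estimator is more involved and is omitted).

```python
# Minimal Kaplan-Meier estimator: (time, event) pairs, event=0 means censored.
def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    at_risk, s, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, e in data if tt == t)  # deaths + censored at t
        if deaths:
            s *= (1.0 - deaths / at_risk)
            curve.append((t, s))
        at_risk -= removed
        i += removed
    return curve

# Toy cohort: months to death (1) or censoring (0).
print(kaplan_meier([6, 12, 12, 20, 33, 45], [1, 1, 0, 1, 0, 1]))
```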

  19. Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma

    NASA Astrophysics Data System (ADS)

    Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira

    2013-02-01

    A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach combines a gray-world-assumption-based illuminant color estimation method with a method using color gamuts. The former method, which we previously proposed, improved on the original method, which hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimates are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property (the average of opponent colors is achromatic) instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas of the color space. The approach proposed in this paper combines our previous method with one using high-chroma and low-chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High-chroma gamuts are used for adding appropriate colors to the original image, and low-chroma gamuts are used for narrowing down the illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area of the color space, the illuminant colors are accurately estimated, with a smaller average estimation error than the conventional method.
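
The gray-world estimate both methods build on is a one-liner: if the scene average is assumed achromatic, the mean RGB itself is the illuminant color. A sketch with a von Kries-style diagonal correction on a toy image with an assumed color cast:

```python
import numpy as np

def gray_world_illuminant(image):
    """image: H x W x 3 float array; returns a unit-norm RGB illuminant."""
    mean_rgb = image.reshape(-1, 3).mean(axis=0)
    return mean_rgb / np.linalg.norm(mean_rgb)

def correct(image, illuminant):
    # Diagonal (von Kries-style) correction toward a neutral illuminant.
    gains = illuminant.mean() / illuminant
    return np.clip(image * gains, 0.0, 1.0)

# Toy image with a reddish cast applied per channel.
img = np.random.default_rng(1).random((4, 4, 3)) * np.array([1.0, 0.8, 0.6])
est = gray_world_illuminant(img)
print(est, correct(img, est).reshape(-1, 3).mean(axis=0))  # means equalize
```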

  1. Two Intense Decades of 19th Century Whaling Precipitated Rapid Decline of Right Whales around New Zealand and East Australia

    PubMed Central

    Carroll, Emma L.; Jackson, Jennifer A.; Paton, David; Smith, Tim D.

    2014-01-01

    Right whales (Eubalaena spp.) were the focus of worldwide whaling activities from the 16th to the 20th century. During the first part of the 19th century, the southern right whale (E. australis) was heavily exploited on whaling grounds around New Zealand (NZ) and east Australia (EA). Here we build upon previous estimates of the total catch of NZ and EA right whales by improving and combining estimates from four different fisheries. Two fisheries have previously been considered: shore-based whaling in bays and ship-based whaling offshore. These were both improved by comparison with primary sources, and the American offshore whaling catch record was improved by using a sample of logbooks to produce a more accurate catch record in terms of location and species composition. Two fisheries had not been previously integrated into the NZ and EA catch series: ship-based whaling in bays and whaling in the 20th century. To investigate the previously unaddressed problem of offshore whalers operating in bays, we identified a subset of vessels likely to be operating in bays and read available extant logbooks. This allowed us to estimate the total likely catch from bay-whaling by offshore whalers from the number of vessel-seasons and whales killed per season: it ranged from 2,989 to 4,652 whales. The revised total estimate of 53,000 to 58,000 southern right whales killed is a considerable increase on the previous estimate of 26,000, partly because it applies fishery-specific estimates of struck and loss rates. Over 80% of kills were taken between 1830 and 1849, indicating a brief and intensive fishery that resulted in the commercial extinction of southern right whales in NZ and EA in just two decades. This conforms to the global trend of increasingly intense and destructive southern right whale fisheries over time. PMID:24690918
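
The kill-estimate arithmetic reduces to vessel-seasons times whales secured per season, inflated by a struck-and-lost rate. A sketch with placeholder inputs tuned to land near the bay-whaling range quoted above:

```python
# Secured catch understates kills: divide by (1 - struck_and_lost) to correct.
# All inputs below are illustrative placeholders, not the paper's data.
def estimated_kills(vessel_seasons, whales_per_season, struck_and_lost=0.15):
    secured = vessel_seasons * whales_per_season
    return secured / (1.0 - struck_and_lost)

low, high = estimated_kills(230, 11.0), estimated_kills(310, 12.75)
print(round(low), round(high))   # lands near the quoted 2,989-4,652 range
```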

  2. Two intense decades of 19th century whaling precipitated rapid decline of right whales around New Zealand and East Australia.

    PubMed

    Carroll, Emma L; Jackson, Jennifer A; Paton, David; Smith, Tim D

    2014-01-01

    Right whales (Eubalaena spp.) were the focus of worldwide whaling activities from the 16th to the 20th century. During the first part of the 19th century, the southern right whale (E. australis) was heavily exploited on whaling grounds around New Zealand (NZ) and east Australia (EA). Here we build upon previous estimates of the total catch of NZ and EA right whales by improving and combining estimates from four different fisheries. Two fisheries have previously been considered: shore-based whaling in bays and ship-based whaling offshore. These were both improved by comparison with primary sources, and the American offshore whaling catch record was improved by using a sample of logbooks to produce a more accurate catch record in terms of location and species composition. Two fisheries had not been previously integrated into the NZ and EA catch series: ship-based whaling in bays and whaling in the 20th century. To investigate the previously unaddressed problem of offshore whalers operating in bays, we identified a subset of vessels likely to be operating in bays and read available extant logbooks. This allowed us to estimate the total likely catch from bay-whaling by offshore whalers from the number of vessel-seasons and whales killed per season: it ranged from 2,989 to 4,652 whales. The revised total estimate of 53,000 to 58,000 southern right whales killed is a considerable increase on the previous estimate of 26,000, partly because it applies fishery-specific estimates of struck and loss rates. Over 80% of kills were taken between 1830 and 1849, indicating a brief and intensive fishery that resulted in the commercial extinction of southern right whales in NZ and EA in just two decades. This conforms to the global trend of increasingly intense and destructive southern right whale fisheries over time.

  3. A Comparison of the Approaches of Generalizability Theory and Item Response Theory in Estimating the Reliability of Test Scores for Testlet-Composed Tests

    ERIC Educational Resources Information Center

    Lee, Guemin; Park, In-Yong

    2012-01-01

    Previous assessments of the reliability of test scores for testlet-composed tests have indicated that item-based estimation methods overestimate reliability. This study was designed to address issues related to the extent to which item-based estimation methods overestimate the reliability of test scores composed of testlets and to compare several…

  4. A historical reconstruction of ships' fuel consumption and emissions

    NASA Astrophysics Data System (ADS)

    Endresen, Øyvind; Sørgård, Eirik; Behrens, Hanna Lee; Brett, Per Olaf; Isaksen, Ivar S. A.

    2007-06-01

    Shipping activity has increased considerably over the last century and currently represents a significant contribution to the global emissions of pollutants and greenhouse gases. Despite this, information about the historical development of fuel consumption and emissions is generally limited, with little data published pre-1950 and large deviations reported for estimates covering the last 3 decades. To better understand the historical development of ship emissions and the uncertainties associated with the estimates, we present fuel-based CO2 and SO2 emission inventories from 1925 up to 2002 and activity-based estimates from 1970 up to 2000. The global CO2 emissions from ships in 1925 have been estimated at 229 Tg (CO2), growing to about 634 Tg (CO2) in 2002. The corresponding SO2 emissions are about 2.5 Tg (SO2) and 8.5 Tg (SO2), respectively. Our activity-based estimates of fuel consumption from 1970 to 2000, covering all oceangoing civil ships at or above 100 gross tonnage (GT), are lower than previous activity-based studies. We have applied a more detailed model approach, which includes variation in the demand for sea transport as well as operational and technological changes of the past. This study concludes that the main reason for the large deviations found in reported inventories is the applied number of days at sea. Moreover, our modeling indicates that ship size and the degree of utilization of the fleet, combined with the shift to diesel engines, have been the major factors determining yearly fuel consumption. Interestingly, the model results from around 1973 suggest that fleet growth is not necessarily followed by increased fuel consumption, as technical and operational characteristics have changed. Results from this study indicate that reported sales over the last 3 decades do not appear to be significantly underreported, as previous simplified activity-based studies have suggested. The results confirm our previously reported modeling estimates for the year 2000. Previous activity-based studies have not considered ships of less than 100 GT (e.g., today some 1.3 million fishing vessels), and we suggest that this fleet could account for an important part of the total fuel consumption (~10%).
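
An activity-based estimate is essentially installed power times engine load times hours at sea times specific fuel oil consumption, summed over the fleet. A sketch with illustrative inputs (not the paper's fleet data):

```python
# Fuel per ship: MCR [kW] x load factor x hours at sea x SFOC [g/kWh].
def ship_fuel_tonnes(mcr_kw, load_factor, days_at_sea, sfoc_g_per_kwh=210.0):
    hours = days_at_sea * 24.0
    return mcr_kw * load_factor * hours * sfoc_g_per_kwh / 1e6  # g -> tonnes

fleet = [
    {"mcr_kw": 10_000, "load_factor": 0.70, "days_at_sea": 220},  # cargo ship
    {"mcr_kw": 500,    "load_factor": 0.50, "days_at_sea": 120},  # fishing vessel
]
total = sum(ship_fuel_tonnes(**s) for s in fleet)
# ~3.17 t CO2 per t of marine fuel is a commonly used emission factor.
print(f"{total:,.0f} t fuel; CO2 ~ {total * 3.17:,.0f} t")
```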

  5. SEASONAL NH3 EMISSIONS FOR THE CONTINENTAL UNITED STATES: INVERSE MODEL ESTIMATION AND EVALUATION

    EPA Science Inventory

    An inverse modeling study has been conducted here to evaluate a prior estimate of seasonal ammonia (NH3) emissions. The prior estimates were based on a previous inverse modeling study and two other bottom-up inventory studies. The results suggest that the prior estim...

  6. Effects of linking a soil-water-balance model with a groundwater-flow model

    USGS Publications Warehouse

    Stanton, Jennifer S.; Ryter, Derek W.; Peterson, Steven M.

    2013-01-01

    A previously published regional groundwater-flow model in north-central Nebraska was sequentially linked with the recently developed soil-water-balance (SWB) model to analyze effects to groundwater-flow model parameters and calibration results. The linked models provided a more detailed spatial and temporal distribution of simulated recharge based on hydrologic processes, improvement of simulated groundwater-level changes and base flows at specific sites in agricultural areas, and a physically based assessment of the relative magnitude of recharge for grassland, nonirrigated cropland, and irrigated cropland areas. Root-mean-squared (RMS) differences between the simulated and estimated or measured target values for the previously published model and linked models were relatively similar and did not improve for all types of calibration targets. However, without any adjustment to the SWB-generated recharge, the RMS difference between simulated and estimated base-flow target values for the groundwater-flow model was slightly smaller than for the previously published model, possibly indicating that the volume of recharge simulated by the SWB code was closer to actual hydrogeologic conditions than the previously published model provided. Groundwater-level and base-flow hydrographs showed that temporal patterns of simulated groundwater levels and base flows were more accurate for the linked models than for the previously published model at several sites, particularly in agricultural areas.
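
A soil-water-balance code of the kind linked here generates recharge as the surplus left after precipitation fills soil storage and evapotranspiration is met. A one-bucket toy sketch with illustrative parameters, not the published model's:

```python
# One-bucket soil-water balance: surplus above storage capacity becomes recharge.
def swb_recharge(precip, et, capacity=100.0, storage=50.0):
    recharge = []
    for p, e in zip(precip, et):
        storage = max(storage + p - e, 0.0)    # wet or dry the soil column
        r = max(storage - capacity, 0.0)       # surplus percolates downward
        storage -= r
        recharge.append(r)
    return recharge

# Toy monthly series (mm): wet spring, dry summer.
precip = [90, 110, 120, 40, 20, 15]
et     = [30,  40,  60, 80, 90, 70]
print(swb_recharge(precip, et))
```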

  7. Model-Based In Situ Parameter Estimation of Ultrasonic Guided Waves in an Isotropic Plate

    NASA Astrophysics Data System (ADS)

    Hall, James S.; Michaels, Jennifer E.

    2010-02-01

    Most ultrasonic systems employing guided waves for flaw detection require information such as dispersion curves, transducer locations, and expected propagation loss. Degraded system performance may result if assumed parameter values do not accurately reflect the actual environment. By characterizing the propagation environment in situ at the time of test, potentially erroneous a priori estimates are avoided and the performance of ultrasonic guided wave systems can be improved. A four-part model-based algorithm is described in the context of previous work; it estimates the parameters of an assumed propagation model that describes the received signals. This approach builds upon previous work by demonstrating the ability to estimate parameters in the case of single-mode propagation. Performance is demonstrated on signals obtained from theoretical dispersion curves, finite element modeling, and experimental data.

  8. Distance measures and optimization spaces in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Rode, Karyn D.; Budge, Suzanne M.; Thiemann, Gregory W.

    2015-01-01

    Quantitative fatty acid signature analysis has become an important method of diet estimation in ecology, especially marine ecology. Controlled feeding trials to validate the method and estimate the calibration coefficients necessary to account for differential metabolism of individual fatty acids have been conducted with several species from diverse taxa. However, research into potential refinements of the estimation method has been limited. We compared the performance of the original method of estimating diet composition with that of five variants based on different combinations of distance measures and calibration-coefficient transformations between prey and predator fatty acid signature spaces. Fatty acid signatures of pseudopredators were constructed using known diet mixtures of two prey data sets previously used to estimate the diets of polar bears Ursus maritimus and gray seals Halichoerus grypus, and their diets were then estimated using all six variants. In addition, previously published diets of Chukchi Sea polar bears were re-estimated using all six methods. Our findings reveal that the selection of an estimation method can meaningfully influence estimates of diet composition. Among the pseudopredator results, which allowed evaluation of bias and precision, differences in estimator performance were rarely large, and no one estimator was universally preferred, although estimators based on the Aitchison distance measure tended to have modestly superior properties compared to estimators based on the Kullback-Leibler distance measure. However, greater differences were observed among estimated polar bear diets, most likely due to differential estimator sensitivity to assumption violations. Our results, particularly the polar bear example, suggest that additional research into estimator performance and model diagnostics is warranted.
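
The two distance measures compared above, in their textbook forms (the study's exact variants and its calibration-coefficient handling may differ):

```python
import math

def aitchison(x, y):
    """Aitchison distance between two compositions on the simplex."""
    gx = math.exp(sum(math.log(v) for v in x) / len(x))   # geometric means
    gy = math.exp(sum(math.log(v) for v in y) / len(y))
    return math.sqrt(sum((math.log(a / gx) - math.log(b / gy)) ** 2
                         for a, b in zip(x, y)))

def kullback_leibler(x, y):
    """KL divergence of signature x from signature y."""
    return sum(a * math.log(a / b) for a, b in zip(x, y))

sig_pred = [0.30, 0.25, 0.25, 0.20]   # toy predator signature
sig_prey = [0.40, 0.20, 0.25, 0.15]   # toy modeled prey-mixture signature
print(aitchison(sig_pred, sig_prey), kullback_leibler(sig_pred, sig_prey))
```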

  9. Reconciling estimates of the ratio of heat and salt fluxes at the ice-ocean interface

    NASA Astrophysics Data System (ADS)

    Keitzl, T.; Mellado, J. P.; Notz, D.

    2016-12-01

    The heat exchange between floating ice and the underlying ocean is determined by the interplay of diffusive fluxes directly at the ice-ocean interface and turbulent fluxes away from it. In this study, we examine this interplay through direct numerical simulations of free convection. Our results show that estimating the interface flux ratio from direct measurements of the turbulent fluxes can be difficult because the flux ratio varies with depth. As an alternative, we present a consistent evaluation of the flux ratio based on the total heat and salt fluxes across the boundary layer. This approach allows us to reconcile previous estimates of the ice-ocean interface conditions. We find that the ratio of heat and salt fluxes directly at the interface is 83-100, rather than 33 as determined by previous turbulence measurements in the outer layer. If field-measurement estimates of the ice-ablation rate are based on the three-equation formulation, this discrepancy can cause errors of up to 40%.

  10. Space Station Furnace Facility. Volume 3: Program cost estimate

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data are then used to adjust the computer-produced cost estimate and to fit it to the current project's Work Breakdown Structure (WBS). The SSFF concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. The program incorporates data on costs of previous projects and the allocation of those costs to the components of one of three time-phased generic WBSs. Input consists of a list of similar components for which cost data exist; the number of interfaces, with their type and complexity; identification of the extent to which previous designs are applicable; and programmatic data concerning schedules and miscellaneous items (travel, off-site assignments). Output is program cost, in labor hours and material dollars, for each component, broken down by generic WBS task and program schedule phase.

  11. Reconstructing Spectral Scenes Using Statistical Estimation to Enhance Space Situational Awareness

    DTIC Science & Technology

    2006-12-01

    simultaneously spatially and spectrally deblur the images collected from ASIS. The algorithms are based on proven estimation theories and do not ... collected with any system using a filtering technology known as Electronic Tunable Filters (ETFs). Previous methods to deblur spectral images collected ... spectrally deblurring than the previously investigated methods. This algorithm expands on a method used for increasing the spectral resolution in gamma-ray ...

  12. Genetic mapping of 15 human X chromosomal forensic short tandem repeat (STR) loci by means of multi-core parallelization.

    PubMed

    Diegoli, Toni Marie; Rohde, Heinrich; Borowski, Stefan; Krawczak, Michael; Coble, Michael D; Nothnagel, Michael

    2016-11-01

    Typing of X chromosomal short tandem repeat (X STR) markers has become a standard element of human forensic genetic analysis. Joint consideration of many X STR markers at a time increases their discriminatory power but, owing to physical linkage, requires inter-marker recombination rates to be accurately known. We estimated the recombination rates between 15 well established X STR markers using genotype data from 158 families (1041 individuals) and following a previously proposed likelihood-based approach that allows for single-step mutations. To meet the computational requirements of this family-based type of analysis, we modified a previous implementation so as to allow multi-core parallelization on a high-performance computing system. While we obtained recombination rate estimates larger than zero for all but one pair of adjacent markers within the four previously proposed linkage groups, none of the three X STR pairs defining the junctions of these groups yielded a recombination rate estimate of 0.50. Corroborating previous studies, our results therefore argue against a simple model of independent X chromosomal linkage groups. Moreover, the refined recombination fraction estimates obtained in our study will facilitate the appropriate joint consideration of all 15 investigated markers in forensic analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
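
Family likelihoods are independent, so the total log-likelihood is a sum that parallelizes naturally across cores; this is the pattern the abstract refers to. The per-family likelihood below is a placeholder for the real pedigree computation (which allows single-step mutations):

```python
from multiprocessing import Pool

def family_loglik(args):
    family_data, theta = args
    # Placeholder: a real implementation evaluates the pedigree likelihood
    # for this family at recombination fraction(s) theta.
    return -((family_data - theta) ** 2)

def total_loglik(families, theta, workers=4):
    with Pool(workers) as pool:
        return sum(pool.map(family_loglik, [(f, theta) for f in families]))

if __name__ == "__main__":                 # required for multiprocessing
    print(total_loglik([0.1, 0.2, 0.4], theta=0.25))
```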

  13. Comparison of different estimation techniques for biomass concentration in large scale yeast fermentation.

    PubMed

    Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U

    2011-04-01

    In this study, five previously developed state estimation methods are examined and compared for estimating biomass concentrations in a production-scale fed-batch bioprocess. The methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) observer-based estimation; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method appears to have advantages, although it requires more measurements than the other methods. However, the required extra measurements come from instruments commonly employed in an industrial environment. This method is used for developing model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  14. ON ESTIMATING FORCE-FREENESS BASED ON OBSERVED MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, X. M.; Zhang, M.; Su, J. T., E-mail: xmzhang@nao.cas.cn

    It is a common practice in the solar physics community to test whether or not measured photospheric or chromospheric vector magnetograms are force-free, using the Maxwell stress as a measure. Some previous studies have suggested that magnetic fields of active regions in the solar chromosphere are close to being force-free, whereas there is no consistency among previous studies on whether magnetic fields of active regions in the solar photosphere are force-free or not. Here we use three kinds of representative magnetic fields (analytical force-free solutions, modeled solar-like force-free fields, and observed non-force-free fields) to discuss how measurement issues such as limited field of view (FOV), instrument sensitivity, and measurement error could affect the estimation of force-freeness based on observed magnetograms. Unlike previous studies that focus on discussing the effect of limited FOV or instrument sensitivity, our calculation shows that measurement error alone can significantly influence the results of estimates of force-freeness, because measurement errors in horizontal magnetic fields are usually ten times larger than those in vertical fields. This property of measurement errors, interacting with the particular form of a formula for estimating force-freeness, can result in wrong judgments of force-freeness: a truly force-free field may be mistakenly estimated as non-force-free, and a truly non-force-free field may be estimated as force-free. Our analysis calls for caution when interpreting estimates of force-freeness based on measured magnetograms, and also suggests that the true photospheric magnetic field may be further from force-free than it currently appears to be.
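
A common force-freeness test of the kind discussed integrates the net Lorentz force over the magnetogram and compares it with the total Maxwell stress. The specific integrals below follow widely used Metcalf-style criteria and are presented here as an assumed form; note how inflating only the horizontal-field noise, as the abstract describes, wrecks the metric.

```python
import numpy as np

def force_freeness(bx, by, bz):
    """Net-force components normalized by total stress; all << 1 => force-free.
    (Assumed Metcalf-style form; constants dropped since only ratios matter.)"""
    fx = -np.sum(bx * bz)
    fy = -np.sum(by * bz)
    fz = -0.5 * np.sum(bz**2 - bx**2 - by**2)
    fp = 0.5 * np.sum(bx**2 + by**2 + bz**2)
    return abs(fx) / fp, abs(fy) / fp, abs(fz) / fp

rng = np.random.default_rng(2)
bz = rng.standard_normal((64, 64))
# Horizontal-field noise ~10x the vertical noise, as discussed above:
bx, by = (10 * rng.standard_normal((64, 64)) for _ in range(2))
print(force_freeness(bx, by, bz))   # |Fz|/Fp near 1: looks non-force-free
```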

  15. The current economic burden of illness of osteoporosis in Canada

    PubMed Central

    Burke, N.; Von Keyserlingk, C.; Leslie, W. D.; Morin, S. N.; Adachi, J. D.; Papaioannou, A.; Bessette, L.; Brown, J. P.; Pericleous, L.; Tarride, J.

    2016-01-01

    Summary: We estimate that the current burden of illness of osteoporosis in Canada is double ($4.6 billion) our previous estimate ($2.3 billion), owing to improved data capture of the multiple encounters and services that accompany a fracture: emergency room visits, admissions to acute and step-down non-acute institutions, rehabilitation, and home-assisted or long-term residency support. Introduction: We previously estimated the economic burden of illness of osteoporosis-attributable fractures in Canada for the year 2008 to be $2.3 billion in the base case and as much as $3.9 billion. The aim of this study is to update the estimate of the economic burden of illness of osteoporosis-attributable fractures for Canada based on newly available home care and long-term care (LTC) data. Methods: Multiple national databases were used for the fiscal year ending March 31, 2011 (FY 2010/2011) for acute institutional care, emergency visits, day surgery, secondary admissions for rehabilitation, and complex continuing care, as well as national dispensing data for osteoporosis medications. Gaps in national data were supplemented by provincial and community survey data. Osteoporosis-attributable fractures for Canadians aged 50+ were identified by ICD-10-CA codes. Costs were expressed in 2014 dollars. Results: In FY 2010/2011, there were 131,443 osteoporosis-attributable fractures, resulting in 64,884 acute care admissions and 983,074 acute hospital days. Acute care costs were $1.5 billion, an 18% increase since 2008. The cost of LTC was 33.4 times the previous estimate ($1.03 billion versus the earlier $31 million) because of improved data capture. The cost of rehabilitation and secondary admissions increased 3.4-fold, while drug costs decreased 19%. The overall cost of osteoporosis was over $4.6 billion, an increase of 83% from the 2008 estimate. Conclusion: Since the 2008 estimate, new Canadian data on home care and LTC have become available, providing a better estimate of the burden of osteoporosis in Canada and suggesting that our previous estimates were serious underestimates. PMID:27166680

  16. The current economic burden of illness of osteoporosis in Canada.

    PubMed

    Hopkins, R B; Burke, N; Von Keyserlingk, C; Leslie, W D; Morin, S N; Adachi, J D; Papaioannou, A; Bessette, L; Brown, J P; Pericleous, L; Tarride, J

    2016-10-01

    We estimate that the current burden of illness of osteoporosis in Canada is double ($4.6 billion) our previous estimate ($2.3 billion), owing to improved data capture of the multiple encounters and services that accompany a fracture: emergency room visits, admissions to acute and step-down non-acute institutions, rehabilitation, and home-assisted or long-term residency support. We previously estimated the economic burden of illness of osteoporosis-attributable fractures in Canada for the year 2008 to be $2.3 billion in the base case and as much as $3.9 billion. The aim of this study is to update the estimate of the economic burden of illness of osteoporosis-attributable fractures for Canada based on newly available home care and long-term care (LTC) data. Multiple national databases were used for the fiscal year ending March 31, 2011 (FY 2010/2011) for acute institutional care, emergency visits, day surgery, secondary admissions for rehabilitation, and complex continuing care, as well as national dispensing data for osteoporosis medications. Gaps in national data were supplemented by provincial and community survey data. Osteoporosis-attributable fractures for Canadians aged 50+ were identified by ICD-10-CA codes. Costs were expressed in 2014 dollars. In FY 2010/2011, there were 131,443 osteoporosis-attributable fractures, resulting in 64,884 acute care admissions and 983,074 acute hospital days. Acute care costs were $1.5 billion, an 18% increase since 2008. The cost of LTC was 33.4 times the previous estimate ($1.03 billion versus the earlier $31 million) because of improved data capture. The cost of rehabilitation and secondary admissions increased 3.4-fold, while drug costs decreased 19%. The overall cost of osteoporosis was over $4.6 billion, an increase of 83% from the 2008 estimate. Since the 2008 estimate, new Canadian data on home care and LTC have become available, providing a better estimate of the burden of osteoporosis in Canada and suggesting that our previous estimates were serious underestimates.

  17. Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.

    2014-01-01

    The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
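
The named ingredient, a redescending M-estimator with Tukey bisquare weights, can be sketched for the simplest case (a location estimate) in a few lines; the flight code additionally concentrates via a Mahalanobis trim and estimates dispersion.

```python
import numpy as np

def tukey_location(x, c=4.685, iters=20):
    """IRLS location estimate with redescending Tukey bisquare weights."""
    mu = np.median(x)                               # robust starting point
    for _ in range(iters):
        scale = 1.4826 * np.median(np.abs(x - mu))  # MAD scale estimate
        u = (x - mu) / (c * scale)
        w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)  # redescending
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(3)
clean = rng.normal(5.0, 1.0, 1300)            # 1530 sensors, as in the study
failed = np.full(230, 80.0)                   # 230 failed-sensor readings
print(tukey_location(np.concatenate([clean, failed])))  # stays near 5
```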

  18. Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.

    2016-01-01

    The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.

  19. The parent magma of the Nakhla (SNC) meteorite: Reconciliation of composition estimates from magmatic inclusions and element partitioning

    NASA Technical Reports Server (NTRS)

    Treiman, A. H.

    1993-01-01

    The composition of the parent magma of the Nakhla meteorite was difficult to determine because Nakhla is a cumulate rock, enriched in olivine and augite relative to a basaltic magma. A parent magma composition is estimated here from electron microprobe area analyses of magmatic inclusions in olivine. This composition is consistent with an independent estimate based on the same inclusions, and with chemical equilibria with the cores of Nakhla's augites. It reconciles most of the previous estimates of Nakhla's magma composition and obviates the need for complex magmatic processes. Inconsistency between this composition and those calculated previously suggests that magma flowed through and crystallized within Nakhla as it cooled.

  20. A revised timescale for human evolution based on ancient mitochondrial genomes

    PubMed Central

    Johnson, Philip L.F.; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G.; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes

    2016-01-01

    Background: Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Results: Here we use mitochondrial genome sequences from 10 securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) that occurred less than 62,000-95,000 years ago. Conclusion: Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population split times, they can provide valid upper bounds; our results exclude most of the older dates for African and non-African split times recently suggested by de novo mutation rate estimates in the nuclear genome. PMID:23523248

  1. A revised timescale for human evolution based on ancient mitochondrial genomes.

    PubMed

    Fu, Qiaomei; Mittnik, Alissa; Johnson, Philip L F; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes

    2013-04-08

    Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Here, we use mitochondrial genome sequences from ten securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) that occurred less than 62-95 kya. Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population divergence times, they can provide valid upper bounds. Our results exclude most of the older dates for African and non-African population divergences recently suggested by de novo mutation rate estimates in the nuclear genome.
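
    The calibration principle can be illustrated with a simplified root-to-tip regression (the study itself used full Bayesian phylogenetic dating, not this shortcut): older, securely dated genomes sit measurably closer to the root, and the slope of genetic distance against sample age estimates the substitution rate. All numbers below are hypothetical.

      import numpy as np

      # Hypothetical ages of securely dated ancient genomes (years BP) and
      # their genetic distance from the inferred root (substitutions/site).
      ages = np.array([0, 5000, 15000, 25000, 40000], dtype=float)
      root_to_tip = np.array([0.0102, 0.0101, 0.0098, 0.0096, 0.0092])

      # Older samples sit closer to the root ("branch shortening"); the
      # slope of distance vs. age estimates the per-site, per-year rate.
      slope, intercept = np.polyfit(ages, root_to_tip, 1)
      rate = -slope  # distance decreases with sample age
      print(f"estimated rate: {rate:.2e} substitutions/site/year")
      # A divergence of 0.0015 subs/site would then date to 0.0015/rate years.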

  2. Improving North American forest biomass estimates from literature synthesis and meta-analysis of existing biomass equations

    Treesearch

    David C. Chojnacky; Jennifer C. Jenkins; Amanda K. Holland

    2009-01-01

    Thousands of published equations purport to estimate biomass of individual trees. These equations are often based on very small samples, however, and can provide widely different estimates for trees of the same species. We addressed this issue in a previous study by devising 10 new equations that estimated total aboveground biomass for all species in North America (...

  3. RESEARCH: An Ecoregional Approach to the Economic Valuation of Land- and Water-Based Recreation in the United States

    PubMed

    Bhat; Bergstrom; Teasley; Bowker; Cordell

    1998-01-01

    This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characters. The individual travel cost method was employed to estimate recreation demand functions for activities such as motor boating and waterskiing, developed and primitive camping, coldwater fishing, sightseeing and pleasure driving, and big game hunting for each ecoregion. While our ecoregional approach differs conceptually from previous work, our results appear consistent with previous travel cost method valuation studies. KEY WORDS: Recreation; Ecoregion; Travel cost method; Truncated Poisson model

  4. Estimation of tiger densities in India using photographic captures and recaptures

    USGS Publications Warehouse

    Karanth, U.; Nichols, J.D.

    1998-01-01

    Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low-density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75-1.00. The estimated mean tiger densities ranged from 4.1 (estimated SE = 1.31) to 11.7 (estimated SE = 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
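
    As a simplified illustration of the capture-recapture idea (the study fitted closed-population models to multi-occasion photographic capture histories), the two-occasion Chapman estimator with hypothetical camera-trap counts:

      # Two-occasion closed-population (Chapman) estimator: n1 tigers
      # photo-captured on occasion 1, n2 on occasion 2, m2 seen on both.
      def chapman(n1, n2, m2):
          N_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
          var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
                 / ((m2 + 1) ** 2 * (m2 + 2)))
          return N_hat, var ** 0.5

      # Hypothetical counts for one site
      N_hat, se = chapman(n1=14, n2=12, m2=8)
      area_km2 = 200.0  # effective trapping area (hypothetical)
      print(f"N = {N_hat:.1f} (SE {se:.1f}), "
            f"density = {100 * N_hat / area_km2:.1f} tigers/100 km2")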

  5. Need-Based Aid and College Persistence: The Effects of the Ohio College Opportunity Grant

    ERIC Educational Resources Information Center

    Bettinger, Eric

    2015-01-01

    This article exploits a natural experiment to estimate the effects of need-based aid policies on first-year college persistence rates. In fall 2006, Ohio abruptly adopted a new state financial aid policy that was significantly more generous than the previous plan. Using student-level data and very narrowly defined sets of students, I estimate a…

  6. Effects of life-state on detectability in a demographic study of the terrestrial orchid Cleistes bifaria

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.

    2003-01-01

    1. Most plant demographic studies follow marked individuals in permanent plots. Plots tend to be small, so detectability is assumed to be one for every individual. However, detectability could be affected by factors such as plant traits, time, space, observer, previous detection, biotic interactions, and especially by life-state. 2. We used a double-observer survey and closed population capture-recapture modelling to estimate state-specific detectability of the orchid Cleistes bifaria in a long-term study plot of 41.2 m2. Based on AICc model selection, detectability was different for each life-state and for tagged vs. previously untagged plants. There were no differences in detectability between the two observers. 3. Detectability estimates (SE) for one-leaf vegetative, two-leaf vegetative, and flowering/fruiting states correlated with mean size of these states and were 0.76 (0.05), 0.92 (0.06), and 1 (0.00), respectively, for previously tagged plants, and 0.84 (0.08), 0.75 (0.22), and 0 (0.00), respectively, for previously untagged plants. (We had insufficient data to obtain a satisfactory estimate of previously untagged flowering plants). 4. Our estimates are for a medium-sized plant in a small and intensively surveyed plot. It is possible that detectability is even lower for larger plots and smaller plants or smaller life-states (e.g. seedlings) and that detectabilities < 1 are widespread in plant demographic studies. 5. State-dependent detectabilities are especially worrying since they will lead to a size- or state-biased sample from the study plot. Failure to incorporate detectability into demographic estimation methods introduces a bias into most estimates of population parameters such as fecundity, recruitment, mortality, and transition rates between life-states. We illustrate this by a simple example using a matrix model, where a hypothetical population was stable but, due to imperfect detection, wrongly projected to be declining at a rate of 8% per year. 6. Almost all plant demographic studies are based on models for discrete states. State and size are important predictors both for demographic rates and detectability. We suggest that even in studies based on small plots, state- or size-specific detectability should be estimated at least at some point to avoid biased inference about the dynamics of the population sampled.
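
    Point 5 can be reproduced with a toy two-stage projection matrix (hypothetical numbers, not the paper's example): deflating the recruitment entry by the detection probability of previously untagged plants turns a genuinely stable population into one that appears to decline.

      import numpy as np

      def growth_rate(A):
          # Dominant eigenvalue of a stage-structured projection matrix
          return max(abs(np.linalg.eigvals(A)))

      # Hypothetical true matrix (stages: vegetative, flowering); lambda = 1.0
      A_true = np.array([[0.50, 1.00],
                         [0.25, 0.50]])

      # If newly recruited (untagged) plants are detected with p = 0.84,
      # observed recruitment into stage 1 is deflated by that factor.
      p_new = 0.84
      A_obs = A_true.copy()
      A_obs[0, 1] *= p_new

      print(growth_rate(A_true))  # 1.00: stable
      print(growth_rate(A_obs))   # ~0.96: spurious ~4%/yr decline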

  7. Soil profile property estimation with field and laboratory VNIR spectroscopy

    USDA-ARS?s Scientific Manuscript database

    Diffuse reflectance spectroscopy (DRS) soil sensors have the potential to provide rapid, high-resolution estimation of multiple soil properties. Although many studies have focused on laboratory-based visible and near-infrared (VNIR) spectroscopy of dried soil samples, previous work has demonstrated ...

  8. Methods for estimating population density in data-limited areas: evaluating regression and tree-based models in Peru.

    PubMed

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.
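
    A minimal sketch of this kind of comparison on synthetic data, assuming scikit-learn is available; the covariates and density surface are invented for illustration and are not the study's Peruvian data:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(0)
      n = 2000
      # Hypothetical area-level covariates: elevation, road density, land cover
      X = rng.uniform(size=(n, 3))
      # Nonlinear density surface plus noise, unknown to the models
      y = np.exp(2 * X[:, 0] * X[:, 1] + np.sin(6 * X[:, 2])) \
          + rng.normal(0, 0.3, n)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
          pred = model.fit(X_tr, y_tr).predict(X_te)
          rmse = mean_squared_error(y_te, pred) ** 0.5
          print(type(model).__name__, round(rmse, 3))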

  9. Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.

    PubMed

    Xiao Yang; Jianjiang Feng; Jie Zhou

    2014-05-01

    The dictionary-based orientation field estimation approach has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Realizing that ridge orientations at different locations of fingerprints have different characteristics, we propose a localized dictionaries-based orientation field estimation algorithm, in which the noisy orientation patch at a location output by a local estimation approach is replaced by the real orientation patch in the local dictionary at the same location. The precondition for applying localized dictionaries is that the pose of the latent fingerprint needs to be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show that the proposed method markedly outperforms previous ones.

  10. Methods for Estimating Population Density in Data-Limited Areas: Evaluating Regression and Tree-Based Models in Peru

    PubMed Central

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657

  11. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
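
    The central quantity is easy to compute once the rank distribution of the true match is specified. A small sketch, with an assumed exponential concentration of matching probability at low ranks standing in for the prioritized hierarchical grid search (the paper's actual equations are more general):

      import numpy as np

      # Penetration rate: expected fraction of the database searched before
      # the correct hypothesis is reached, given the probability q[r] that
      # the search ranks the true match at position r+1 of N bins.
      def penetration_rate(q):
          N = len(q)
          ranks = np.arange(1, N + 1)
          return (q * ranks).sum() / N

      N = 1000
      # Uninformed search: true match equally likely at any rank
      q_uniform = np.full(N, 1.0 / N)
      # Prioritized search: matching probability concentrates at low ranks
      q_prior = np.exp(-np.arange(N) / 50.0)
      q_prior /= q_prior.sum()

      print(f"uniform: {penetration_rate(q_uniform):.3f}, "
            f"prioritized: {penetration_rate(q_prior):.3f}")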

  12. Estimation of Foot Plantar Center of Pressure Trajectories with Low-Cost Instrumented Insoles Using an Individual-Specific Nonlinear Model.

    PubMed

    Hu, Xinyao; Zhao, Jun; Peng, Dongsheng; Sun, Zhenglong; Qu, Xingda

    2018-02-01

    Postural control is a complex skill based on the interaction of dynamic sensorimotor processes, and can be challenging for people with deficits in sensory functions. The foot plantar center of pressure (COP) has often been used for quantitative assessment of postural control. Previously, the foot plantar COP was mainly measured by force plates or complicated and expensive insole-based measurement systems. Although some low-cost instrumented insoles have been developed, their ability to accurately estimate the foot plantar COP trajectory has not been robust. In this study, a novel individual-specific nonlinear model was proposed to estimate the foot plantar COP trajectories with an instrumented insole based on low-cost force sensitive resistors (FSRs). The model coefficients were determined by a least-squares error approximation algorithm. Model validation was carried out by comparing the estimated COP data with the reference data in a variety of postural control assessment tasks. We also compared our data with the COP trajectories estimated by the previously well-accepted weighted mean approach. Compared with the reference measurements, the average root mean square errors of the COP trajectories of both feet were 2.23 mm (±0.64) (left foot) and 2.72 mm (±0.83) (right foot) along the medial-lateral direction, and 9.17 mm (±1.98) (left foot) and 11.19 mm (±2.98) (right foot) along the anterior-posterior direction. The results are superior to those reported in previous relevant studies, and demonstrate that our proposed approach can be used for accurate foot plantar COP trajectory estimation. This study could provide an inexpensive solution to fall risk assessment in home settings or community healthcare centers for the elderly. It has the potential to help prevent future falls in the elderly.
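
    A minimal sketch of the fitting idea, assuming a hypothetical one-dimensional sensor layout: the baseline weighted-mean COP is corrected by an individual-specific quadratic model whose coefficients come from a least-squares fit to reference data. The paper's actual model form and sensor geometry differ.

      import numpy as np

      rng = np.random.default_rng(1)
      # Hypothetical insole: 8 FSRs at known anterior-posterior positions (mm)
      sensor_x = np.array([30, 60, 90, 120, 150, 180, 210, 240], dtype=float)

      # Calibration trial: simulated FSR force readings for 200 time samples
      F = rng.uniform(0.1, 1.0, size=(200, 8))
      wm = F @ sensor_x / F.sum(axis=1)      # weighted-mean COP estimates

      # Reference COP (e.g., from a force plate) deviates from the weighted
      # mean in a subject-specific, mildly nonlinear way (synthetic here)
      cop_ref = wm + 0.003 * (wm - 135.0) ** 2 + rng.normal(0, 1.0, len(wm))

      # Individual-specific model: quadratic correction of the weighted
      # mean, coefficients fitted to this subject's data by least squares
      basis = np.column_stack([np.ones_like(wm), wm, wm ** 2])
      coef, *_ = np.linalg.lstsq(basis, cop_ref, rcond=None)

      rmse_wm = np.sqrt(np.mean((wm - cop_ref) ** 2))
      rmse_fit = np.sqrt(np.mean((basis @ coef - cop_ref) ** 2))
      print(f"weighted mean: {rmse_wm:.2f} mm RMSE, "
            f"fitted model: {rmse_fit:.2f} mm RMSE")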

  13. Estimation of Foot Plantar Center of Pressure Trajectories with Low-Cost Instrumented Insoles Using an Individual-Specific Nonlinear Model

    PubMed Central

    Hu, Xinyao; Zhao, Jun; Peng, Dongsheng

    2018-01-01

    Postural control is a complex skill based on the interaction of dynamic sensorimotor processes, and can be challenging for people with deficits in sensory functions. The foot plantar center of pressure (COP) has often been used for quantitative assessment of postural control. Previously, the foot plantar COP was mainly measured by force plates or complicated and expensive insole-based measurement systems. Although some low-cost instrumented insoles have been developed, their ability to accurately estimate the foot plantar COP trajectory has not been robust. In this study, a novel individual-specific nonlinear model was proposed to estimate the foot plantar COP trajectories with an instrumented insole based on low-cost force sensitive resistors (FSRs). The model coefficients were determined by a least-squares error approximation algorithm. Model validation was carried out by comparing the estimated COP data with the reference data in a variety of postural control assessment tasks. We also compared our data with the COP trajectories estimated by the previously well-accepted weighted mean approach. Compared with the reference measurements, the average root mean square errors of the COP trajectories of both feet were 2.23 mm (±0.64) (left foot) and 2.72 mm (±0.83) (right foot) along the medial–lateral direction, and 9.17 mm (±1.98) (left foot) and 11.19 mm (±2.98) (right foot) along the anterior–posterior direction. The results are superior to those reported in previous relevant studies, and demonstrate that our proposed approach can be used for accurate foot plantar COP trajectory estimation. This study could provide an inexpensive solution to fall risk assessment in home settings or community healthcare centers for the elderly. It has the potential to help prevent future falls in the elderly. PMID:29389857

  14. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with the ones estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to the reference methods, the relationship between the different patient groups was similar.

  15. Estimation of soil profile properties using field and laboratory VNIR spectroscopy

    USDA-ARS?s Scientific Manuscript database

    Diffuse reflectance spectroscopy (DRS) soil sensors have the potential to provide rapid, high-resolution estimation of multiple soil properties. Although many studies have focused on laboratory-based visible and near-infrared (VNIR) spectroscopy of dried soil samples, previous work has demonstrated ...

  16. Constraints on Gusev Basin Infill from the Mars Orbiter Laser Altimeter (MOLA) Topography

    NASA Technical Reports Server (NTRS)

    Carter, B. L.; Frey, H.; Sakimoto, S. E. H.; Roark, J.

    2001-01-01

    MOLA topography provides higher-resolution volume estimates for Gusev crater. Revisiting work previously done by Grin and Cabrol (1997), we find a substantial increase over the original sedimentation estimates. Additional information is contained in the original extended abstract.

  17. CHARACTERIZATION OF NITROUS OXIDE EMISSION SOURCES

    EPA Science Inventory

    The report presents a global inventory of nitrous oxide (N2O) based on reevaluation of previous estimates and additions of previously uninventoried source categories. (NOTE: N2O is both a greenhouse gas and a precursor of nitric oxide (NO) which destroys stratospheric ozone.) The...

  18. Cuff-Free Blood Pressure Estimation Using Pulse Transit Time and Heart Rate.

    PubMed

    Wang, Ruiping; Jia, Wenyan; Mao, Zhi-Hong; Sclabassi, Robert J; Sun, Mingui

    2014-10-01

    It has been reported that the pulse transit time (PTT), the interval between the peak of the R-wave in the electrocardiogram (ECG) and the fingertip photoplethysmogram (PPG), is related to arterial stiffness and can be used to estimate systolic blood pressure (SBP) and diastolic blood pressure (DBP). This phenomenon has been used as the basis to design portable systems for continuous cuff-less blood pressure measurement, benefiting numerous people with heart conditions. However, PTT-based blood pressure estimation may not be sufficiently accurate because the regulation of blood pressure within the human body is a complex, multivariate physiological process. Considering the negative feedback mechanism in blood pressure control, we introduce the heart rate (HR) and the blood pressure estimate from the previous step to obtain the current estimate. We validate this method using a clinical database. Our results show that using the PTT, HR and previous estimate reduces the estimation error significantly when compared to the conventional PTT estimation approach (p<0.05).
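
    A minimal sketch under the assumption of a simple linear form (the abstract does not specify the exact regression structure), with invented calibration numbers:

      import numpy as np

      # Hypothetical calibration records: PTT (ms), HR (bpm),
      # previous SBP estimate (mmHg), and reference SBP (mmHg).
      ptt  = np.array([210, 205, 198, 190, 185, 180, 176, 172], dtype=float)
      hr   = np.array([ 62,  65,  70,  76,  80,  85,  90,  95], dtype=float)
      prev = np.array([118, 119, 121, 124, 127, 130, 133, 136], dtype=float)
      sbp  = np.array([119, 121, 123, 126, 129, 132, 135, 138], dtype=float)

      # SBP ~ a*PTT + b*HR + c*previous estimate + d, fit by least squares
      X = np.column_stack([ptt, hr, prev, np.ones_like(ptt)])
      coef, *_ = np.linalg.lstsq(X, sbp, rcond=None)
      a, b, c, d = coef
      print(f"SBP = {a:.2f}*PTT + {b:.2f}*HR + {c:.2f}*prev + {d:.1f}")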

  19. Updated Global Burden of Cholera in Endemic Countries

    PubMed Central

    Ali, Mohammad; Nelson, Allyson R.; Lopez, Anna Lena; Sack, David A.

    2015-01-01

    Background: The global burden of cholera is largely unknown because the majority of cases are not reported. The low reporting can be attributed to limited capacity of epidemiological surveillance and laboratories, as well as social, political, and economic disincentives for reporting. We previously estimated 2.8 million cases and 91,000 deaths annually due to cholera in 51 endemic countries. A major limitation in our previous estimate was that the endemic and non-endemic countries were defined based on the countries’ reported cholera cases. We overcame the limitation with the use of a spatial modelling technique in defining endemic countries, and accordingly updated the estimates of the global burden of cholera. Methods/Principal Findings: Countries were classified as cholera endemic, cholera non-endemic, or cholera-free based on whether a spatial regression model predicted an incidence rate over a certain threshold in at least three of five years (2008-2012). The at-risk populations were calculated for each country based on the percent of the country without sustainable access to improved sanitation facilities. Incidence rates from population-based published studies were used to calculate the estimated annual number of cases in endemic countries. The number of annual cholera deaths was calculated using an inverse-variance-weighted average of literature-based case-fatality rate (CFR) estimates. We found that approximately 1.3 billion people are at risk for cholera in endemic countries. An estimated 2.86 million cholera cases (uncertainty range: 1.3m-4.0m) occur annually in endemic countries. Among these cases, there are an estimated 95,000 deaths (uncertainty range: 21,000-143,000). Conclusion/Significance: The global burden of cholera remains high. Sub-Saharan Africa accounts for the majority of this burden. Our findings can inform programmatic decision-making for cholera control. PMID:26043000
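
    The burden arithmetic reduces to a few lines; the inputs below are hypothetical, not the paper's country-level values:

      # Hypothetical country-level inputs
      pop = 30_000_000
      frac_without_sanitation = 0.40        # defines the at-risk population
      incidence_per_1000 = 2.0              # from population-based studies

      at_risk = pop * frac_without_sanitation
      cases = at_risk * incidence_per_1000 / 1000

      # Inverse-variance-weighted average CFR from literature estimates
      cfrs = [0.030, 0.042, 0.025]
      ses  = [0.004, 0.010, 0.006]
      w = [1 / s**2 for s in ses]
      cfr = sum(wi * c for wi, c in zip(w, cfrs)) / sum(w)

      print(f"at risk: {at_risk:,.0f}, cases/yr: {cases:,.0f}, "
            f"deaths/yr: {cases * cfr:,.0f}")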

  20. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
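
    For reference, the classic two-sample change-in-ratio estimator that such models generalize, with hypothetical harvest data:

      # Classic two-sample change-in-ratio estimator (a special case of the
      # generalized model): subclass proportions p1 before and p2 after a
      # known removal of R animals, Rx of them from subclass x.
      def cir_abundance(p1, p2, R, Rx):
          # Initial size of the whole population
          return (Rx - p2 * R) / (p1 - p2)

      # Hypothetical data: the male proportion falls from 0.45 to 0.30
      # after removing 600 animals, 450 of them males.
      N1 = cir_abundance(p1=0.45, p2=0.30, R=600, Rx=450)
      print(f"estimated pre-harvest population: {N1:.0f}")  # 1800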

  1. The application of mean field theory to image motion estimation.

    PubMed

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waldhoff, Stephanie T.; Anthoff, David; Rose, Steven K.

    We use FUND 3.8 to estimate the social cost of four greenhouse gases: carbon dioxide, methane, nitrous oxide, and sulphur hexafluoride. The damage potential for each gas (the ratio of the social cost of the non-carbon dioxide greenhouse gas to the social cost of carbon dioxide) is also estimated. The damage potentials are compared to several metrics, focusing in particular on the global warming potentials, which are frequently used to measure the trade-off between gases in the form of carbon dioxide equivalents. We find that damage potentials could be significantly higher than global warming potentials. This finding implies that previous papers have underestimated the relative importance of reducing non-carbon dioxide greenhouse gas emissions from an economic damage perspective. We show results for a range of sensitivity analyses: carbon dioxide fertilization of agricultural productivity, terrestrial feedbacks, climate sensitivity, discounting, equity weighting, and socioeconomic and emissions scenarios. The sensitivity of the results to carbon dioxide fertilization is a primary focus, as it is an important element of climate change that has not been considered in much of the previous literature. We estimate that carbon dioxide fertilization has a large positive impact that reduces the social cost of carbon dioxide, with a much smaller effect on the other greenhouse gases. Consequently, our estimates of the damage potentials of methane and nitrous oxide are much higher than estimates that ignore carbon dioxide fertilization; our base estimates of the damage potentials for methane and nitrous oxide that include carbon dioxide fertilization are twice their respective global warming potentials. Our base estimate of the damage potential of sulphur hexafluoride is similar to the single previous estimate, both almost three times the global warming potential.

  3. SCOUP: a probabilistic model based on the Ornstein-Uhlenbeck process to analyze single-cell expression data during differentiation.

    PubMed

    Matsumoto, Hirotaka; Kiryu, Hisanori

    2016-06-08

    Single-cell technologies make it possible to quantify the comprehensive states of individual cells, and have the power to shed light on cellular differentiation in particular. Although several methods have been developed to fully analyze single-cell expression data, there is still room for improvement in the analysis of differentiation. In this paper, we propose a novel method, SCOUP, to elucidate the differentiation process. Unlike previous dimension-reduction-based approaches, SCOUP describes the dynamics of gene expression throughout differentiation directly, including the degree of differentiation of a cell (in pseudo-time) and cell fate. SCOUP is superior to previous methods with respect to pseudo-time estimation, especially for single-cell RNA-seq. SCOUP also estimates cell lineage more accurately than previous methods, especially for cells at an early stage of bifurcation. In addition, SCOUP can be applied to various downstream analyses. As an example, we propose a novel correlation calculation method for elucidating regulatory relationships among genes. We apply this method to single-cell RNA-seq data and detect a candidate key regulator of differentiation, as well as clusters in a correlation network, that are not detected with conventional correlation analysis. We developed a stochastic process-based method, SCOUP, to analyze single-cell expression data throughout differentiation. SCOUP can estimate pseudo-time and cell lineage more accurately than previous methods. We also propose a novel correlation calculation method based on SCOUP. SCOUP is a promising approach for further single-cell analysis and is available at https://github.com/hmatsu1226/SCOUP.
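
    A minimal sketch of the underlying stochastic process, with invented parameters: each gene's expression follows an Ornstein-Uhlenbeck process whose attractor depends on cell fate, simulated here by Euler-Maruyama (SCOUP itself fits such a model to data rather than simulating it):

      import numpy as np

      rng = np.random.default_rng(3)

      def simulate_ou(x0, theta, alpha, sigma, T, n_steps=200):
          # Euler-Maruyama simulation of dX = alpha*(theta - X)dt + sigma dW
          dt = T / n_steps
          x = np.empty(n_steps + 1)
          x[0] = x0
          for i in range(n_steps):
              x[i + 1] = (x[i] + alpha * (theta - x[i]) * dt
                          + sigma * np.sqrt(dt) * rng.normal())
          return x

      # One gene, two lineages: same progenitor level, different attractors
      progenitor, T = 2.0, 10.0
      lineage_a = simulate_ou(progenitor, theta=6.0, alpha=0.8, sigma=0.4, T=T)
      lineage_b = simulate_ou(progenitor, theta=0.5, alpha=0.8, sigma=0.4, T=T)
      print(lineage_a[-1], lineage_b[-1])  # expression near each attractor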

  4. Screening Tools to Estimate Mold Burdens in Homes

    EPA Science Inventory

    Objective: The objective of this study was to develop screening tools that could be used to estimate the mold burden in a home which would indicate whether more detailed testing might be useful. Methods: Previously, in the American Healthy Home Survey, a DNA-based method of an...

  5. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
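
    For contrast, the traditional mono-exponential back-extrapolation that the paper improves on can be sketched in a few lines (dose and concentrations are hypothetical):

      import numpy as np

      # Hypothetical ICG concentrations sampled 2-5 min post-injection,
      # within the window where decay is approximately mono-exponential.
      t = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])        # minutes
      c = np.array([4.10, 3.78, 3.49, 3.22, 2.97, 2.74, 2.53])  # mg/L
      dose_mg = 15.0

      # Traditional method: fit log(C) vs t, extrapolate back to t = 0
      slope, intercept = np.polyfit(t, np.log(c), 1)
      c0 = np.exp(intercept)            # back-extrapolated concentration
      plasma_volume_L = dose_mg / c0
      print(f"C0 = {c0:.2f} mg/L, plasma volume = {plasma_volume_L:.2f} L")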

  6. Battery Calendar Life Estimator Manual Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jon P. Christophersen; Ira Bloom; Ed Thomas

    2012-10-01

    The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.

  7. Battery Life Estimator Manual Linear Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jon P. Christophersen; Ira Bloom; Ed Thomas

    2009-08-01

    The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.

  8. Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters

    NASA Astrophysics Data System (ADS)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-09-01

    This paper presents a methodology for estimating the motion of a character's fingers based on motion features provided by a virtual character's hand. In the presented methodology, the motion data is first segmented into discrete phases. Then, a number of motion features are computed for each motion segment of a character's hand. The motion features are pre-processed using restricted Boltzmann machines, and by using the different variations of semantically similar finger gestures in a support vector machine learning mechanism, the optimal weights for each feature assigned to a metric are computed. The advantages of the presented methodology in comparison to previous solutions are the following: First, we automate the computation of the optimal weights assigned to each motion feature included in our metric. Second, the presented methodology achieves an increase (about 17%) in correctly estimated finger gestures in comparison to a previous method.

  9. A bicycle safety index for evaluating urban street facilities.

    PubMed

    Asadi-Shekari, Zohreh; Moeinaddini, Mehdi; Zaly Shah, Muhammad

    2015-01-01

    The objectives of this research are to conceptualize a Bicycle Safety Index (BSI) that considers all parts of the street and to propose a universal guideline with microscale details. A point system method comparing existing safety facilities to a defined standard is proposed to estimate the BSI. Two streets, in Singapore and Malaysia, are chosen to examine this model. Most previous measurements for evaluating street conditions for cyclists do not cover all parts of the street, including segments and intersections. In particular, previous models did not consider all safety indicators and cycling facilities at the microlevel. This study introduces a new concept of a practical BSI that completes previous studies with practical, easy-to-follow, point-system-based outputs. This practical model can be used in different urban settings to estimate the level of safety for cycling and to suggest improvements based on the standards.

  10. Seasonally Transported Aerosol Layers Over Southeast Atlantic are Closer to Underlying Clouds than Previously Reported

    NASA Technical Reports Server (NTRS)

    Rajapakshe, Chamara; Zhang, Zhibo; Yorks, John E.; Yu, Hongbin; Tan, Qian; Meyer, Kerry; Platnick, Steven; Winker, David M.

    2017-01-01

    From June to October, low-level clouds in the southeast (SE) Atlantic often underlie seasonal aerosol layers transported from the African continent. Previously, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) 532 nm lidar observations have been used to estimate the vertical location of the above-cloud aerosols (ACA) relative to the underlying clouds. Here we show new observations from NASA's Cloud-Aerosol Transport System (CATS) lidar. Two seasons of CATS 1064 nm observations reveal that the bottom of the ACA layer is much lower than previously estimated based on CALIPSO 532 nm observations. For about 60% of CATS nighttime ACA scenes, the aerosol layer base is within 360 m of the top of the underlying cloud. Our results are important for future studies of the microphysical indirect and semidirect effects of ACA in the SE Atlantic region.

  11. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    PubMed Central

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
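
    A minimal sketch of the class of ODE models involved, assuming the standard target-cell-limited formulation with drug efficacy entering the infection term; the paper's model additionally includes reservoir dynamics, and all parameter values below are hypothetical:

      import numpy as np
      from scipy.integrate import odeint

      # Standard target-cell-limited model with drug efficacy eps:
      #   dT/dt = lam - d*T - (1-eps)*beta*T*V
      #   dI/dt = (1-eps)*beta*T*V - delta*I
      #   dV/dt = p*I - c*V
      def hiv_ode(state, t, lam, d, beta, delta, p, c, eps):
          T, I, V = state
          infect = (1 - eps) * beta * T * V
          return [lam - d * T - infect, infect - delta * I, p * I - c * V]

      # Hypothetical parameter values in typical literature ranges
      params = dict(lam=1e4, d=0.01, beta=8e-7, delta=0.7, p=100.0, c=13.0)
      t = np.linspace(0, 56, 200)  # days
      y0 = [5e5, 1e3, 5e4]

      on = odeint(hiv_ode, y0, t, args=tuple(params.values()) + (0.9,))
      off = odeint(hiv_ode, y0, t, args=tuple(params.values()) + (0.0,))
      print(f"viral load, day 56: on therapy {on[-1, 2]:.1e}, "
            f"off therapy {off[-1, 2]:.1e}")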

  12. SU-E-T-129: Are Knowledge-Based Planning Dose Estimates Valid for Distensible Organs?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, R; Heron, D; Huq, M

    2015-06-15

    Purpose: Knowledge-based planning programs have become available to assist treatment planning in radiation therapy. Such programs can be used to generate estimated DVHs and planning constraints for organs at risk (OARs), based upon a model generated from previous plans. These estimates are based upon the planning CT scan. However, for distensible OARs like the bladder and rectum, daily variations in volume may make the dose estimates invalid. The purpose of this study is to determine whether knowledge-based DVH dose estimates may be valid for distensible OARs. Methods: The Varian RapidPlan™ knowledge-based planning module was used to generate OAR dose estimates and planning objectives for 10 prostate cases previously planned with VMAT, and final plans were calculated for each. Five weekly setup CBCT scans of each patient were then downloaded and contoured (assuming no change in size and shape of the target volume), and rectum and bladder DVHs were recalculated for each scan. Dose volumes were then compared at 75, 60, and 40 Gy for the bladder and rectum between the planning scan and the CBCTs. Results: Plan doses and estimates matched well at all dose points. Volumes of the rectum and bladder varied widely between the planning CT and the CBCTs, with ratios ranging from 0.46 to 2.42 for the bladder and 0.71 to 2.18 for the rectum, causing relative dose volumes to vary between planning CT and CBCT; absolute dose volumes were more consistent. The overall ratio of CBCT/plan dose volumes was 1.02 ±0.27 for rectum and 0.98 ±0.20 for bladder in these patients. Conclusion: Knowledge-based planning dose volume estimates for distensible OARs remain valid, in absolute volume terms, between treatment planning scans and CBCTs taken during daily treatment. Further analysis of the data is being undertaken to determine how differences depend upon rectum and bladder filling state. This work has been supported by Varian Medical Systems.

  13. Comparison of Precision of Biomass Estimates in Regional Field Sample Surveys and Airborne LiDAR-Assisted Surveys in Hedmark County, Norway

    NASA Technical Reports Server (NTRS)

    Naesset, Erik; Gobakken, Terje; Bollandsas, Ole Martin; Gregoire, Timothy G.; Nelson, Ross; Stahl, Goeran

    2013-01-01

    Airborne scanning LiDAR (Light Detection and Ranging) has emerged as a promising tool to provide auxiliary data for sample surveys aiming at estimation of above-ground tree biomass (AGB), with potential applications in REDD forest monitoring. For larger geographical regions such as counties, states or nations, it is not feasible to collect airborne LiDAR data continuously ("wall-to-wall") over the entire area of interest. Two-stage cluster survey designs have therefore been demonstrated by which LiDAR data are collected along selected individual flight-lines treated as clusters and with ground plots sampled along these LiDAR swaths. Recently, analytical AGB estimators and associated variance estimators that quantify the sampling variability have been proposed. Empirical studies employing these estimators have shown a seemingly equal or even larger uncertainty of the AGB estimates obtained with extensive use of LiDAR data to support the estimation as compared to pure field-based estimates employing estimators appropriate under simple random sampling (SRS). However, comparison of uncertainty estimates under SRS and sophisticated two-stage designs is complicated by large differences in the designs and assumptions. In this study, probability-based principles to estimation and inference were followed. We assumed designs of a field sample and a LiDAR-assisted survey of Hedmark County (HC) (27,390 km2), Norway, considered to be more comparable than those assumed in previous studies. The field sample consisted of 659 systematically distributed National Forest Inventory (NFI) plots and the airborne scanning LiDAR data were collected along 53 parallel flight-lines flown over the NFI plots. We compared AGB estimates based on the field survey only assuming SRS against corresponding estimates assuming two-phase (double) sampling with LiDAR and employing model-assisted estimators. We also compared AGB estimates based on the field survey only assuming two-stage sampling (the NFI plots being grouped in clusters) against corresponding estimates assuming two-stage sampling with the LiDAR and employing model-assisted estimators. For each of the two comparisons, the standard errors of the AGB estimates were consistently lower for the LiDAR-assisted designs. The overall reduction of the standard errors in the LiDAR-assisted estimation was around 40-60% compared to the pure field survey. We conclude that the previously proposed two-stage model-assisted estimators are inappropriate for surveys with unequal lengths of the LiDAR flight-lines and new estimators are needed. Some options for design of LiDAR-assisted sample surveys under REDD are also discussed, which capitalize on the flexibility offered when the field survey is designed as an integrated part of the overall survey design as opposed to previous LiDAR-assisted sample surveys in the boreal and temperate zones which have been restricted by the current design of an existing NFI.
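
    A simplified single-phase sketch of the model-assisted idea under simple random sampling (the study's actual two-stage and two-phase designs are more involved): model predictions are averaged over the whole population and corrected by the mean residual on the field sample. Data are synthetic.

      import numpy as np

      rng = np.random.default_rng(2)

      # Population of N grid cells with wall-to-wall auxiliary data (here a
      # single LiDAR height metric); AGB is observed only on a field sample.
      N = 10_000
      h = rng.gamma(4.0, 3.0, N)                  # LiDAR canopy height metric
      agb = 5.0 * h + rng.normal(0, 10, N)        # true AGB (unknown in practice)

      sample = rng.choice(N, size=300, replace=False)
      ys, hs = agb[sample], h[sample]

      # Model-assisted (difference) estimator: model prediction over the whole
      # population plus a design-based correction from the sample residuals.
      b1, b0 = np.polyfit(hs, ys, 1)
      y_pred = b0 + b1 * h
      y_ma = y_pred.mean() + (ys - (b0 + b1 * hs)).mean()

      print(f"field-only mean: {ys.mean():.1f}, model-assisted: {y_ma:.1f}, "
            f"true mean: {agb.mean():.1f}")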

  14. Revised techniques for estimating peak discharges from channel width in Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.; Omang, R.J.

    1987-01-01

    This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr flood, 5-yr flood, and 10-yr flood, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) The equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) The measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement errors; (3) Reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances, and averaged. The weighted average estimate has a variance less than either individual estimate. (Author's abstract)
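
    The weighting described in the closing sentences is ordinary inverse-variance averaging; a short sketch with hypothetical discharge estimates:

      # Combining a channel-width estimate with an independent basin-
      # characteristics estimate, weighting each inversely to its variance.
      def weighted_average(est1, var1, est2, var2):
          w1, w2 = 1 / var1, 1 / var2
          combined = (w1 * est1 + w2 * est2) / (w1 + w2)
          combined_var = 1 / (w1 + w2)   # smaller than either input variance
          return combined, combined_var

      # Hypothetical 100-yr peak-discharge estimates (m3/s) and variances
      q, v = weighted_average(est1=420.0, var1=90.0**2,
                              est2=510.0, var2=120.0**2)
      print(f"weighted estimate: {q:.0f} m3/s (SE {v**0.5:.0f})")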

  15. Predicting the required number of training samples. [for remotely sensed image data based on covariance matrix estimate quality criterion of normal distribution

    NASA Technical Reports Server (NTRS)

    Kalayeh, H. M.; Landgrebe, D. A.

    1983-01-01

    A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109

  16. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by variations in the ground topography. Most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat terrain. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, something not achieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over Kalimantan, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  17. Influence of exposure differences on city-to-city variations in PM2.5-mortality effect estimates

    EPA Science Inventory

    Multi-city population-based epidemiological studies have observed heterogeneity between city specific PM2.5-mortality effect estimates. One possibility is city-specific differences in overall population exposure to PM2.5. In a previous analysis we explored this latter point by cl...

  18. Application of Real Options Theory to Software Engineering for Strategic Decision Making in Software Related Capital Investments

    DTIC Science & Technology

    2008-12-01

    between our current project and the historical projects. Therefore to refine the historical volatility estimate of the previously completed software... historical volatility estimates obtained in the form of beliefs and plausibility based on subjective probabilities that take into consideration unique

  19. Decision-Making Accuracy of CBM Progress-Monitoring Data

    ERIC Educational Resources Information Center

    Hintze, John M.; Wells, Craig S.; Marcotte, Amanda M.; Solomon, Benjamin G.

    2018-01-01

    This study examined the diagnostic accuracy associated with decision making as is typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard errors of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading…

  20. Use of A-Train Aerosol Observations to Constrain Direct Aerosol Radiative Effects (DARE) Comparisons with Aerocom Models and Uncertainty Assessments

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Shinozuka, Y.; Kacenelenbogen, M.; Segal-Rozenhaimer, M.; LeBlanc, S.; Vaughan, M.; Stier, P.; Schutgens, N.

    2017-01-01

    We describe a technique for combining multiple A-Train aerosol data sets, namely MODIS spectral AOD (aerosol optical depth), OMI AAOD (absorption aerosol optical depth) and CALIOP aerosol backscatter retrievals (hereafter referred to as MOC retrievals), to estimate full spectral sets of aerosol radiative properties, and ultimately to calculate the 3-D distribution of direct aerosol radiative effects (DARE). We present MOC results using almost two years of data collected in 2007 and 2008, and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the MODIS Collection 6 AOD data derived with the dark target and deep blue algorithms has extended the coverage of the MOC retrievals towards higher latitudes. The MOC aerosol retrievals agree better with AERONET in terms of the single scattering albedo (ssa) at 441 nm than ssa calculated from OMI and MODIS data alone, indicating that CALIOP aerosol backscatter data contain information on aerosol absorption. We compare the spatio-temporal distribution of the MOC retrievals and MOC-based calculations of seasonal clear-sky DARE to values derived from four models that participated in the Phase II AeroCom model intercomparison initiative. Overall, the MOC-based calculations of clear-sky DARE at TOA over land are smaller (less negative) than previous model or observational estimates, due to the inclusion of more absorbing aerosol retrievals over brighter surfaces not previously available for observationally-based estimates of DARE. MOC-based DARE estimates at the surface over land and total (land and ocean) DARE estimates at TOA are in between previous model and observational results. Comparisons of seasonal aerosol properties with AeroCom Phase II results show generally good agreement; the best agreement with forcing results at TOA is found with GMI-MerraV3. We discuss sampling issues that affect the comparisons and the major challenges in extending our clear-sky DARE results to all-sky conditions. We present estimates of clear-sky and all-sky DARE and show uncertainties that stem from the assumptions in the spatial extrapolation and accuracy of aerosol and cloud properties, in the diurnal evolution of these properties, and in the radiative transfer calculations.

  1. Absolute binding free energies between T4 lysozyme and 141 small molecules: calculations based on multiple rigid receptor configurations

    PubMed Central

    Xie, Bing; Nguyen, Trung Hai; Minh, David D. L.

    2017-01-01

    We demonstrate the feasibility of estimating protein-ligand binding free energies using multiple rigid receptor configurations. Based on T4 lysozyme snapshots extracted from six alchemical binding free energy calculations with a flexible receptor, binding free energies were estimated for a total of 141 ligands. For 24 ligands, the calculations reproduced flexible-receptor estimates with a correlation coefficient of 0.90 and a root mean square error of 1.59 kcal/mol. The accuracy of calculations based on Poisson-Boltzmann/Surface Area implicit solvent was comparable to previously reported free energy calculations. PMID:28430432

  2. Increasing efficiency of CO2 uptake by combined land-ocean sink

    NASA Astrophysics Data System (ADS)

    van Marle, M.; van Wees, D.; Houghton, R. A.; Nassikas, A.; van der Werf, G.

    2017-12-01

    Carbon-climate feedbacks are one of the key uncertainties in predicting future climate change. Such a feedback could originate from carbon sinks losing their efficiency, for example due to saturation of the CO2 fertilization effect or ocean warming. An indirect approach to estimate how the combined land and ocean sink responds to climate change and growing fossil fuel emissions is based on assessing the trends in the airborne fraction of CO2 emissions from fossil fuel and land use change. One key limitation with this approach has been the large uncertainty in quantifying land use change emissions. We have re-assessed those emissions in a more data-driven approach by combining estimates coming from a bookkeeping model with visibility-based land use change emissions available for the Arc of Deforestation and Equatorial Asia, two key regions with large land use change emissions. The advantage of the visibility-based dataset is that the emissions are observation-based and this dataset provides more detailed information about interannual variability than previous estimates. Based on our estimates we provide evidence that land use and land cover change emissions have increased more rapidly than previously thought, implying that the airborne fraction has decreased since the start of CO2 measurements in 1959. This finding is surprising because it means that the combined land and ocean sink has become more efficient while the opposite is expected.
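
    The airborne-fraction logic is simple arithmetic; with hypothetical fluxes, raising the land use change (LUC) term lowers the inferred airborne fraction and thus implies a more efficient combined sink:

      # Airborne fraction: share of total emissions staying in the atmosphere.
      atm_growth = 4.5   # GtC/yr observed atmospheric CO2 increase (hypothetical)
      fossil     = 9.0   # GtC/yr fossil fuel emissions (hypothetical)
      luc_old    = 1.0   # GtC/yr land use change, earlier estimate
      luc_new    = 2.0   # GtC/yr land use change, revised data-driven estimate

      for luc in (luc_old, luc_new):
          af = atm_growth / (fossil + luc)
          print(f"LUC = {luc} GtC/yr -> airborne fraction {af:.2f}, "
                f"sink fraction {1 - af:.2f}")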

  3. The HIV care cascade in Switzerland: reaching the UNAIDS/WHO targets for patients diagnosed with HIV.

    PubMed

    Kohler, Philipp; Schmidt, Axel J; Cavassini, Matthias; Furrer, Hansjakob; Calmy, Alexandra; Battegay, Manuel; Bernasconi, Enos; Ledergerber, Bruno; Vernazza, Pietro

    2015-11-28

    To describe the HIV care cascade for Switzerland in the year 2012. Six levels were defined: (i) HIV-infected, (ii) HIV-diagnosed, (iii) linked to care, (iv) retained in care, (v) on antiretroviral treatment (ART), and (vi) with suppressed viral load. We used data from the Swiss HIV Cohort Study (SHCS) complemented by a nationwide survey among SHCS physicians to estimate the number of HIV-patients not registered in the cohort. We also used Swiss ART sales data to estimate the number of patients treated outside the SHCS network. Based on the number of patients retained in care, we inferred the estimates for levels (i) to (iii) from previously published data. We estimate that (i) 15 200 HIV-infected individuals lived in Switzerland in 2012 (margins of uncertainty, 13 400-19 300). Of those, (ii) 12 300 (81%) were diagnosed, (iii) 12 200 (80%) linked, and (iv) 11 900 (79%) retained in care. Broadly based on SHCS network data, (v) 10 800 (71%) patients were receiving ART, and (vi) 10 400 (68%) had suppressed (<200 copies/ml) viral loads. The vast majority (95%) of patients retained in care were followed within the SHCS network, with 76% registered in the cohort. Our estimate for HIV-infected individuals in Switzerland is substantially lower than previously reported, halving previous national HIV prevalence estimates to 0.2%. In Switzerland in 2012, 91% of patients in care were receiving ART, and 96% of patients on ART had suppressed viral load, meeting recent UNAIDS/WHO targets.

  4. A reevaluation of cancer incidence near the Three Mile Island nuclear plant: the collision of evidence and assumptions.

    PubMed

    Wing, S; Richardson, D; Armstrong, D; Crawford-Brown, D

    1997-01-01

    Previous studies concluded that there was no evidence that the 1979 nuclear accident at Three Mile Island (TMI) affected cancer incidence in the surrounding area; however, there were logical and methodological problems in earlier reports that led us to reconsider data previously collected. A 10-mile area around TMI was divided into 69 study tracts, which were assigned radiation dose estimates based on radiation readings and models of atmospheric dispersion. Incident cancers from 1975 to 1985 were ascertained from hospital records and assigned to study tracts. Associations between accident doses and incidence rates of leukemia, lung cancer, and all cancer were assessed using relative dose estimates calculated by the earlier investigators. Adjustments were made for age, sex, socioeconomic characteristics, and preaccident variation in incidence. Considering a 2-year latency, the estimated percent increase per dose unit +/- standard error was 0.020 +/- 0.012 for all cancer, 0.082 +/- 0.032 for lung cancer, and 0.116 +/- 0.067 for leukemia. Adjustment for socioeconomic variables increased the estimates to 0.034 +/- 0.013, 0.103 +/- 0.035, and 0.139 +/- 0.073 for all cancer, lung cancer, and leukemia, respectively. Associations were generally larger considering a 5-year latency, but were based on smaller numbers of cases. Results support the hypothesis that radiation doses are related to increased cancer incidence around TMI. The analysis avoids medical detection bias, but suffers from inaccurate dose classification; therefore, results may underestimate the magnitude of the association between radiation and cancer incidence. These associations would not be expected, based on previous estimates of near-background levels of radiation exposure following the accident.
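
    A percent-increase-per-dose-unit coefficient of this kind is commonly obtained from a log-linear (Poisson) regression of tract-level case counts on dose, with a person-time offset and covariate adjustment. The sketch below shows that general pattern on simulated tract data; it illustrates the approach, not the authors' exact model, and all values are placeholders:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_tracts = 69
dose = rng.gamma(2.0, 1.0, n_tracts)             # relative dose (placeholder)
ses = rng.normal(0.0, 1.0, n_tracts)             # socioeconomic covariate
population = rng.integers(500, 5000, n_tracts)   # tract person-years

# Simulated counts with a small positive dose effect, for illustration only.
mu = np.exp(np.log(population) - 6.0 + 0.02 * dose + 0.1 * ses)
cases = rng.poisson(mu)

X = sm.add_constant(np.column_stack([dose, ses]))
fit = sm.GLM(cases, X, family=sm.families.Poisson(),
             offset=np.log(population)).fit()
beta_dose = fit.params[1]
print(f"percent increase per dose unit: {100 * (np.exp(beta_dose) - 1):.3f}%")
```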

  5. Bayesian estimation of the discrete coefficient of determination.

    PubMed

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
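
    For reference, the quantity being estimated is CoD = (e0 - e)/e0, where e0 is the error of the best constant predictor of the target and e is the error of the optimal predictor given the discrete predictor. A minimal plug-in (sample-based) version is sketched below; the paper's Bayesian MMSE and OBP estimators refine this, and the toy data are hypothetical:

```python
import numpy as np

def plugin_cod(x, y):
    """Plug-in estimate of the discrete CoD = (e0 - e) / e0: e0 is the error
    of the majority-class predictor of y, e the error of the optimal
    predictor of y given the discrete predictor x."""
    x, y = np.asarray(x), np.asarray(y)
    e0 = 1.0 - np.bincount(y).max() / y.size
    errors = sum(int((y[x == v] != np.bincount(y[x == v]).argmax()).sum())
                 for v in np.unique(x))
    e = errors / y.size
    return (e0 - e) / e0 if e0 > 0 else 0.0

# Toy binary example: y copies x 80% of the time.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 500)
y = np.where(rng.random(500) < 0.8, x, 1 - x)
print(f"plug-in CoD: {plugin_cod(x, y):.2f}")
```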

  6. (U-Th)/He ages of phosphates from Zagami and ALHA77005 Martian meteorites: Implications to shock temperatures

    NASA Astrophysics Data System (ADS)

    Min, Kyoungwon; Farah, Annette E.; Lee, Seung Ryeol; Lee, Jong Ik

    2017-01-01

    Shock conditions of Martian meteorites provide crucial information about ejection dynamics and original features of the Martian rocks. To better constrain equilibrium shock temperatures (Tequi-shock) of Martian meteorites, we investigated (U-Th)/He systematics of the moderately shocked Zagami and intensively shocked ALHA77005 Martian meteorites. Multiple phosphate aggregates from Zagami and ALHA77005 yielded overall (U-Th)/He ages of 92.2 ± 4.4 Ma (2σ) and 8.4 ± 1.2 Ma, respectively. These ages correspond to fractional losses of 0.49 ± 0.03 (Zagami) and 0.97 ± 0.01 (ALHA77005), assuming that the ejection-related shock event at ∼3 Ma is solely responsible for diffusive helium loss since crystallization. For He diffusion modeling, the diffusion domain radius is estimated based on detailed examination of fracture patterns in phosphates using a scanning electron microscope. For Zagami, the diffusion domain radius is estimated to be ∼2-9 μm, which is generally consistent with calculations from isothermal heating experiments (1-4 μm). For ALHA77005, a diffusion domain radius of ∼4-20 μm is estimated. Using the newly constrained (U-Th)/He data, diffusion domain radii, and other previously estimated parameters, the conductive cooling models yield Tequi-shock estimates of 360-410 °C and 460-560 °C for Zagami and ALHA77005, respectively. According to the sensitivity test, the estimated Tequi-shock values are relatively robust to input parameters. The Tequi-shock estimates for Zagami are more robust than those for ALHA77005, primarily because Zagami yielded an intermediate fHe value (0.49) compared to ALHA77005 (0.97). For the less intensively shocked Zagami, the He diffusion-based Tequi-shock estimates (this study) are significantly higher than expected from previously reported Tpost-shock values. For the intensively shocked ALHA77005, the two independent approaches yielded generally consistent results. Using two other examples of previously studied Martian meteorites (ALHA84001 and Los Angeles), we compared Tequi-shock and Tpost-shock estimates. For intensively shocked meteorites (ALHA77005, Los Angeles), the He diffusion-based approach yields Tequi-shock values consistent with or slightly higher than those estimated from Tpost-shock, and the discrepancy between the two methods increases as the intensity of shock increases. The reason for the discrepancy between the two methods, particularly for less intensively shocked meteorites (Zagami, ALHA84001), remains to be resolved, but we prefer the He diffusion-based approach because its Tequi-shock estimates are relatively robust to input parameters.
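
    The quoted fractional losses follow from a simple mixing relation if He production is treated as uniform in time: a measured age t_m lying between the crystallization age t_c and the shock age t_s implies f = (t_c - t_m)/(t_c - t_s). A sketch, assuming literature crystallization ages of roughly 177 Ma (Zagami) and 179 Ma (ALHA77005), which are not stated in the abstract:

```python
def fractional_loss(t_crystallization, t_measured, t_shock=3.0):
    """Fractional He loss under linear He accumulation, where the measured
    age is the mix t_m = (1 - f) * t_c + f * t_s."""
    return (t_crystallization - t_measured) / (t_crystallization - t_shock)

# (U-Th)/He ages and the ~3 Ma ejection age are from the abstract; the
# crystallization ages are assumed literature values, not stated there.
print(f"Zagami:    f = {fractional_loss(177.0, 92.2):.2f}")  # ~0.49
print(f"ALHA77005: f = {fractional_loss(179.0, 8.4):.2f}")   # ~0.97
```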

  7. Quantifying uncertainties in streamflow predictions through signature based inference of hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Calro

    2016-04-01

    The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has shown, for example, how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications on which signatures are more appropriate to represent the information content of the hydrograph.
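
    The computational core of the approach is Approximate Bayesian Computation: draw parameters from the prior, simulate streamflow, reduce it to the signature (here an FDC), and retain draws whose simulated signature is close to the observed one. A minimal rejection-ABC sketch with a toy one-parameter recession model; everything here is a placeholder for the real hydrological model and the paper's likelihood-derived metric:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_streamflow(k, n=365):
    """Toy one-parameter model: exponential recession with rate k, driven
    by random rainfall impulses. Stands in for a real hydrological model."""
    q, flow = 1.0, np.empty(n)
    for t in range(n):
        q = q * np.exp(-k) + (rng.random() < 0.2) * rng.exponential(2.0)
        flow[t] = q
    return flow

def fdc(flow, n_points=20):
    """Flow duration curve summarized as quantiles of the flow series."""
    return np.quantile(flow, np.linspace(0.05, 0.95, n_points))

observed = fdc(simulate_streamflow(k=0.15))  # pretend this is the data

# Rejection ABC: sample the prior, simulate, keep the draws whose simulated
# FDC is closest to the observed signature.
prior_draws = rng.uniform(0.01, 0.5, 2000)
distances = np.array([np.linalg.norm(fdc(simulate_streamflow(k)) - observed)
                      for k in prior_draws])
accepted = prior_draws[np.argsort(distances)[:100]]  # closest 5%
print(f"posterior mean k ~ {accepted.mean():.3f} (true value 0.15)")
```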

  8. A phylogeny and revised classification of Squamata, including 4161 species of lizards and snakes

    PubMed Central

    2013-01-01

    Background The extant squamates (>9400 known species of lizards and snakes) are one of the most diverse and conspicuous radiations of terrestrial vertebrates, but no studies have attempted to reconstruct a phylogeny for the group with large-scale taxon sampling. Such an estimate is invaluable for comparative evolutionary studies, and to address their classification. Here, we present the first large-scale phylogenetic estimate for Squamata. Results The estimated phylogeny contains 4161 species, representing all currently recognized families and subfamilies. The analysis is based on up to 12896 base pairs of sequence data per species (average = 2497 bp) from 12 genes, including seven nuclear loci (BDNF, c-mos, NT3, PDC, R35, RAG-1, and RAG-2), and five mitochondrial genes (12S, 16S, cytochrome b, ND2, and ND4). The tree provides important confirmation for recent estimates of higher-level squamate phylogeny based on molecular data (but with more limited taxon sampling), estimates that are very different from previous morphology-based hypotheses. The tree also includes many relationships that differ from previous molecular estimates and many that differ from traditional taxonomy. Conclusions We present a new large-scale phylogeny of squamate reptiles that should be a valuable resource for future comparative studies. We also present a revised classification of squamates at the family and subfamily level to bring the taxonomy more in line with the new phylogenetic hypothesis. This classification includes new, resurrected, and modified subfamilies within gymnophthalmid and scincid lizards, and boid, colubrid, and lamprophiid snakes. PMID:23627680

  9. Global gridded anthropogenic emissions inventory of carbonyl sulfide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zumkehr, Andrew; Hilton, Tim; Whelan, Mary

    Atmospheric carbonyl sulfide (COS or OCS) is the most abundant sulfur containing gas in the troposphere and is an atmospheric tracer for the carbon cycle. Gridded inventories of global anthropogenic COS are used for interpreting global COS measurements. However, previous gridded anthropogenic data are a climatological estimate based on input data that are over three decades old and not representative of current conditions. Here we develop a new gridded data set of global anthropogenic COS sources that includes more source sectors than previously available and uses the most current emissions factors and industry activity data as input. Additionally, the inventory is provided as annually varying estimates from years 1980–2012 and employs a source specific spatial scaling procedure. We estimate a global source in year 2012 of 406 Gg S y-1 (range of 223–586 Gg S y-1), which is highly concentrated in China and is twice as large as the previous gridded inventory. Our large upward revision in the bottom-up estimate of the source is consistent with a recent top-down estimate based on air-monitoring and Antarctic firn data. Furthermore, our inventory time trends, including a decline in the 1990s and growth after the year 2000, are qualitatively consistent with trends in atmospheric data. Lastly, similarities between the spatial distribution in this inventory and remote sensing data suggest that the anthropogenic source could potentially play a role in explaining a missing source in the global COS budget.

  10. Global gridded anthropogenic emissions inventory of carbonyl sulfide

    DOE PAGES

    Zumkehr, Andrew; Hilton, Tim; Whelan, Mary; ...

    2018-03-31

    Atmospheric carbonyl sulfide (COS or OCS) is the most abundant sulfur containing gas in the troposphere and is an atmospheric tracer for the carbon cycle. Gridded inventories of global anthropogenic COS are used for interpreting global COS measurements. However, previous gridded anthropogenic data are a climatological estimate based on input data that are over three decades old and not representative of current conditions. Here we develop a new gridded data set of global anthropogenic COS sources that includes more source sectors than previously available and uses the most current emissions factors and industry activity data as input. Additionally, the inventory is provided as annually varying estimates from years 1980–2012 and employs a source specific spatial scaling procedure. We estimate a global source in year 2012 of 406 Gg S y-1 (range of 223–586 Gg S y-1), which is highly concentrated in China and is twice as large as the previous gridded inventory. Our large upward revision in the bottom-up estimate of the source is consistent with a recent top-down estimate based on air-monitoring and Antarctic firn data. Furthermore, our inventory time trends, including a decline in the 1990s and growth after the year 2000, are qualitatively consistent with trends in atmospheric data. Lastly, similarities between the spatial distribution in this inventory and remote sensing data suggest that the anthropogenic source could potentially play a role in explaining a missing source in the global COS budget.

  11. Global gridded anthropogenic emissions inventory of carbonyl sulfide

    NASA Astrophysics Data System (ADS)

    Zumkehr, Andrew; Hilton, Tim W.; Whelan, Mary; Smith, Steve; Kuai, Le; Worden, John; Campbell, J. Elliott

    2018-06-01

    Atmospheric carbonyl sulfide (COS or OCS) is the most abundant sulfur containing gas in the troposphere and is an atmospheric tracer for the carbon cycle. Gridded inventories of global anthropogenic COS are used for interpreting global COS measurements. However, previous gridded anthropogenic data are a climatological estimate based on input data that are over three decades old and not representative of current conditions. Here we develop a new gridded data set of global anthropogenic COS sources that includes more source sectors than previously available and uses the most current emissions factors and industry activity data as input. Additionally, the inventory is provided as annually varying estimates from years 1980-2012 and employs a source specific spatial scaling procedure. We estimate a global source in year 2012 of 406 Gg S y-1 (range of 223-586 Gg S y-1), which is highly concentrated in China and is twice as large as the previous gridded inventory. Our large upward revision in the bottom-up estimate of the source is consistent with a recent top-down estimate based on air-monitoring and Antarctic firn data. Furthermore, our inventory time trends, including a decline in the 1990s and growth after the year 2000, are qualitatively consistent with trends in atmospheric data. Finally, similarities between the spatial distribution in this inventory and remote sensing data suggest that the anthropogenic source could potentially play a role in explaining a missing source in the global COS budget.

  12. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  13. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  14. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining automatic quality parameter estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A series of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using a commercial software, and chemical analysis. Conclusions The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452

  15. Estimated stocks of circumpolar permafrost carbon with quantified uncertainty ranges and identified data gaps

    DOE PAGES

    Hugelius, Gustaf; Strauss, J.; Zubrzycki, S.; ...

    2014-12-01

    Soils and other unconsolidated deposits in the northern circumpolar permafrost region store large amounts of soil organic carbon (SOC). This SOC is potentially vulnerable to remobilization following soil warming and permafrost thaw, but SOC stock estimates were poorly constrained and quantitative error estimates were lacking. This study presents revised estimates of permafrost SOC stocks, including quantitative uncertainty estimates, in the 0–3 m depth range in soils as well as for sediments deeper than 3 m in deltaic deposits of major rivers and in the Yedoma region of Siberia and Alaska. Revised estimates are based on significantly larger databases compared to previous studies. Despite this, there is evidence of significant remaining regional data gaps. Estimates remain particularly poorly constrained for soils in the High Arctic region and physiographic regions with thin sedimentary overburden (mountains, highlands and plateaus) as well as for deposits below 3 m depth in deltas and the Yedoma region. While some components of the revised SOC stocks are similar in magnitude to those previously reported for this region, there are substantial differences in other components, including the fraction of perennially frozen SOC. Upscaled based on regional soil maps, estimated permafrost region SOC stocks are 217 ± 12 and 472 ± 27 Pg for the 0–0.3 and 0–1 m soil depths, respectively (±95% confidence intervals). Storage of SOC in 0–3 m of soils is estimated at 1035 ± 150 Pg. Of this, 34 ± 16 Pg C is stored in poorly developed soils of the High Arctic. Based on generalized calculations, storage of SOC below 3 m of surface soils in deltaic alluvium of major Arctic rivers is estimated as 91 ± 52 Pg. In the Yedoma region, estimated SOC stocks below 3 m depth are 181 ± 54 Pg, of which 74 ± 20 Pg is stored in intact Yedoma (late Pleistocene ice- and organic-rich silty sediments) with the remainder in refrozen thermokarst deposits. Total estimated SOC storage for the permafrost region is ∼1300 Pg with an uncertainty range of ∼1100 to 1500 Pg. Of this, ∼500 Pg is in non-permafrost soils, seasonally thawed in the active layer or in deeper taliks, while ∼800 Pg is perennially frozen. In conclusion, this represents a substantial ∼300 Pg lowering of the estimated perennially frozen SOC stock compared to previous estimates.

  16. A mathematical programming method for formulating a fuzzy regression model based on distance criterion.

    PubMed

    Chen, Liang-Hsuan; Hsueh, Chan-Ching

    2007-06-01

    Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.

  17. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
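
    The traditional mono-exponential back-extrapolation named above fits ln C(t) over the early samples and extrapolates to the injection time; plasma volume is then the dose divided by the back-extrapolated concentration. A sketch with hypothetical sample values:

```python
import numpy as np

# Hypothetical ICG samples after a 25 mg injection: times (min) and plasma
# concentrations (mg/L). Real protocols sample over the first few minutes.
dose_mg = 25.0
t = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
conc = np.array([9.1, 7.9, 6.9, 6.0, 5.2])

# Mono-exponential back-extrapolation: fit ln C(t) = ln C0 - k*t and
# extrapolate to the injection time t = 0.
slope, ln_c0 = np.polyfit(t, np.log(conc), 1)
c0 = np.exp(ln_c0)

print(f"C0 = {c0:.2f} mg/L, plasma volume = {dose_mg / c0:.2f} L")
```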

  18. Evaluating MEDEVAC Force Structure Requirements Using an Updated Army Scenario, Total Army Analysis Admission Data, Monte Carlo Simulation, and Theater Structure.

    PubMed

    Fulton, Lawrence; Kerr, Bernie; Inglis, James M; Brooks, Matthew; Bastian, Nathaniel D

    2015-07-01

    In this study, we re-evaluate air ambulance requirements (rules of allocation) and planning considerations based on an Army-approved, Theater Army Analysis scenario. A previous study using workload only estimated a requirement of 0.4 to 0.6 aircraft per admission, a significant increase over existence-based rules. In this updated study, we estimate requirements for Phase III (major combat operations) using a simulation grounded in previously published work and Phase IV (stability operations) based on four rules of allocation: unit existence rules, workload factors, theater structure (geography), and manual input. This study improves upon previous work by including the new air ambulance mission requirements of Department of Defense 51001.1, Roles and Functions of the Services, by expanding the analysis over two phases, and by considering unit rotation requirements known as Army Force Generation based on Department of Defense policy. The recommendations of this study are intended to inform future planning factors and have already provided decision support to the Army Aviation Branch in determining force structure requirements. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.

  19. New, national bottom-up estimate for tree-based biological ...

    EPA Pesticide Factsheets

    Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF—scaling rates up from measurements to broader scales—is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha-1 yr-1) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light). Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates indicating

  20. Landform partitioning and estimates of deep storage of soil organic matter in Zackenberg, Greenland

    NASA Astrophysics Data System (ADS)

    Palmtag, Juri; Cable, Stefanie; Christiansen, Hanne H.; Hugelius, Gustaf; Kuhry, Peter

    2018-05-01

    Soils in the northern high latitudes are a key component in the global carbon cycle, with potential feedback on climate. This study aims to improve the previous soil organic carbon (SOC) and total nitrogen (TN) storage estimates for the Zackenberg area (NE Greenland) that were based on a land cover classification (LCC) approach, by using geomorphological upscaling. In addition, novel organic carbon (OC) estimates for deeper alluvial and deltaic deposits (down to 300 cm depth) are presented. We hypothesise that landforms will better represent the long-term slope and depositional processes that result in deep SOC burial in this type of mountain permafrost environment. The updated mean SOC storage for the 0-100 cm soil depth is 4.8 kg C m-2, which is 42 % lower than the previous estimate of 8.3 kg C m-2 based on land cover upscaling. Similarly, the mean soil TN storage in the 0-100 cm depth decreased by 44 %, from 0.50 (±0.1 CI) to 0.28 (±0.1 CI) kg TN m-2. We ascribe the differences to a previous areal overestimate of SOC- and TN-rich vegetated land cover classes. The landform-based approach more correctly constrains the depositional areas in alluvial fans and deltas with high SOC and TN storage. These are also areas of deep carbon storage with an additional 2.4 kg C m-2 in the 100-300 cm depth interval. This research emphasises the need to consider geomorphology when assessing SOC pools in mountain permafrost landscapes.
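
    Geomorphological upscaling of this kind reduces to an area-weighted mean over landform units. A sketch with hypothetical landform classes and values (none taken from the paper):

```python
# Area-weighted mean SOC storage across landform units. The landform
# classes and values below are illustrative placeholders.
landforms = {
    # name:               (area_km2, soc_kg_per_m2 for 0-100 cm)
    "alluvial fan":        (12.0, 9.5),
    "delta":               (6.0, 11.0),
    "solifluction slope":  (45.0, 4.2),
    "bedrock and ridges":  (37.0, 1.1),
}

total_area = sum(area for area, _ in landforms.values())
mean_soc = sum(area * soc for area, soc in landforms.values()) / total_area
print(f"landscape mean SOC (0-100 cm): {mean_soc:.1f} kg C m-2")
```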

  1. Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis

    PubMed Central

    Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés and, Luis G.; García Beltrán, Carlos Daniel

    2013-01-01

    This article proposes a virtual sensor for piecewise linear systems based on an observability analysis that is a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. The article also presents an active-mode detector for the case where the commutation sequences of each linear subsystem are arbitrary and unknown. To this end, it proposes a set of virtual estimators that discern the commutation paths of the system and allow the system output to be estimated. A methodology for testing the observability of discrete-time piecewise linear systems is also proposed. An academic example is presented to show the obtained results. PMID:23447007

  2. Stochastic models to demonstrate the effect of motivated testing on HIV incidence estimates using the serological testing algorithm for recent HIV seroconversion (STARHS).

    PubMed

    White, Edward W; Lumley, Thomas; Goodreau, Steven M; Goldbaum, Gary; Hawes, Stephen E

    2010-12-01

    To produce valid seroincidence estimates, the serological testing algorithm for recent HIV seroconversion (STARHS) assumes independence between infection and testing, which may be absent in clinical data. STARHS estimates are generally greater than cohort-based estimates of incidence from observable person-time and diagnosis dates. The authors constructed a series of partial stochastic models to examine whether testing motivated by suspicion of infection could bias STARHS. One thousand Monte Carlo simulations of 10,000 men who have sex with men were generated using parameters for HIV incidence and testing frequency from a clinical testing population in Seattle. In one set of simulations, infection and testing dates were independent. In another set, some intertest intervals were abbreviated to reflect the distribution of intervals between suspected HIV exposure and testing in a group of Seattle men who have sex with men recently diagnosed as having HIV. Both cohort-based and STARHS incidence estimates were calculated from the simulated data and compared with previously calculated, empirical cohort-based and STARHS seroincidence estimates from the clinical testing population. Under simulated independence between infection and testing, cohort-based and STARHS incidence estimates resembled cohort estimates from the clinical dataset. Under simulated motivated testing, cohort-based estimates remained unchanged, but STARHS estimates were inflated, similar to the empirical STARHS estimates. Varying motivation parameters appreciably affected STARHS incidence estimates, but not cohort-based estimates. Cohort-based incidence estimates are robust against dependence between testing and acquisition of infection, whereas STARHS incidence estimates are not.
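
    The mechanism is easy to reproduce in a toy simulation: if some people test soon after a suspected exposure, more diagnoses fall inside the STARHS "recent" window even though true incidence is unchanged. A minimal sketch with an assumed window length and delay distributions (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
window_days = 170      # assumed STARHS 'recent infection' window
n_infected = 10_000

# Delay from infection to the next HIV test. Independent testing: routine
# tests with a ~1-year mean interval. Motivated testing: half of the
# infected test soon after a suspected exposure.
independent = rng.exponential(365.0, n_infected)
motivated = np.where(rng.random(n_infected) < 0.5,
                     rng.exponential(60.0, n_infected),    # prompted tests
                     rng.exponential(365.0, n_infected))   # routine tests

for label, delay in [("independent", independent), ("motivated", motivated)]:
    print(f"{label:>11}: {100 * np.mean(delay < window_days):.0f}% 'recent'")
# STARHS incidence scales with the 'recent' fraction, so motivated testing
# inflates it even though true incidence is identical in both scenarios.
```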

  3. Method and system for efficient video compression with low-complexity encoder

    NASA Technical Reports Server (NTRS)

    Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)

    2012-01-01

    Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.

  4. Parametric cost estimation for space science missions

    NASA Astrophysics Data System (ADS)

    Lillie, Charles F.; Thompson, Bruce E.

    2008-07-01

    Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs for future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
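
    A mass-based CER is typically a power law, cost = a * mass^b, fit in log-log space to heritage data points. A sketch with entirely hypothetical mission data and coefficients:

```python
import numpy as np

# Hypothetical heritage data points: spacecraft dry mass (kg) and cost
# (FY$M). The fitted coefficients are illustrative, not a real CER.
mass = np.array([250.0, 480.0, 900.0, 1500.0, 2600.0])
cost = np.array([110.0, 190.0, 330.0, 520.0, 840.0])

# Fit cost = a * mass^b in log-log space.
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)

print(f"CER: cost ~ {a:.1f} * mass^{b:.2f}")
print(f"most probable cost for a 1200 kg concept: {a * 1200.0 ** b:.0f} $M")
```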

  5. Using Extended Genealogy to Estimate Components of Heritability for 23 Quantitative and Dichotomous Traits

    PubMed Central

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L.

    2013-01-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays. PMID:23737753

  6. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    PubMed

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L

    2013-05-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.

  7. Subtitle-Based Word Frequencies as the Best Estimate of Reading Behavior: The Case of Greek

    PubMed Central

    Dimitropoulou, Maria; Duñabeitia, Jon Andoni; Avilés, Alberto; Corral, José; Carreiras, Manuel

    2010-01-01

    Previous evidence has shown that word frequencies calculated from corpora based on film and television subtitles can readily account for reading performance, since the language used in subtitles greatly approximates everyday language. The present study examines this issue in a society with increased exposure to subtitle reading. We compiled SUBTLEX-GR, a subtitled-based corpus consisting of more than 27 million Modern Greek words, and tested to what extent subtitle-based frequency estimates and those taken from a written corpus of Modern Greek account for the lexical decision performance of young Greek adults who are exposed to subtitle reading on a daily basis. Results showed that SUBTLEX-GR frequency estimates effectively accounted for participants’ reading performance in two different visual word recognition experiments. More importantly, different analyses showed that frequencies estimated from a subtitle corpus explained the obtained results significantly better than traditional frequencies derived from written corpora. PMID:21833273

  8. Estimating Power System Dynamic States Using Extended Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jaroslaw

    2014-10-31

    The state estimation tools which are currently deployed in power system control rooms are based on a steady state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available and their accuracy is compromised. This paper investigates the application of Extended Kalman Filtering techniques for estimating dynamic states in the state estimation process. The newly formulated "dynamic state estimation" includes true system dynamics reflected in differential equations, unlike previously proposed "dynamic state estimation," which only considers time-variant snapshots based on steady state modeling. This new dynamic state estimation using the Extended Kalman Filter has been successfully tested on a multi-machine system. Sensitivity studies with respect to noise levels, sampling rates, model errors, and parameter errors are presented as well to illustrate the robust performance of the developed dynamic state estimation process.
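
    The Extended Kalman Filter alternates a model-based prediction through the (nonlinear) differential equations with a measurement update using linearized Jacobians. A generic textbook sketch on a toy one-machine swing equation follows; the model, noise levels, and parameters are illustrative, not the paper's multi-machine formulation:

```python
import numpy as np

# Toy one-machine swing equation, discretized with Euler steps:
#   delta' = omega,  omega' = (Pm - Pmax*sin(delta) - D*omega) / M
# Measurement: electrical power  z = Pmax*sin(delta) + noise.
dt, M, D, Pm, Pmax = 0.02, 5.0, 0.5, 0.8, 1.2
Q = np.diag([1e-6, 1e-5])   # process noise covariance
R = np.array([[1e-3]])      # measurement noise covariance

def f(x):
    d, w = x
    return np.array([d + dt * w,
                     w + dt * (Pm - Pmax * np.sin(d) - D * w) / M])

def F(x):  # Jacobian of f
    return np.array([[1.0, dt],
                     [-dt * Pmax * np.cos(x[0]) / M, 1.0 - dt * D / M]])

def h(x):
    return np.array([Pmax * np.sin(x[0])])

def H(x):  # Jacobian of h
    return np.array([[Pmax * np.cos(x[0]), 0.0]])

rng = np.random.default_rng(3)
x_true = np.array([0.5, 0.0])
x_est, P = np.array([0.2, 0.0]), np.eye(2) * 0.1

for _ in range(500):
    x_true = f(x_true) + rng.multivariate_normal([0.0, 0.0], Q)
    z = h(x_true) + rng.normal(0.0, np.sqrt(R[0, 0]), 1)
    # Predict through the dynamics.
    Fk = F(x_est)
    x_est = f(x_est)
    P = Fk @ P @ Fk.T + Q
    # Update with the linearized measurement model.
    S = H(x_est) @ P @ H(x_est).T + R
    K = P @ H(x_est).T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - h(x_est))
    P = (np.eye(2) - K @ H(x_est)) @ P

print(f"true state: {x_true}, EKF estimate: {x_est}")
```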

  9. Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach.

    PubMed

    Pandey, S; Chadha, V K; Laxminarayan, R; Arinaminpathy, N

    2017-04-01

    There is an urgent need for improved estimations of the burden of tuberculosis (TB). To develop a new quantitative method based on mathematical modelling, and to demonstrate its application to TB in India. We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates for annual infections per smear-positive TB case against previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. The model estimates agree with the previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8-156.3). Results show differences in urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently being used.
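
    At steady state, the quantities in the abstract are linked by simple ratios: annual infections per smear-positive case is roughly the ARTI divided by smear-positive prevalence, and incidence is prevalence divided by the mean infectious duration. A toy calculation with assumed inputs (not the paper's fitted values):

```python
# Toy steady-state relations linking the annual risk of tuberculous
# infection (ARTI), smear-positive prevalence, and incidence. All inputs
# are assumed values, not the paper's fitted estimates.
arti = 0.015                # 1.5% of the population infected per year
prevalence = 150 / 100_000  # smear-positive prevalence
duration_years = 1.7        # assumed mean duration of infectiousness

infections_per_case_year = arti / prevalence
incidence_per_100k = 100_000 * prevalence / duration_years

print(f"annual infections per smear-positive case: {infections_per_case_year:.0f}")
print(f"implied incidence: {incidence_per_100k:.0f} per 100,000 per year")
```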

  10. A Doppler centroid estimation algorithm for SAR systems optimized for the quasi-homogeneous source

    NASA Technical Reports Server (NTRS)

    Jin, Michael Y.

    1989-01-01

    Radar signal processing applications frequently require an estimate of the Doppler centroid of a received signal. The Doppler centroid estimate is required for synthetic aperture radar (SAR) processing. It is also required for some applications involving target motion estimation and antenna pointing direction estimation. In some cases, the Doppler centroid can be accurately estimated based on available information regarding the terrain topography, the relative motion between the sensor and the terrain, and the antenna pointing direction. Often, the accuracy of the Doppler centroid estimate can be improved by analyzing the characteristics of the received SAR signal. This kind of signal processing is also referred to as clutterlock processing. A Doppler centroid estimation (DCE) algorithm is described which contains a linear estimator optimized for the type of terrain surface that can be modeled by a quasi-homogeneous source (QHS). Information on the following topics is presented: (1) an introduction to the theory of Doppler centroid estimation; (2) analysis of the performance characteristics of previously reported DCE algorithms; (3) comparison of these analysis results with experimental results; (4) a description and performance analysis of a Doppler centroid estimator which is optimized for a QHS; and (5) comparison of the performance of the optimal QHS Doppler centroid estimator with that of previously reported methods.
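
    A standard member of the clutterlock family discussed above is the pulse-pair (average cross-correlation) estimator, which reads the Doppler centroid off the phase of the lag-one azimuth autocorrelation. A sketch on synthetic data; this illustrates the estimator class, not the paper's QHS-optimized design:

```python
import numpy as np

def accc_doppler_centroid(az_signal, prf):
    """Pulse-pair Doppler centroid estimate: the phase of the lag-one
    azimuth autocorrelation, scaled by PRF / (2*pi)."""
    acc = np.sum(az_signal[1:] * np.conj(az_signal[:-1]))
    return prf * np.angle(acc) / (2.0 * np.pi)

# Synthetic azimuth signal with a known centroid plus complex noise.
prf, f_dc_true, n = 1700.0, 320.0, 4096
t = np.arange(n) / prf
rng = np.random.default_rng(0)
signal = (np.exp(2j * np.pi * f_dc_true * t)
          + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n)))

print(f"estimated Doppler centroid: {accc_doppler_centroid(signal, prf):.1f} Hz")
```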

  11. Estimating trace-suspect match probabilities for singleton Y-STR haplotypes using coalescent theory.

    PubMed

    Andersen, Mikkel Meyer; Caliebe, Amke; Jochens, Arne; Willuweit, Sascha; Krawczak, Michael

    2013-02-01

    Estimation of match probabilities for singleton haplotypes of lineage markers, i.e. for haplotypes observed only once in a reference database augmented by a suspect profile, is an important problem in forensic genetics. We compared the performance of four estimators of singleton match probabilities for Y-STRs, namely the count estimate, both with and without Brenner's so-called 'kappa correction', the surveying estimate, and a previously proposed, but rarely used, coalescent-based approach implemented in the BATWING software. Extensive simulation with BATWING of the underlying population history, haplotype evolution and subsequent database sampling revealed that the coalescent-based approach is characterized by lower bias and lower mean squared error than the uncorrected count estimator and the surveying estimator. Moreover, in contrast to the two count estimators, both the surveying and the coalescent-based approach exhibited a good correlation between the estimated and true match probabilities. However, although its overall performance is thus better than that of any other recognized method, the coalescent-based estimator is still computation-intensive, to the verge of general impracticability. Its application in forensic practice therefore will have to be limited to small reference databases, or to isolated cases of particular interest, until more powerful algorithms for coalescent simulation have become available. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  12. Automatic estimation of heart boundaries and cardiothoracic ratio from chest x-ray images

    NASA Astrophysics Data System (ADS)

    Dallal, Ahmed H.; Agarwal, Chirag; Arbabshirani, Mohammad R.; Patel, Aalpen; Moore, Gregory

    2017-03-01

    Cardiothoracic ratio (CTR) is a widely used radiographic index to assess heart size on chest X-rays (CXRs). Recent studies have suggested that two-dimensional CTR indices might also contain clinical information about heart function. However, manual measurement of such indices is both subjective and time consuming. This study proposes a fast algorithm to automatically estimate CTR indices based on CXRs. The algorithm has three main steps: 1) model based lung segmentation, 2) estimation of heart boundaries from lung contours, and 3) computation of cardiothoracic indices from the estimated boundaries. We extended a previously employed lung detection algorithm to automatically estimate heart boundaries without using ground truth heart markings. We used two datasets: a publicly available dataset with 247 images as well as a clinical dataset with 167 studies from Geisinger Health System. The models of lung fields are learned from both datasets. The lung regions in a given test image are estimated by registering the learned models to patient CXRs. Then, the heart region is estimated by applying the Harris operator on segmented lung fields to detect the corner points corresponding to the heart boundaries. The algorithm calculates three indices, CTR1D, CTR2D, and the cardiothoracic area ratio (CTAR). The method was tested on 103 clinical CXRs and average error rates of 7.9%, 25.5%, and 26.4% (for CTR1D, CTR2D, and CTAR, respectively) were achieved. The proposed method outperforms previous CTR estimation methods without using any heart templates. This method can have important clinical implications as it can provide fast and accurate estimates of cardiothoracic indices.
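
    Once heart and thoracic boundaries are available as binary masks, the classic one-dimensional CTR is the maximum heart width over the maximum thoracic width, and CTAR is the corresponding area ratio. The abstract does not spell out its CTR2D definition, so the sketch below computes only CTR1D and CTAR on toy masks:

```python
import numpy as np

def max_width(mask):
    """Maximum horizontal extent (in pixels) over the rows of a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return max(np.ptp(np.where(mask[r])[0]) + 1 for r in rows)

def cardiothoracic_indices(heart_mask, thorax_mask):
    ctr_1d = max_width(heart_mask) / max_width(thorax_mask)
    ctar = heart_mask.sum() / thorax_mask.sum()  # area ratio
    return ctr_1d, ctar

# Toy masks: an elliptical 'heart' inside a larger 'thoracic' ellipse.
yy, xx = np.mgrid[0:200, 0:200]
thorax = ((xx - 100) / 90) ** 2 + ((yy - 100) / 80) ** 2 <= 1
heart = ((xx - 110) / 35) ** 2 + ((yy - 120) / 30) ** 2 <= 1

ctr_1d, ctar = cardiothoracic_indices(heart, thorax)
print(f"CTR1D = {ctr_1d:.2f}, CTAR = {ctar:.2f}")
```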

  13. Paule‐Mandel estimators for network meta‐analysis with random inconsistency effects

    PubMed Central

    Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose

    2017-01-01

    Network meta‐analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta‐analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between‐study heterogeneity. Models for network meta‐analysis with random inconsistency effects have the dual aim of allowing for inconsistencies and estimating average treatment effects across the whole network. To date, two classical estimation methods for fitting this type of model have been developed: a method of moments that extends DerSimonian and Laird's univariate method and maximum likelihood estimation. However, the Paule and Mandel estimator is another recommended classical estimation method for univariate meta‐analysis. In this paper, we extend the Paule and Mandel method so that it can be used to fit models for network meta‐analysis with random inconsistency effects. We apply all three estimation methods to a variety of examples that have been used previously and we also examine a challenging new dataset that is highly heterogeneous. We perform a simulation study based on this new example. We find that the proposed Paule and Mandel method performs satisfactorily and generally better than the previously proposed method of moments because it provides more accurate inferences. Furthermore, the Paule and Mandel method possesses some advantages over likelihood‐based methods because it is both semiparametric and requires no convergence diagnostics. Although restricted maximum likelihood estimation remains the gold standard, the proposed methodology is a fully viable alternative to this and other estimation methods. PMID:28585257
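
    In the univariate setting that the paper generalizes, the Paule and Mandel estimator chooses the between-study variance tau^2 so that the generalized Q statistic equals its expectation, k - 1, under weights 1/(v_i + tau^2). A minimal sketch of that moment iteration on toy data; the network extension adds inconsistency variance terms:

```python
import numpy as np

def paule_mandel(y, v, tol=1e-10, max_iter=200):
    """Univariate Paule-Mandel estimate of the between-study variance tau^2:
    iterate tau^2 until the generalized Q statistic equals its expected
    value k - 1, with weights w_i = 1 / (v_i + tau^2)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k, tau2 = y.size, 0.0
    for _ in range(max_iter):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - mu) ** 2)
        if abs(q - (k - 1)) < tol:
            break
        # Newton-type step from the moment equation Q(tau^2) = k - 1.
        tau2 = max(0.0, tau2 + (q - (k - 1)) / np.sum(w**2 * (y - mu) ** 2))
    return tau2

# Toy meta-analysis: study effect estimates and within-study variances.
effects = [0.10, 0.30, -0.05, 0.45, 0.22]
variances = [0.02, 0.03, 0.02, 0.05, 0.04]
print(f"tau^2 (Paule-Mandel): {paule_mandel(effects, variances):.4f}")
```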

  14. Cost analysis of life sciences experiments and subsystems. [to be carried in the Spacelab

    NASA Technical Reports Server (NTRS)

    Yakut, M. M.

    1975-01-01

    Cost estimates for experiments and subsystems flown in the Spacelab were established. Ten experiments were cost analyzed. Estimated cost varied from $650,000 for the hardware development of the SPE water electrolysis experiment to $78,500,000 for the development and operation of a representative life sciences laboratory program. The cost of subsystems for thermal, atmospheric and trace contaminants control of the Spacelab internal atmosphere was also estimated. Subsystem cost estimates were based on the utilization of existing components developed in previous space programs whenever necessary.

  15. Child mortality estimation 2013: an overview of updates in estimation methods by the United Nations Inter-agency Group for Child Mortality Estimation.

    PubMed

    Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen

    2014-01-01

    In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues.

  16. Child Mortality Estimation 2013: An Overview of Updates in Estimation Methods by the United Nations Inter-Agency Group for Child Mortality Estimation

    PubMed Central

    Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen

    2014-01-01

    Background In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. Methods We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Findings Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. Conclusions The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues. PMID:25013954

  17. Impulse excitation scanning acoustic microscopy for local quantification of Rayleigh surface wave velocity using B-scan analysis

    NASA Astrophysics Data System (ADS)

    Cherry, M.; Dierken, J.; Boehnlein, T.; Pilchak, A.; Sathish, S.; Grandhi, R.

    2018-01-01

    A new technique for performing quantitative scanning acoustic microscopy imaging of Rayleigh surface wave (RSW) velocity was developed based on b-scan processing. In this technique, the focused acoustic beam is moved through many defocus distances over the sample and excited with an impulse excitation, and advanced algorithms based on frequency filtering and the Hilbert transform are used to post-process the b-scans to estimate the Rayleigh surface wave velocity. The new method was used to estimate the RSW velocity on an optically flat E6 glass sample; the velocity was measured to within ±2 m/s and the scanning time per point was on the order of 1.0 s, both improvements over the previous two-point defocus method. The new method was also applied to the analysis of two titanium samples, and the velocity was estimated with very low standard deviation in certain large grains on the sample. A new behavior was observed with the b-scan analysis technique, where the amplitude of the surface wave decayed dramatically at certain crystallographic orientations. The new technique was also compared with previous results and has been found to be much more reliable and to have higher contrast than previously possible with impulse excitation.
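
    The post-processing pipeline described above rests on two standard ingredients: a Hilbert-transform envelope to time the specular and Rayleigh echoes in each A-scan, and the defocus geometry that converts their delay to a Rayleigh velocity. A sketch on a synthetic A-scan, with an assumed couplant velocity, defocus, and a glass-like 3100 m/s target value; none of these numbers come from the paper:

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

V_WATER = 1480.0  # assumed couplant velocity (m/s)

def rayleigh_velocity(delay, z):
    """RSW velocity from the specular-to-Rayleigh echo delay at defocus z:
    cos(theta_R) = 1 - V_WATER*delay/(2z), v_R = V_WATER / sin(theta_R)."""
    cos_theta = 1.0 - V_WATER * delay / (2.0 * z)
    return V_WATER / np.sqrt(1.0 - cos_theta**2)

# Synthetic A-scan at one defocus: two tone bursts whose spacing encodes an
# assumed glass-like Rayleigh velocity of 3100 m/s.
fs, f0, z = 400e6, 50e6, 300e-6
cos_theta = np.sqrt(1.0 - (V_WATER / 3100.0) ** 2)
delay_true = 2.0 * z / V_WATER * (1.0 - cos_theta)

t = np.arange(0.0, 1.0e-6, 1.0 / fs)
burst = lambda t0: (np.exp(-((t - t0) / 15e-9) ** 2)
                    * np.cos(2 * np.pi * f0 * (t - t0)))
a_scan = burst(0.4e-6) + 0.6 * burst(0.4e-6 + delay_true)

# Hilbert envelope, then pick the two echo peaks to recover the delay.
envelope = np.abs(hilbert(a_scan))
peaks, _ = find_peaks(envelope, height=0.3, distance=int(20e-9 * fs))
delay_est = (peaks[1] - peaks[0]) / fs
print(f"estimated RSW velocity: {rayleigh_velocity(delay_est, z):.0f} m/s")
```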

  18. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    PubMed

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that, regardless of the approach utilized, species richness estimates depend on the size of the analyzed clone libraries. Here we propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates increased with library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 species were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
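
    The extrapolation step lends itself to a small illustration. The sketch below fits richness estimates computed on nested subsets of a library with a saturating curve and reads the asymptote as the sample size-unbiased value; the hyperbolic form and all numbers are illustrative assumptions, not the authors' exact fitting function.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical richness estimates (e.g., ACE-1) computed on nested subsets
# of a clone library; values are illustrative, not from the paper.
library_sizes = np.array([1000, 2000, 4000, 8000, 13001], dtype=float)
richness_est  = np.array([6500, 10200, 14800, 18500, 20909], dtype=float)

def saturating(n, s_true, k):
    """Michaelis-Menten-type saturation: estimate -> s_true as n -> infinity."""
    return s_true * n / (k + n)

(s_true, k), _ = curve_fit(saturating, library_sizes, richness_est,
                           p0=(30000.0, 5000.0))
print(f"asymptotic ('sample size-unbiased') richness ~ {s_true:.0f} species")
```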

  19. Self-report vs. kinematic screening test: prevalence, demographics, and sports biography of yips-affected golfers.

    PubMed

    Klämpfl, Martin K; Philippen, Philipp B; Lobinger, Babett H

    2015-01-01

The yips is considered a task-specific movement disorder. Its estimated prevalence, however, is high compared to similar neurological movement disorders, possibly resulting from previous studies' restriction of samples by skill level and from self-report bias. Alternatively, this high prevalence might be an indication of additional aetiologies, for example the influence of previously played racket sports. We estimated the prevalence of the putting yips across the skill range, using self-reports in one study and a screening test in a second study. We explored whether previously played sports matter for the development of the yips. In study 1, yips prevalence (N = 1,306) and golfers' sports biographies (n = 264) were examined via two online surveys, in which golfers indicated whether they were yips-affected. In study 2, golfers (N = 186) putted in a standardised putting test while kinematic and performance measures were recorded. Prevalence was estimated via a kinematic threshold. Sports biographies (n = 119) were obtained via an online survey. Prevalence of currently yips-affected golfers was 22.4% in study 1 and 16.7% in study 2. In both studies, more yips-affected than unaffected golfers had experience in playing racket sports. Yips prevalence remained higher than the previously estimated prevalence of other movement disorders, but decreased when the whole skill range, including professionals and novices, was considered. Future studies should use the kinematic screening test instead of self-reports to detect the yips and further investigate the influence of previously played racket sports.

  20. Improving Estimation of Ground Casualty Risk From Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, Chris L.

    2017-01-01

A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update of the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or on a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimate based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal shape of the Earth. The new method uses, first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and, second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.

  1. Improving Estimation of Ground Casualty Risk from Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, C.

    2017-01-01

A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update of the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or on a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimate based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal shape of the Earth. The new method uses, first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and, second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
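
    The "simple analytical method" in the first step has a well-known closed form for a circular orbit about a spherical, non-rotating Earth: the long-run fraction of time spent below latitude phi is F(phi) = 1/2 + (1/pi) * arcsin(sin(phi)/sin(i)). The sketch below evaluates band occupancy from this CDF; it is only an approximation of the approach above and omits the paper's ellipsoidal correction.

```python
import numpy as np

def band_fraction(inc_deg, lat_lo_deg, lat_hi_deg):
    """Fraction of time a circular-orbit object of inclination `inc_deg`
    spends between two latitudes (spherical-Earth approximation).

    Uses the CDF of the long-run latitude distribution,
    F(phi) = (1/pi) * arcsin(sin(phi)/sin(i)) + 1/2,
    evaluated at the band edges (the constant cancels in the difference)."""
    i = np.radians(inc_deg)
    def cdf(lat_deg):
        phi = np.radians(np.clip(lat_deg, -inc_deg, inc_deg))
        return np.arcsin(np.sin(phi) / np.sin(i)) / np.pi
    return cdf(lat_hi_deg) - cdf(lat_lo_deg)

# Example: for 51.6 deg inclination, the narrow 1.6-degree band at the
# turning latitude holds about as much dwell time as the 10-degree
# equatorial band, showing the density peak near +/- inclination.
print(band_fraction(51.6, 50.0, 51.6))
print(band_fraction(51.6, 0.0, 10.0))
```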

  2. Rules of Thumb for Depth of Investigation, Pseudo-Position and Resolution of the Electrical Resistivity Method from Analysis of the Moments of the Sensitivity Function for a Homogeneous Half-Space

    NASA Astrophysics Data System (ADS)

    Butler, S. L.

    2017-12-01

The electrical resistivity method is now highly developed, with 2D and even 3D surveys routinely performed and fast inversion software available. However, rules of thumb, based on simple mathematical formulas, for important quantities like depth of investigation, horizontal position and resolution have not previously been available; they would be useful for survey planning, preliminary interpretation and general education about the method. In this contribution, I will show that the sensitivity function for the resistivity method for a homogeneous half-space can be analyzed in terms of its first and second moments, which yield simple mathematical formulas. The first moment gives the sensitivity-weighted center of an apparent resistivity measurement, with the vertical center being an estimate of the depth of investigation. I will show that this depth of investigation estimate works at least as well as previous estimates based on the peak and median of the depth sensitivity function, which must be calculated numerically for a general four-electrode array. The vertical and horizontal first moments can also be used as pseudo-positions when plotting 1D, 2D and 3D pseudosections. The appropriate horizontal plotting point for a pseudosection was not previously obvious for nonsymmetric arrays. The second moments of the sensitivity function give estimates of the spatial extent of the region contributing to an apparent resistivity measurement and hence are measures of resolution. These also have simple mathematical formulas.

  3. Convex-hull mass estimates of the dodo (Raphus cucullatus): application of a CT-based mass estimation technique

    PubMed Central

    O’Mahoney, Thomas G.; Kitchener, Andrew C.; Manning, Phillip L.; Sellers, William I.

    2016-01-01

The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based on hind limb dimensions of extant pigeons. Yet the hind limb proportions of the dodo have been found to differ considerably from those of its modern relatives, particularly with regard to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT)-based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit 'convex hulls' around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = −2.31, b = 0.90, r2 = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8–10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6–14.3 kg. Mass predictions presented here overlap with the lower end of those previously published, and support recent suggestions of a relatively slim dodo. CT-based reconstructions provide a means of objectively estimating mass and body segment properties of extinct species using whole articulated skeletons. PMID:26788418
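
    As an illustration of the convex-hull regression pipeline, the sketch below wraps scipy convex hulls around stand-in point clouds (rather than articulated CT renders) and fits a log10-log10 regression of mass on hull volume, the form implied by the reported a, b, r2 and MSE; all data here are fabricated placeholders.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# Hypothetical stand-ins for digitized skeletal point clouds (metres) and
# body masses (kg) of extant pigeons; real inputs come from CT renders.
skeleton_points = [rng.standard_normal((200, 3)) * s for s in
                   (0.02, 0.03, 0.04, 0.05, 0.06)]
masses_kg = np.array([0.2, 0.55, 1.2, 2.3, 4.0])

hull_volumes = np.array([ConvexHull(p).volume for p in skeleton_points])

# Log-log regression of mass on convex hull volume; the paper reports
# coefficients a = -2.31, b = 0.90, whereas these are toy values.
b, a = np.polyfit(np.log10(hull_volumes), np.log10(masses_kg), 1)

def predict_mass(volume_m3):
    return 10 ** (a + b * np.log10(volume_m3))

print(f"fitted a={a:.2f}, b={b:.2f};  mass for a 0.01 m^3 hull: "
      f"{predict_mass(0.01):.1f} kg")
```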

  4. A New Geological Slip Rate Estimate for the Calico Fault, Eastern California: Implications for Geodetic Versus Geologic Rate Estimates in the Eastern California Shear Zone

    NASA Astrophysics Data System (ADS)

    Wetmore, P. H.; Xie, S.; Gallant, E.; Owen, L. A.; Dixon, T. H.

    2017-12-01

Fault slip rate is fundamental to accurate seismic hazard assessment. In the Mojave Desert section of the Eastern California Shear Zone (ECSZ), previous studies have suggested a discrepancy between short-term geodetic and long-term geologic slip rate estimates. Understanding the origin of this discrepancy could lead to a better understanding of stress evolution, and improve earthquake hazard estimates in general. We measured offsets in alluvial fans along the Calico fault near Newberry Springs, California, and used exposure age dating based on the cosmogenic nuclide 10Be to date the offset landforms. We derive a mean slip rate of 3.6 mm/yr, representing an average over the last few hundred thousand years, significantly faster than previous estimates. Considering the numerous faults in the Mojave Desert and the limited number of geologic slip rate estimates, it is premature to claim a geologic versus geodetic "discrepancy" for the ECSZ. More slip rate data, from all faults within the ECSZ, are needed to provide a statistically meaningful assessment of the geologic rates for each of the faults comprising the zone.

  5. Estimation of median human lethal radiation dose computed from data on occupants of reinforced concrete structures in Nagasaki, Japan.

    PubMed

    Levin, S G; Young, R W; Stohler, R L

    1992-11-01

This paper presents an estimate of the median lethal dose for humans exposed to total-body irradiation and not subsequently treated for radiation sickness. The median lethal dose was estimated from calculated doses to young adults who were inside two reinforced concrete buildings that remained standing in Nagasaki after the atomic detonation. The individuals in this study, none of whom had previously had their doses calculated, were identified from a detailed survey conducted earlier. Radiation dose to the bone marrow, which was taken as the critical radiation site, was calculated for each individual by the Engineering Physics and Mathematics Division of the Oak Ridge National Laboratory using a new three-dimensional discrete-ordinates radiation transport code that was developed and validated for this study using the latest site geometry, radiation yield, and spectra data. The study cohort consisted of 75 individuals who either survived > 60 d or died between the second and 60th d postirradiation due to radiation injury, without burns or other serious injury. Median lethal dose estimates were calculated using both logarithmic (2.9 Gy) and linear (3.4 Gy) dose scales. Both calculations, which met statistical validity tests, support previous estimates of the median lethal dose based solely on human data, which cluster around 3 Gy.
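
    A median lethal dose of this kind is classically obtained by probit regression of mortality on log dose, with the LD50 at the dose where predicted mortality is 50%. The sketch below shows that general technique on invented grouped data; it is not a reproduction of the study's dosimetry or cohort.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical grouped dose-mortality data (dose in Gy); the study's actual
# cohort consisted of 75 individually dosed survivors and decedents.
dose   = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
n      = np.array([12, 15, 14, 13, 11, 10])
deaths = np.array([0, 3, 7, 9, 10, 10])

# Probit regression of mortality on log dose (the classic LD50 analysis)
X = sm.add_constant(np.log(dose))
model = sm.GLM(np.column_stack([deaths, n - deaths]), X,
               family=sm.families.Binomial(link=sm.families.links.Probit()))
res = model.fit()

b0, b1 = res.params
ld50 = np.exp(-b0 / b1)   # log-dose where predicted mortality = 50%
print(f"estimated median lethal dose: {ld50:.2f} Gy")
```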

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savy, J.

New design and evaluation guidelines for Department of Energy facilities subjected to natural phenomena hazards are being finalized. Although still in draft form at this time, the document describing those guidelines should be considered an update of previously available guidelines. The recommendations in the guidelines document mentioned above, referred to simply as "the guidelines" hereafter, are based on the best information available at the time of its development. In particular, the seismic hazard model for the Princeton site was based on a study performed in 1981 for Lawrence Livermore National Laboratory (LLNL), which relied heavily on the results of the NRC's Systematic Evaluation Program and was based on a methodology and data sets developed in 1977 and 1978. Considerable advances have been made in the last ten years in the domain of seismic hazard modeling. Thus, it is recommended to update the estimate of the seismic hazard at the DOE sites whenever possible. The major differences between previous estimates and the ones proposed in this study for the PPPL are in the modeling of the strong ground motion at the site and in the treatment of the total uncertainty in the estimates to include knowledge uncertainty, random uncertainty, and expert opinion diversity as well. 28 refs.

  7. Extending Theory-Based Quantitative Predictions to New Health Behaviors.

    PubMed

    Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O

    2016-04-01

Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.

  8. Estimate of Fuel Consumption and GHG Emission Impact from an Automated Mobility District

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuche; Young, Stanley; Qi, Xuewei

    2015-10-19

This study estimates the range of fuel and emissions impacts of an automated-vehicle (AV) based transit system that services campus-based developments, termed an automated mobility district (AMD). The study develops a framework to quantify the fuel consumption and greenhouse gas (GHG) emission impacts of a transit system comprised of AVs, taking into consideration average vehicle fleet composition, fuel consumption/GHG emissions of vehicles within specific speed bins, and the average occupancy of passenger vehicles and transit vehicles. The framework is exercised using a previous mobility analysis of a personal rapid transit (PRT) system, a system which shares many attributes with envisioned AV-based transit systems. Total fuel consumption and GHG emissions with and without an AMD are estimated, providing a range of potential system impacts on sustainability. The results of a previous case study, based on a proposed implementation of PRT on the Kansas State University (KSU) campus in Manhattan, Kansas, serve as the basis to estimate personal miles traveled supplanted by an AMD at varying levels of service. The results show that an AMD has the potential to reduce total system fuel consumption and GHG emissions, but the amount is largely dependent on operating and ridership assumptions. The study points to the need to better understand ride-sharing scenarios and calls for future research on the sustainability benefits of an AMD system at both the vehicle and system levels.
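
    The accounting in such a framework reduces to summing fuel over speed bins and normalizing by occupancy. A toy version, with every input invented for illustration, is shown below.

```python
# Minimal sketch of the framework described above: fuel use aggregated over
# speed bins, normalized by occupancy to compare per-passenger-mile impacts.
# All numbers are illustrative placeholders, not results from the study.

speed_bins_mph   = [10, 20, 30, 40]          # bin midpoints
vmt_by_bin       = [5e4, 1.2e5, 9e4, 3e4]    # vehicle-miles traveled per bin
gal_per_mile_bin = [0.07, 0.05, 0.045, 0.05] # fuel intensity per speed bin
kg_co2_per_gal   = 8.89                      # gasoline combustion factor

def per_passenger_mile(occupancy):
    fuel = sum(v * g for v, g in zip(vmt_by_bin, gal_per_mile_bin))
    pmt = sum(vmt_by_bin) * occupancy        # passenger-miles traveled
    return fuel * kg_co2_per_gal / pmt       # kg CO2 per passenger-mile

print(f"private car (1.6 pax): {per_passenger_mile(1.6):.3f} kg CO2/pax-mi")
print(f"AMD shuttle (4.0 pax): {per_passenger_mile(4.0):.3f} kg CO2/pax-mi")
```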

  9. Modest familial risks for multiple sclerosis: a registry-based study of the population of Sweden

    PubMed Central

    Westerlind, Helga; Ramanujam, Ryan; Uvehag, Daniel; Kuja-Halkola, Ralf; Boman, Marcus; Bottai, Matteo; Lichtenstein, Paul

    2014-01-01

Data on familial recurrence rates of complex diseases such as multiple sclerosis give important hints to aetiological factors such as the importance of genes and environment. By linking national registries, we sought to avoid common limitations of clinic-based studies such as low numbers, poor representation of the population and selection bias. Through the Swedish Multiple Sclerosis Registry and a nationwide hospital registry, a total of 28 396 patients with multiple sclerosis were identified. We used the national Multi-Generation Registry to identify first and second degree relatives as well as cousins, and the Swedish Twin Registry to identify twins of patients with multiple sclerosis. Crude and age-corrected familial risks were estimated for cases and found to be in the same range as previously published figures. Matched population-based controls were used to calculate relative risks, revealing lower estimates of familial multiple sclerosis risks than previously reported, with a sibling recurrence risk λs of 7.1 (95% confidence interval: 6.42–7.86). Surprisingly, despite a well-established lower prevalence of multiple sclerosis amongst males, the relative risks were equal among maternal and paternal relations. A previously reported increased risk in maternal relations could thus not be replicated. An observed higher transmission rate from fathers to sons compared with mothers to sons suggested a higher transmission to offspring from the less prevalent sex; therefore, presence of the so-called ‘Carter effect’ could not be excluded. We estimated the heritability of multiple sclerosis using 74 757 twin pairs with known zygosity, of which 315 were affected with multiple sclerosis, and added information from 2.5 million sibling pairs to increase power. The heritability was estimated to be 0.64 (0.36–0.76), whereas the shared environmental component was estimated to be 0.01 (0.00–0.18). In summary, whereas multiple sclerosis is to a great extent an inherited trait, the familial relative risks may be lower than usually reported. PMID:24441172
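
    The sibling recurrence risk ratio reported above is, in essence, the prevalence among siblings of cases divided by the population prevalence. The sketch below shows that arithmetic with invented counts chosen to land near the reported λs = 7.1, plus a Falconer-style twin contrast as a back-of-envelope analogue of the heritability estimate (the study itself used structural equation modeling on registry data).

```python
# Illustrative computation of a sibling recurrence risk ratio (lambda_s):
# prevalence among siblings of cases divided by the population prevalence.
# Counts below are made up; the study derived lambda_s = 7.1 from registries.

sib_affected, sib_total = 180, 14000        # siblings of MS cases
pop_affected, pop_total = 28396, 15_700_000 # national population (rough scale)

k_sib = sib_affected / sib_total
k_pop = pop_affected / pop_total
print(f"lambda_s = {k_sib / k_pop:.1f}")

# Falconer-style heritability contrast from twin correlations (illustrative
# values only; not the registry study's structural equation estimate):
r_mz, r_dz = 0.70, 0.38
print(f"h2 ~ {2 * (r_mz - r_dz):.2f}")
```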

  10. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior parameter estimates; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has superior performance: faster convergence and an enhanced signal-to-noise ratio.
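
    A schematic of the ASA selection-and-average step, under the simplifying assumption that ensemble spread is an honest proxy for local estimate quality; the function, the keep fraction, and the synthetic data are illustrative, not the authors' implementation.

```python
import numpy as np

def adaptive_spatial_average(param_post, spread, keep_frac=0.3):
    """Adaptive spatial average (ASA) sketch: average the posterior parameter
    only over grid points whose ensemble spread is smallest.

    param_post : (n_grid,) spatially varying posterior parameter estimates
    spread     : (n_grid,) ensemble spread at each grid point
    keep_frac  : fraction of 'good' (low-spread) points retained
    """
    n_keep = max(1, int(keep_frac * param_post.size))
    good = np.argsort(spread)[:n_keep]    # low spread = high confidence
    return param_post[good].mean()        # final global uniform parameter

# Synthetic check: spread is built to correlate with local error, so the
# ASA value should sit closer to the truth (2.5) than the plain mean.
rng = np.random.default_rng(1)
true_value = 2.5
post = true_value + rng.normal(0, 0.3, 500)
sprd = np.abs(post - true_value) + rng.normal(0, 0.05, 500).clip(0)
print(adaptive_spatial_average(post, sprd))
```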

  11. Near Real-time GNSS-based Ionospheric Model using Expanded Kriging in the East Asia Region

    NASA Astrophysics Data System (ADS)

    Choi, P. H.; Bang, E.; Lee, J.

    2016-12-01

Many applications which utilize radio waves (e.g. navigation, communications, and radio sciences) are influenced by the ionosphere. The technology to provide global ionospheric maps (GIMs), which show ionospheric Total Electron Content (TEC), has progressed through the processing of GNSS data. However, GIMs have limited spatial resolution (e.g. 2.5° in latitude and 5° in longitude), because they are generated using globally distributed and thus relatively sparse GNSS reference station networks. This study presents a near real-time, high-spatial-resolution TEC model over East Asia, built from ionospheric observables from both International GNSS Service (IGS) and local GNSS networks using an expanded kriging method. New signals from multi-constellation GNSS (e.g., GPS L5, Galileo E5) were also used to generate high-precision TEC estimates. The newly proposed estimation method is based on the universal kriging interpolation technique, but integrates TEC data from previous epochs with those from the current epoch to improve the TEC estimation performance by increasing ionospheric observability. To propagate previous measurements to the current epoch, we implemented a Kalman filter whose dynamic model was derived using a first-order Gauss-Markov process that characterizes temporal ionospheric changes under nominal ionospheric conditions. Along with the TEC estimates at grid points, the method generates confidence bounds on the estimates using the resulting estimation covariance. We also suggest classifying the confidence bounds into several categories to allow users to recognize the quality levels of TEC estimates according to the requirements of their applications. This paper examines the performance of the proposed method by obtaining estimation results for both nominal and disturbed ionospheric conditions, and compares these results to those provided by the GIM of the NASA Jet Propulsion Laboratory. In addition, the estimation results based on the expanded kriging method are compared to results from the universal kriging method for both nominal and disturbed ionospheric conditions.
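
    The propagation step can be made concrete with a scalar example: a first-order Gauss-Markov model carries one grid point's TEC from the previous epoch to the current one inside a Kalman filter. All constants, the background mean, and the observations below are illustrative stand-ins, not values from the study.

```python
import numpy as np

tau, sigma, dt = 1800.0, 5.0, 300.0   # GM correlation time (s), TECU, step (s)
phi = np.exp(-dt / tau)               # state transition over one epoch
q = sigma**2 * (1.0 - phi**2)         # process noise keeping variance sigma^2
r = 2.0**2                            # measurement noise variance (TECU^2)
m = 20.0                              # background (climatological) mean TEC

x, p = m, sigma**2                    # prior state and variance
for z in (21.3, 22.1, 20.7, 23.5):    # fake TEC observations, one per epoch
    x = m + phi * (x - m)             # GM reversion toward the background
    p = phi * phi * p + q             # propagate previous epoch forward
    k = p / (p + r)                   # Kalman gain
    x, p = x + k * (z - x), (1.0 - k) * p
    print(f"TEC = {x:5.2f} +/- {np.sqrt(p):4.2f} TECU")
```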

  12. A new methodological approach to adjust alcohol exposure distributions to improve the estimation of alcohol-attributable fractions.

    PubMed

    Parish, William J; Aldridge, Arnie; Allaire, Benjamin; Ekwueme, Donatus U; Poehler, Diana; Guy, Gery P; Thomas, Cheryll C; Trogdon, Justin G

    2017-11-01

To assess the burden of excessive alcohol use, researchers routinely estimate alcohol-attributable fractions (AAFs). However, under-reporting in survey data can bias these estimates. We present an approach that adjusts for under-reporting in the estimation of AAFs, particularly within subgroups. This framework is a refinement of a previous method by Rehm et al. We use a measurement error model to derive the 'true' alcohol distribution from a 'reported' alcohol distribution. The 'true' distribution leverages per-capita sales data to identify the distribution average and then identifies the shape of the distribution with self-reported survey data. Data are from the National Alcohol Survey (NAS), the National Household Survey on Drug Abuse (NHSDA) and the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). We compared our approach with previous approaches by estimating the AAF of female breast cancer cases. Compared with Rehm et al.'s approach, our refinement performs similarly under a gamma assumption. For example, among females aged 18-25 years, the two approaches produce estimates from NHSDA that are within a percentage point of each other. However, relaxing the gamma assumption generally produces more conservative estimates. For example, among females aged 18-25 years, estimates from NHSDA based on the best-fitting distribution attribute only 19.33% of breast cancer cases, a much smaller proportion than the gamma-based estimates of approximately 28%. A refinement of Rehm et al.'s approach to adjusting for under-reporting in the estimation of alcohol-attributable fractions provides more flexibility. This flexibility can avoid biases associated with failing to account for underlying differences in alcohol consumption patterns across study populations. Comparisons of our refinement with Rehm et al.'s approach show that results are similar when a gamma distribution is assumed, but appreciably lower when the best-fitting distribution is chosen instead. © 2017 Society for the Study of Addiction.
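
    A compressed sketch of the adjustment logic: keep the distributional shape fitted to survey data, rescale so the mean matches per-capita sales, then integrate a relative-risk curve against the adjusted density to get the AAF. The gamma form matches the paper's baseline assumption; the risk curve and all numbers are invented for illustration.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

survey_mean_g_day = 9.0    # self-reported mean consumption (grams/day)
sales_mean_g_day  = 15.0   # per-capita sales imply a higher true mean
shape_k = 1.2              # gamma shape fitted to the survey data

theta = sales_mean_g_day / shape_k          # scale so mean = sales mean
adjusted = stats.gamma(a=shape_k, scale=theta)

def rr(x):
    """Toy log-linear relative-risk curve for breast cancer vs grams/day."""
    return np.exp(0.01 * x)

# Alcohol-attributable fraction among drinkers:
# AAF = E[RR - 1] / (E[RR - 1] + 1)
excess, _ = quad(lambda x: (rr(x) - 1.0) * adjusted.pdf(x), 0, 150)
print(f"AAF = {excess / (excess + 1.0):.3f}")
```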

  13. An ecological risk assessment of the acute and chronic effects of the herbicide clopyralid to rainbow trout (Oncorhynchus mykiss)

    USGS Publications Warehouse

    Fairchild, J.F.; Allert, A.L.; Feltz, K.P.; Nelson, K.J.; Valle, J.A.

    2009-01-01

Clopyralid (3,6-dichloro-2-pyridinecarboxylic acid) is a pyridine herbicide frequently used to control invasive, noxious weeds in the northwestern United States. Clopyralid exhibits low acute toxicity to fish, including the rainbow trout (Oncorhynchus mykiss) and the threatened bull trout (Salvelinus confluentus). However, there are no published chronic toxicity data for clopyralid and fish that can be used in ecological risk assessments. We conducted 30-day chronic toxicity studies with juvenile rainbow trout exposed to the acid form of clopyralid. The 30-day maximum acceptable toxicant concentration (MATC) for growth, calculated as the geometric mean of the no observable effect concentration (68 mg/L) and the lowest observable effect concentration (136 mg/L), was 96 mg/L. No mortality was measured at the highest chronic concentration tested (273 mg/L). The acute:chronic ratio, calculated by dividing the previously published 96-h acutely lethal concentration (96-h ALC50; 700 mg/L) by the MATC, was 7.3. Toxicity values were compared to a four-tiered exposure assessment profile assuming an application rate of 1.12 kg/ha. The Tier 1 exposure estimate, based on direct overspray of a 2-m deep pond, was 0.055 mg/L. The Tier 2 maximum exposure estimate, based on the Generic Exposure Estimate Concentration model (GEENEC), was 0.057 mg/L. The Tier 3 maximum exposure estimate, based on previously published results of the Groundwater Loading Effects of Agricultural Management Systems model (GLEAMS), was 0.073 mg/L. The Tier 4 exposure estimate, based on published edge-of-field monitoring data, was 0.008 mg/L. Comparison of toxicity data to estimated environmental concentrations of clopyralid indicates that the safety factor for rainbow trout exposed to clopyralid at labeled use rates exceeds 1000. Therefore, the herbicide presents little to no risk to rainbow trout or other salmonids such as the threatened bull trout. © 2009 US Government.

  14. Use of claims data to estimate annual cervical cancer screening percentages in Portland metropolitan area, Oregon.

    PubMed

    Abdullah, Nasreen; Laing, Robert S; Hariri, Susan; Young, Collette M; Schafer, Sean

    2016-04-01

Human papillomavirus (HPV) vaccine should reduce cervical dysplasia before cervical cancer. However, dysplasia diagnosis is screening-dependent, so accurate screening estimates are needed. Our objective was to estimate the percentage of women in a geographic population that had received cervical cancer screening. We analyzed claims data for Papanicolaou (Pap) tests from 2008-2012 to estimate the percentage of insured women aged 18-39 years screened. We estimated screening in uninsured women by dividing the percentage of insured Behavioral Risk Factor Surveillance Survey respondents reporting previous-year testing by the percentage of uninsured respondents reporting previous-year testing, and multiplying this ratio by claims-based estimates of insured women with previous-year screening. We calculated a simple weighted average of the two estimates to estimate the overall screening percentage. We estimated credible intervals using Monte Carlo simulations. During 2008-2012, an annual average of 29.6% of women aged 18-39 years were screened. Screening increased from 2008 to 2009 in all age groups. During 2009-2012, the screening percentages decreased for all groups, but declined most in women aged 18-20 years, from 21.5% to 5.4%. Within age groups, compared to 2009, credible intervals did not overlap during 2011 (except for the 21-29 years age group) and 2012, and credible intervals in the 18-20 year group did not overlap with older groups in any year. This study introduces a novel method to estimate population-level cervical cancer screening. Overall, the percentage of women screened in Portland, Oregon fell following changes in screening recommendations released in 2009 and later modified in 2012. Copyright © 2016 Elsevier Ltd. All rights reserved.
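
    The estimator combines a claims-based insured percentage with a BRFSS-derived ratio, then propagates uncertainty by Monte Carlo. A sketch with invented inputs follows; the ratio direction (uninsured-to-insured, scaling the claims estimate downward) and the population weighting are assumptions, since the abstract does not fully specify them.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Illustrative inputs: claims-based insured screening percentage and BRFSS
# previous-year testing by insurance status (with rough standard errors).
p_insured_claims = rng.normal(0.33, 0.01, n_sim)   # from claims data
brfss_insured    = rng.normal(0.55, 0.03, n_sim)   # self-report, insured
brfss_uninsured  = rng.normal(0.35, 0.04, n_sim)   # self-report, uninsured

# Scale the claims-based estimate by the uninsured/insured reporting ratio
p_uninsured = p_insured_claims * (brfss_uninsured / brfss_insured)

# Population-weighted average of the two estimates (insured share assumed)
share_insured = 0.8
overall = share_insured * p_insured_claims + (1 - share_insured) * p_uninsured

lo, hi = np.percentile(overall, [2.5, 97.5])
print(f"screened: {overall.mean():.1%} "
      f"(95% credible interval {lo:.1%}-{hi:.1%})")
```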

  15. Application of a Tenax Model to Assess Bioavailability of Polychlorinated Biphenyls in Field Sediments

    EPA Science Inventory

    Recent literature has shown that bioavailability-based techniques, such as Tenax extraction, can estimate sediment exposure to benthos. In a previous study by the authors,Tenax extraction was used to create and validate a literature-based Tenax model to predict oligochaete bioac...

  16. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.

    PubMed

    Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben

    2018-02-22

This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that have used SPAD as a reference device. The accuracy reached 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in chlorophyll content estimation by using an optical arrangement that yields both the reflectance and transmittance information, while the required hardware is cheap.
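
    The estimation step itself is ordinary least squares on two predictors. A minimal sketch with fabricated calibration data follows; the values, units and coefficients are not from the paper.

```python
import numpy as np

# Hypothetical calibration set: leaf reflectance R and transmittance T
# (fractions) with spectrophotometer chlorophyll (mg/m^2) as ground truth.
R   = np.array([0.08, 0.10, 0.12, 0.15, 0.18, 0.22])
T   = np.array([0.30, 0.27, 0.24, 0.20, 0.17, 0.13])
chl = np.array([520., 470., 410., 340., 280., 210.])

# Linear regression chl ~ b0 + b1*R + b2*T, as in the approach above
X = np.column_stack([np.ones_like(R), R, T])
coef, *_ = np.linalg.lstsq(X, chl, rcond=None)

def estimate_chlorophyll(reflectance, transmittance):
    return coef @ np.array([1.0, reflectance, transmittance])

print(f"estimated chl at R=0.14, T=0.22: "
      f"{estimate_chlorophyll(0.14, 0.22):.0f} mg/m^2")
```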

  17. An Auto-Calibrating Knee Flexion-Extension Axis Estimator Using Principal Component Analysis with Inertial Sensors.

    PubMed

    McGrath, Timothy; Fineman, Richard; Stirling, Leia

    2018-06-08

Inertial measurement units (IMUs) have been demonstrated to reliably measure human joint angles, an essential quantity in the study of biomechanics. However, most previous literature proposed IMU-based joint angle measurement systems that required manual alignment or prescribed calibration motions. This paper presents a simple, physically intuitive method for IMU-based measurement of the knee flexion/extension angle in gait without requiring alignment or discrete calibration, based on computationally efficient and easy-to-implement Principal Component Analysis (PCA). The method is compared against an optical motion capture knee flexion/extension angle modeled through OpenSim. The method is evaluated using both measured and simulated IMU data in an observational study (n = 15), with an absolute root-mean-square error (RMSE) of 9.24° and a zero-mean RMSE of 3.49°. Variation in error across subjects was found, revealed by a subject population larger than previous literature considers. Finally, the paper presents an explanatory model of RMSE as a function of IMU mounting location. The observational data suggest that the RMSE of the method is a function of thigh IMU perturbation and axis estimation quality. However, the effect size of these parameters is small in comparison to potential gains from improved IMU orientation estimates. Results also highlight the need to set relevant datums from which to interpret joint angles for both truth references and estimated data.
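
    The core idea, that the first principal component of angular velocity during gait points along the flexion/extension axis, can be sketched in a few lines. The implementation below is a simplification of the paper's method, with a synthetic self-check; the function name and data are hypothetical.

```python
import numpy as np

def flexion_axis_from_gyro(gyro_thigh, gyro_shank):
    """Estimate the knee flexion/extension axis in each IMU's frame via PCA.

    During gait, angular velocity about the knee is dominated by
    flexion/extension, so the first principal component of the measured
    angular velocities points (up to sign) along the joint axis.

    gyro_* : (n_samples, 3) angular velocity in each sensor frame (rad/s)
    """
    axes = []
    for w in (gyro_thigh, gyro_shank):
        w0 = w - w.mean(axis=0)                     # center the samples
        _, _, vt = np.linalg.svd(w0, full_matrices=False)
        axes.append(vt[0] / np.linalg.norm(vt[0]))  # first PC = dominant axis
    return axes  # [axis in thigh frame, axis in shank frame]

# Synthetic check: a known axis (0, 1, 0) plus noise should be recovered.
rng = np.random.default_rng(0)
omega = np.sin(np.linspace(0, 20, 1000))[:, None] * np.array([0.0, 1.0, 0.0])
noisy = omega + 0.05 * rng.standard_normal(omega.shape)
print(flexion_axis_from_gyro(noisy, noisy)[0])      # ~ [0, +/-1, 0]
```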

  18. Revisiting the Table 2 fallacy: A motivating example examining preeclampsia and preterm birth.

    PubMed

    Bandoli, Gretchen; Palmsten, Kristin; Chambers, Christina D; Jelliffe-Pawlowski, Laura L; Baer, Rebecca J; Thompson, Caroline A

    2018-05-21

    A "Table Fallacy," as coined by Westreich and Greenland, reports multiple adjusted effect estimates from a single model. This practice, which remains common in published literature, can be problematic when different types of effect estimates are presented together in a single table. The purpose of this paper is to quantitatively illustrate this potential for misinterpretation with an example estimating the effects of preeclampsia on preterm birth. We analysed a retrospective population-based cohort of 2 963 888 singleton births in California between 2007 and 2012. We performed a modified Poisson regression to calculate the total effect of preeclampsia on the risk of PTB, adjusting for previous preterm birth. pregnancy alcohol abuse, maternal education, and maternal socio-demographic factors (Model 1). In subsequent models, we report the total effects of previous preterm birth, alcohol abuse, and education on the risk of PTB, comparing and contrasting the controlled direct effects, total effects, and confounded effect estimates, resulting from Model 1. The effect estimate for previous preterm birth (a controlled direct effect in Model 1) increased 10% when estimated as a total effect. The risk ratio for alcohol abuse, biased due to an uncontrolled confounder in Model 1, was reduced by 23% when adjusted for drug abuse. The risk ratio for maternal education, solely a predictor of the outcome, was essentially unchanged. Reporting multiple effect estimates from a single model may lead to misinterpretation and lack of reproducibility. This example highlights the need for careful consideration of the types of effects estimated in statistical models. © 2018 John Wiley & Sons Ltd.

  19. On-line implementation of nonlinear parameter estimation for the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Buckland, Julia H.; Musgrave, Jeffrey L.; Walker, Bruce K.

    1992-01-01

We investigate the performance of a nonlinear estimation scheme applied to the estimation of several parameters in a performance model of the Space Shuttle Main Engine. The nonlinear estimator is based upon the extended Kalman filter, which has been augmented to provide estimates of several key performance variables. The estimated parameters are directly related to the efficiency of both the low-pressure and high-pressure fuel turbopumps. Decreases in the parameter estimates may be interpreted as degradations in turbine and/or pump efficiencies, which can be useful measures for an online health monitoring algorithm. This paper extends previous work, which focused on off-line parameter estimation, by investigating the filter's on-line potential from a computational standpoint. In addition, we examine the robustness of the algorithm to unmodeled dynamics. The filter uses a reduced-order model of the engine that includes only fuel-side dynamics. The on-line results produced during this study are comparable to off-line results generated previously. The results show that the parameter estimates are sensitive to dynamics not included in the filter model. Off-line results using an extended Kalman filter with a full-order engine model to address the robustness problems of the reduced-order model are also presented.

  20. Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach

    PubMed Central

    Chadha, V. K.; Laxminarayan, R.; Arinaminpathy, N.

    2017-01-01

BACKGROUND: There is an urgent need for improved estimations of the burden of tuberculosis (TB). OBJECTIVE: To develop a new quantitative method based on mathematical modelling, and to demonstrate its application to TB in India. DESIGN: We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and prevalence of smear-positive TB. We first compared model estimates for annual infections per smear-positive TB case using previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. RESULTS: Study model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8–156.3). Results show differences in urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. CONCLUSIONS: Simple models of TB transmission, in conjunction with necessary data, can offer approaches to burden estimation that complement those currently being used. PMID:28284250
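
    The steady-state logic of such a model can be caricatured in a few lines: incidence is roughly prevalence divided by the mean duration of infectiousness, and annual infections per case follow from the annual risk of tuberculous infection (ARTI). The numbers below are illustrative, not the paper's fitted values.

```python
# Back-of-envelope version of the model logic; all inputs are invented.
population = 1.0e5
arti = 0.015                 # annual risk of tuberculous infection
prev_per_100k = 250.0        # smear-positive prevalence
duration_years = 2.0         # mean infectious duration (assumed)

cases = prev_per_100k / 1e5 * population
incidence_per_100k = prev_per_100k / duration_years   # steady state
infections_per_case = arti * population / cases

print(f"incidence ~ {incidence_per_100k:.0f} per 100,000/year")
print(f"annual infections per smear-positive case ~ {infections_per_case:.1f}")
```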

  1. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target changes with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum-likelihood-based image tracking technique described in this paper is a closed-loop structure capable of providing iterative updates of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.

  2. Recurrence of preterm birth and perinatal mortality in northern Tanzania: registry-based cohort study.

    PubMed

    Mahande, Michael J; Daltveit, Anne K; Obure, Joseph; Mmbaga, Blandina T; Masenga, Gileard; Manongi, Rachel; Lie, Rolv T

    2013-08-01

To estimate the recurrence risk of preterm delivery and the perinatal mortality in repeated preterm deliveries. Prospective study in Tanzania of 18 176 women who delivered a singleton between 2000 and 2008 at KCMC hospital. The women were followed up to 2010 for consecutive births. A total of 3359 women were identified, with a total of 3867 subsequent deliveries in the follow-up period. Recurrence risk of preterm birth and perinatal mortality were estimated using log-binomial regression and adjusted for potential confounders. For women with a previous preterm birth, the risk of preterm birth in a subsequent pregnancy was 17%. This recurrence risk was estimated to be 2.7-fold (95% CI: 2.1-3.4) that of women with a previous term birth. The perinatal mortality of babies in a second preterm birth of the same woman was 15%. Babies born at term who had an older sibling born preterm had a perinatal mortality of 10%. Babies born at term who had an older sibling also born at term had a perinatal mortality of 1.7%. Previous delivery of a preterm infant is a strong predictor of future preterm births in Tanzania. Previous or repeated preterm births increase the risk of perinatal death substantially in the subsequent pregnancy. © 2013 Blackwell Publishing Ltd.

  3. High-resolution estimates of Southwest Indian Ridge plate motions, 20 Ma to present

    NASA Astrophysics Data System (ADS)

    DeMets, C.; Merkouriev, S.; Sauter, D.

    2015-12-01

We present the first estimates of Southwest Indian Ridge (SWIR) plate motions at high temporal resolution during the Quaternary and Neogene, based on nearly 5000 crossings of 21 magnetic reversals out to C6no (19.72 Ma) and the digitized traces of 17 fracture zones and transform faults. Our reconstructions of this slow-spreading mid-ocean ridge reveal several unexpected results with notable implications for regional and global plate reconstructions since 20 Ma. Extrapolations of seafloor opening distances to zero-age seafloor based on reconstructions of reversals C1n (0.78 Ma) through C3n.4 (5.2 Ma) reveal evidence for surprisingly large outward displacement of 5 ± 1 km west of 32°E, where motion between the Nubia and Antarctic plates occurs, but 2 ± 1 km east of 32°E, more typical of most mid-ocean ridges. Newly estimated SWIR seafloor spreading rates are up to 15 per cent slower everywhere along the ridge than previous estimates. Reconstructions of the numerous observations for times back to 11 Ma confirm the existence of the hypothesized Lwandle plate at a high confidence level and indicate that the Lwandle plate's western and eastern boundaries respectively intersect the ridge near the Andrew Bain transform fault complex at 32°E and between ~45°E and 52°E, in accord with previous results. The Nubia-Antarctic, Lwandle-Antarctic and Somalia-Antarctic rotation sequences that best fit the many magnetic reversal, fracture zone and transform fault crossings define previously unknown changes in the Neogene motions of all three plate pairs, consisting of ~20 per cent slowdowns in their spreading rates at 7.2 (+0.9/−1.4) Ma if we enforce a simultaneous change in motion everywhere along the SWIR, and gradual 3°-7° anticlockwise rotations of the relative slip directions. We apply trans-dimensional Bayesian analysis to our noisy, best-fitting rotation sequences in order to estimate less-noisy rotation sequences suitable for use in future global plate reconstructions and geodynamic studies. Notably, our new Nubia-Antarctic reconstruction of C5n.2 (11.0 Ma) predicts 20 per cent less opening than do two previous estimates, with important implications for the motion estimated between the Nubia and Somalia plates. A Nubia-Somalia rotation determined from our Nubia-Antarctic and Somalia-Antarctic plate rotations for C5n.2 (11.0 Ma) predicts cumulative opening of 45 ± 4 km (95 per cent uncertainty) across the northernmost East Africa rift since 11.0 Ma, 70 per cent less than a recent 129 ± 62 km opening estimate based on a now-superseded interpretation of Anomaly 5 along the western portion of the SWIR.

  4. Estimating Planetary Boundary Layer Heights from NOAA Profiler Network Wind Profiler Data

    NASA Technical Reports Server (NTRS)

Molod, Andrea M.; Salmun, H.; Dempsey, M.

    2015-01-01

    An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States. Unlike previous studies, the present algorithm has been applied to a long record of publicly available wind profiler signal backscatter data. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear sky reanalysis based estimates show that the wind profiler PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, and are generally higher than both the Richardson number based and reanalysis PBL heights, resulting in a smaller clear-cloudy condition difference. The algorithm presented here was shown to provide a reliable summertime climatology of daytime hourly PBL heights throughout the central United States.

  5. Effects of visual cues of object density on perception and anticipatory control of dexterous manipulation.

    PubMed

    Crajé, Céline; Santello, Marco; Gordon, Andrew M

    2013-01-01

    Anticipatory force planning during grasping is based on visual cues about the object's physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object's CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.

  6. Estimating the Prevalence of Anxiety and Mood Disorders in an Adolescent General Population: An Evaluation of the GHQ12

    ERIC Educational Resources Information Center

    Mann, Robert E.; Paglia-Boak, Angela; Adlaf, Edward M.; Beitchman, Joseph; Wolfe, David; Wekerle, Christine; Hamilton, Hayley A.; Rehm, Jurgen

    2011-01-01

    Anxiety and mood disorders (AMD) may be more common among adolescents than previously thought, and epidemiological research would benefit from an easily-administered measure of AMD. We assessed the ability of the GHQ12 to estimate the prevalence of AMD in a representative sample of Ontario adolescents. Data were based on self-administered…

  7. Why Was Kelvin's Estimate of the Earth's Age Wrong?

    ERIC Educational Resources Information Center

    Lovatt, Ian; Syed, M. Qasim

    2014-01-01

    This is a companion to our previous paper in which we give a published example, based primarily on Perry's work, of a graph of ln "y" versus "t" when "y" is an exponential function of "t". This work led us to the idea that Lord Kelvin's (William Thomson's) estimate of the Earth's age was…

  8. Limits on estimating the width of thin tubular structures in 3D images.

    PubMed

    Wörz, Stefan; Rohr, Karl

    2006-01-01

    This work studies limits on estimating the width of thin tubular structures in 3D images. Based on nonlinear estimation theory we analyze the minimal stochastic error of estimating the width. Given a 3D analytic model of the image intensities of tubular structures, we derive a closed-form expression for the Cramér-Rao bound of the width estimate under image noise. We use the derived lower bound as a benchmark and compare it with three previously proposed accuracy limits for vessel width estimation. Moreover, by experimental investigations we demonstrate that the derived lower bound can be achieved by fitting a 3D parametric intensity model directly to the image data.
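
    For additive i.i.d. Gaussian noise, the bound takes the familiar form var(w_hat) >= sigma^2 / sum_i (dI_i/dw)^2, the reciprocal of the Fisher information. The sketch below evaluates it numerically for a toy 1D Gaussian cross-section standing in for the paper's 3D analytic tube model; the function and model are illustrative only.

```python
import numpy as np

def crb_width(width, profile, noise_sigma, eps=1e-4):
    """Numerical Cramer-Rao bound for a width parameter under i.i.d.
    additive Gaussian noise: var(w_hat) >= sigma^2 / sum_i (dI_i/dw)^2.

    profile(w) must return the model image intensities as a flat array.
    """
    d_intensity = (profile(width + eps) - profile(width - eps)) / (2 * eps)
    fisher = np.sum(d_intensity**2) / noise_sigma**2
    return 1.0 / fisher

# Toy 1D Gaussian cross-section of a tube as the intensity model (the paper
# uses a full 3D analytic tube model; this is only a shape stand-in).
x = np.linspace(-10, 10, 201)
tube = lambda w: np.exp(-x**2 / (2 * w**2))

bound = crb_width(width=2.0, profile=tube, noise_sigma=0.05)
print(f"std-dev lower bound on width estimate: {np.sqrt(bound):.4f} pixels")
```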

  9. Estimating rooftop solar technical potential across the US using a combination of GIS-based methods, lidar data, and statistical modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gagnon, Pieter; Margolis, Robert; Melius, Jennifer

We provide a detailed estimate of the technical potential of rooftop solar photovoltaic (PV) electricity generation throughout the contiguous United States. This national estimate is based on an analysis of select US cities that combines light detection and ranging (lidar) data with a validated analytical method for determining rooftop PV suitability employing geographic information systems. We use statistical models to extend this analysis to estimate the quantity and characteristics of roofs in areas not covered by lidar data. Finally, we model PV generation for all rooftops to yield technical potential estimates. At the national level, 8.13 billion m2 of suitable roof area could host 1118 GW of PV capacity, generating 1432 TWh of electricity per year. This would equate to 38.6% of the electricity that was sold in the contiguous United States in 2013. This estimate is substantially higher than a previous estimate made by the National Renewable Energy Laboratory. The difference can be attributed to increases in PV module power density, improved estimation of building suitability, higher estimates of total number of buildings, and improvements in PV performance simulation tools that previously tended to underestimate productivity. Also notable, the nationwide percentage of buildings suitable for at least some PV deployment is high—82% for buildings smaller than 5000 ft2 and over 99% for buildings larger than that. In most states, rooftop PV could enable small, mostly residential buildings to offset the majority of average household electricity consumption. Even in some states with a relatively poor solar resource, such as those in the Northeast, the residential sector has the potential to offset around 100% of its total electricity consumption with rooftop PV.

  10. Estimating rooftop solar technical potential across the US using a combination of GIS-based methods, lidar data, and statistical modeling

    DOE PAGES

    Gagnon, Pieter; Margolis, Robert; Melius, Jennifer; ...

    2018-01-05

We provide a detailed estimate of the technical potential of rooftop solar photovoltaic (PV) electricity generation throughout the contiguous United States. This national estimate is based on an analysis of select US cities that combines light detection and ranging (lidar) data with a validated analytical method for determining rooftop PV suitability employing geographic information systems. We use statistical models to extend this analysis to estimate the quantity and characteristics of roofs in areas not covered by lidar data. Finally, we model PV generation for all rooftops to yield technical potential estimates. At the national level, 8.13 billion m2 of suitable roof area could host 1118 GW of PV capacity, generating 1432 TWh of electricity per year. This would equate to 38.6% of the electricity that was sold in the contiguous United States in 2013. This estimate is substantially higher than a previous estimate made by the National Renewable Energy Laboratory. The difference can be attributed to increases in PV module power density, improved estimation of building suitability, higher estimates of total number of buildings, and improvements in PV performance simulation tools that previously tended to underestimate productivity. Also notable, the nationwide percentage of buildings suitable for at least some PV deployment is high—82% for buildings smaller than 5000 ft2 and over 99% for buildings larger than that. In most states, rooftop PV could enable small, mostly residential buildings to offset the majority of average household electricity consumption. Even in some states with a relatively poor solar resource, such as those in the Northeast, the residential sector has the potential to offset around 100% of its total electricity consumption with rooftop PV.

  11. Estimating rooftop solar technical potential across the US using a combination of GIS-based methods, lidar data, and statistical modeling

    NASA Astrophysics Data System (ADS)

    Gagnon, Pieter; Margolis, Robert; Melius, Jennifer; Phillips, Caleb; Elmore, Ryan

    2018-02-01

    We provide a detailed estimate of the technical potential of rooftop solar photovoltaic (PV) electricity generation throughout the contiguous United States. This national estimate is based on an analysis of select US cities that combines light detection and ranging (lidar) data with a validated analytical method for determining rooftop PV suitability employing geographic information systems. We use statistical models to extend this analysis to estimate the quantity and characteristics of roofs in areas not covered by lidar data. Finally, we model PV generation for all rooftops to yield technical potential estimates. At the national level, 8.13 billion m2 of suitable roof area could host 1118 GW of PV capacity, generating 1432 TWh of electricity per year. This would equate to 38.6% of the electricity that was sold in the contiguous United States in 2013. This estimate is substantially higher than a previous estimate made by the National Renewable Energy Laboratory. The difference can be attributed to increases in PV module power density, improved estimation of building suitability, higher estimates of total number of buildings, and improvements in PV performance simulation tools that previously tended to underestimate productivity. Also notable, the nationwide percentage of buildings suitable for at least some PV deployment is high—82% for buildings smaller than 5000 ft2 and over 99% for buildings larger than that. In most states, rooftop PV could enable small, mostly residential buildings to offset the majority of average household electricity consumption. Even in some states with a relatively poor solar resource, such as those in the Northeast, the residential sector has the potential to offset around 100% of its total electricity consumption with rooftop PV.

  12. Elimination Rates of Dioxin Congeners in Former Chlorophenol Workers from Midland, Michigan

    PubMed Central

    Collins, James J.; Bodner, Kenneth M.; Wilken, Michael; Bodnar, Catherine M.

    2012-01-01

    Background: Exposure reconstructions and risk assessments for 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and other dioxins rely on estimates of elimination rates. Limited data are available on elimination rates for congeners other than TCDD. Objectives: We estimated apparent elimination rates using a simple first-order one-compartment model for selected dioxin congeners based on repeated blood sampling in a previously studied population. Methods: Blood samples collected from 56 former chlorophenol workers in 2004–2005 and again in 2010 were analyzed for dioxin congeners. We calculated the apparent elimination half-life in each individual for each dioxin congener and examined factors potentially influencing elimination rates and the impact of estimated ongoing background exposures on rate estimates. Results: Mean concentrations of all dioxin congeners in the sampled participants declined between sampling times. Median apparent half-lives of elimination based on changes in estimated mass in the body were generally consistent with previous estimates and ranged from 6.8 years (1,2,3,7,8,9-hexachlorodibenzo-p-dioxin) to 11.6 years (pentachlorodibenzo-p-dioxin), with a composite half-life of 9.3 years for TCDD toxic equivalents. None of the factors examined, including age, smoking status, body mass index or change in body mass index, initial measured concentration, or chloracne diagnosis, was consistently associated with the estimated elimination rates in this population. Inclusion of plausible estimates of ongoing background exposures decreased apparent half-lives by approximately 10%. Available concentration-dependent toxicokinetic models for TCDD underpredicted observed elimination rates for concentrations < 100 ppt. Conclusions: The estimated elimination rates from this relatively large serial sampling study can inform occupational and environmental exposure and serum evaluations for dioxin compounds. PMID:23063871
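
    For illustration, a minimal sketch of the first-order one-compartment half-life calculation this abstract describes; the serum concentrations and sampling interval below are hypothetical.

    ```python
    import math

    def apparent_half_life_years(c_initial, c_final, years_between):
        """First-order one-compartment apparent elimination half-life
        from two serum concentration measurements (e.g., ppt, lipid-adjusted)."""
        if c_final >= c_initial:
            raise ValueError("no apparent decline between samples")
        k = math.log(c_initial / c_final) / years_between  # first-order rate constant
        return math.log(2) / k

    # Hypothetical TCDD serum values for one worker, sampled ~5.5 years apart:
    print(f"{apparent_half_life_years(120.0, 80.0, 5.5):.1f} years")
    ```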

  13. Absolute probability estimates of lethal vessel strikes to North Atlantic right whales in Roseway Basin, Scotian Shelf.

    PubMed

    van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T

    2012-10-01

    Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.

  14. Continuous Indoor Positioning Fusing WiFi, Smartphone Sensors and Landmarks

    PubMed Central

    Deng, Zhi-An; Wang, Guofeng; Qin, Danyang; Na, Zhenyu; Cui, Yang; Chen, Juan

    2016-01-01

    To exploit the complementary strengths of WiFi positioning, pedestrian dead reckoning (PDR), and landmarks, we propose a novel fusion approach based on an extended Kalman filter (EKF). For WiFi positioning, unlike previous fusion approaches that set measurement noise parameters empirically, we deploy a kernel density estimation-based model to adaptively estimate the relevant measurement noise statistics. Furthermore, a trusted area for WiFi positioning, defined by the fusion results of the previous step, and WiFi signal outlier detection are exploited to reduce computational cost and improve WiFi positioning accuracy. For PDR, we integrate a gyroscope, an accelerometer, and a magnetometer to determine the user heading based on another EKF model. To reduce the accumulated error of PDR and enable continuous indoor positioning, not only the positioning results but also the heading estimates are recalibrated by indoor landmarks. Experimental results in a realistic indoor environment show that the proposed fusion approach achieves substantially better positioning accuracy than the individual approaches, including PDR and WiFi positioning. PMID:27608019
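
    A minimal sketch of one fusion step in this spirit, simplified to a linear Kalman update (the paper uses an EKF, and its kernel-density-based adaptive measurement noise is mocked here as a fixed R):

    ```python
    import numpy as np

    # One predict/update cycle: PDR supplies the motion prediction, a WiFi
    # fix supplies the measurement. Q and R values are placeholders; in the
    # paper R is estimated adaptively via kernel density estimation.
    def fuse_step(x, P, pdr_step, wifi_fix, Q=np.eye(2) * 0.05, R=np.eye(2) * 4.0):
        x_pred = x + pdr_step                    # dead-reckoned displacement (dx, dy)
        P_pred = P + Q
        K = P_pred @ np.linalg.inv(P_pred + R)   # Kalman gain (H = I: WiFi measures position)
        x_new = x_pred + K @ (wifi_fix - x_pred)
        P_new = (np.eye(2) - K) @ P_pred
        return x_new, P_new

    x, P = np.zeros(2), np.eye(2)
    x, P = fuse_step(x, P, pdr_step=np.array([0.7, 0.1]), wifi_fix=np.array([1.2, -0.3]))
    print(x, np.diag(P))
    ```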

  15. Continuous Indoor Positioning Fusing WiFi, Smartphone Sensors and Landmarks.

    PubMed

    Deng, Zhi-An; Wang, Guofeng; Qin, Danyang; Na, Zhenyu; Cui, Yang; Chen, Juan

    2016-09-05

    To exploit the complementary strengths of WiFi positioning, pedestrian dead reckoning (PDR), and landmarks, we propose a novel fusion approach based on an extended Kalman filter (EKF). For WiFi positioning, unlike previous fusion approaches that set measurement noise parameters empirically, we deploy a kernel density estimation-based model to adaptively estimate the relevant measurement noise statistics. Furthermore, a trusted area for WiFi positioning, defined by the fusion results of the previous step, and WiFi signal outlier detection are exploited to reduce computational cost and improve WiFi positioning accuracy. For PDR, we integrate a gyroscope, an accelerometer, and a magnetometer to determine the user heading based on another EKF model. To reduce the accumulated error of PDR and enable continuous indoor positioning, not only the positioning results but also the heading estimates are recalibrated by indoor landmarks. Experimental results in a realistic indoor environment show that the proposed fusion approach achieves substantially better positioning accuracy than the individual approaches, including PDR and WiFi positioning.

  16. An Information Retrieval Approach for Robust Prediction of Road Surface States.

    PubMed

    Park, Jae-Hyung; Kim, Kwanho

    2017-01-28

    The automatic identification of road surface conditions, and the provision of such information to drivers in advance, have recently been gaining significant momentum as a proactive solution for reducing severe vehicle accidents, especially on highways. In this paper, we propose an information retrieval approach that identifies road surface states by combining conventional machine-learning techniques with moving-average methods. Specifically, when signal information is received from a radar system, our approach estimates the current state of the road surface from similar instances observed previously, using a given similarity function. The estimated state is then calibrated using recently estimated states to yield predictions that are both effective and robust. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods.
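
    A toy sketch of the two-stage idea, assuming Euclidean similarity over feature vectors and a numeric coding of surface states (both our assumptions, not the paper's):

    ```python
    import numpy as np

    # Stage 1: retrieve the most similar previously observed radar signatures
    # and take a majority vote. Stage 2: calibrate with a moving average over
    # recent estimates. States are coded numerically, e.g., 0=dry, 1=wet, 2=icy.
    def retrieve_state(query, past_features, past_states, k=3):
        dist = np.linalg.norm(past_features - query, axis=1)
        nearest = np.argsort(dist)[:k]
        values, counts = np.unique(past_states[nearest], return_counts=True)
        return values[np.argmax(counts)]            # majority vote among neighbors

    def calibrate(recent_states, window=5):
        return int(round(np.mean(recent_states[-window:])))  # moving-average smoothing

    past_X = np.random.default_rng(0).random((100, 4))   # placeholder radar features
    past_y = np.random.default_rng(1).integers(0, 3, 100)
    state = retrieve_state(past_X[0], past_X, past_y)
    print(state, calibrate([state, 1, 1, 2, 1]))
    ```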

  17. An Information Retrieval Approach for Robust Prediction of Road Surface States

    PubMed Central

    Park, Jae-Hyung; Kim, Kwanho

    2017-01-01

    The automatic identification of road surface conditions, and the provision of such information to drivers in advance, have recently been gaining significant momentum as a proactive solution for reducing severe vehicle accidents, especially on highways. In this paper, we propose an information retrieval approach that identifies road surface states by combining conventional machine-learning techniques with moving-average methods. Specifically, when signal information is received from a radar system, our approach estimates the current state of the road surface from similar instances observed previously, using a given similarity function. The estimated state is then calibrated using recently estimated states to yield predictions that are both effective and robust. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods. PMID:28134859

  18. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
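
    The ratio cue itself is a one-liner; a sketch with illustrative numbers (for a constant-velocity approach, tau approximates time to contact):

    ```python
    # Tau is the instantaneous optical size (or sound intensity) divided by its
    # rate of change; for a constant-velocity approach it approximates TTC.
    def tau(extent, extent_rate):
        return extent / extent_rate

    # e.g., an image subtending 2.0 deg and growing at 0.5 deg/s -> ~4 s to contact
    print(tau(2.0, 0.5))
    ```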

  19. Body mass estimates of hominin fossils and the evolution of human body size.

    PubMed

    Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G

    2015-08-01

    Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.
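
    A toy sketch of the calibration-and-prediction step, with an invented measurement-to-mass relationship (only the reference-sample size of 220 comes from the abstract):

    ```python
    import numpy as np

    # Fit body mass on one postcranial measurement in a modern reference sample,
    # then predict a fossil's mass. All numbers here are invented for illustration.
    rng = np.random.default_rng(0)
    femoral_head_mm = rng.uniform(38, 52, 220)                       # hypothetical trait
    mass_kg = 2.2 * femoral_head_mm - 40 + rng.normal(0, 4, 220)     # hypothetical relation

    slope, intercept = np.polyfit(femoral_head_mm, mass_kg, 1)       # OLS calibration line
    fossil_measurement = 42.0
    print(f"predicted mass: {slope * fossil_measurement + intercept:.1f} kg")
    ```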

  20. Comparison of MRI-based estimates of articular cartilage contact area in the tibiofemoral joint.

    PubMed

    Henderson, Christopher E; Higginson, Jill S; Barrance, Peter J

    2011-01-01

    Knee osteoarthritis (OA) detrimentally impacts the lives of millions of older Americans through pain and decreased functional ability. Unfortunately, the pathomechanics and associated deviations from joint homeostasis that OA patients experience are not well understood. Alterations in mechanical stress in the knee joint may play an essential role in OA; however, existing literature in this area is limited. The purpose of this study was to evaluate the ability of an existing magnetic resonance imaging (MRI)-based modeling method to estimate articular cartilage contact area in vivo. Imaging data of both knees were collected on a single subject with no history of knee pathology at three knee flexion angles. Intra-observer reliability and sensitivity studies were also performed to determine the role of operator-influenced elements of the data processing on the results. The method's articular cartilage contact area estimates were compared with existing contact area estimates in the literature. The method demonstrated an intra-observer reliability of 0.95 when assessed using Pearson's correlation coefficient and was found to be most sensitive to changes in the cartilage tracings on the peripheries of the compartment. The articular cartilage contact area estimates at full extension were similar to those reported in the literature. The relationships between tibiofemoral articular cartilage contact area and knee flexion were also qualitatively and quantitatively similar to those previously reported. The MRI-based knee modeling method was found to have high intra-observer reliability, sensitivity to peripheral articular cartilage tracings, and agreement with previous investigations when using data from a single healthy adult. Future studies will implement this modeling method to investigate the role that mechanical stress may play in the progression of knee OA through estimation of articular cartilage contact area.

  1. Duration analysis using matching pursuit algorithm reveals longer bouts of gamma rhythm.

    PubMed

    Chandran Ks, Subhash; Seelamantula, Chandra Sekhar; Ray, Supratim

    2018-03-01

    The gamma rhythm (30-80 Hz), often associated with high-level cortical functions, is believed to provide a temporal reference frame for spiking activity, for which it should have a stable center frequency and linear phase for an extended duration. However, recent studies that have estimated the power and phase of gamma as a function of time suggest that gamma occurs in short bursts and lacks the temporal structure required to act as a reference frame. Here, we show that the bursty appearance of gamma arises from the variability in the spectral estimator used in these studies. To overcome this problem, we use another duration estimator based on a matching pursuit algorithm that robustly estimates the duration of gamma in simulated data. Applying this algorithm to gamma oscillations recorded from implanted microelectrodes in the primary visual cortex of awake monkeys, we show that the median gamma duration is greater than 300 ms, which is three times longer than previously reported values. NEW & NOTEWORTHY Gamma oscillations (30-80 Hz) have been hypothesized to provide a temporal reference frame for coordination of spiking activity, but recent studies have shown that gamma occurs in very short bursts. We show that existing techniques have severely underestimated the rhythm duration, use a technique based on the Matching Pursuit algorithm, which provides a robust estimate of the duration, and show that the median duration of gamma is greater than 300 ms, much longer than previous estimates.

  2. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    DOE PAGES

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient’s tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
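
    A sketch of the point-source approximation being assessed: dose rate scales with activity and falls off as the inverse square of distance. The dose-rate constant below is a rough, illustrative value for 131I, not one taken from the report.

    ```python
    # Point-source estimate; GAMMA is an approximate dose-rate constant
    # (dose rate at 1 m per unit activity) used purely for illustration.
    GAMMA_I131 = 5.9e-5   # mSv/h per MBq at 1 m (rough literature-scale value)

    def point_source_dose_rate_msv_h(activity_mbq, distance_m):
        return GAMMA_I131 * activity_mbq / distance_m ** 2

    # e.g., 5550 MBq administered activity, bystander at 1 m versus 3 m:
    print(point_source_dose_rate_msv_h(5550, 1.0), point_source_dose_rate_msv_h(5550, 3.0))
    ```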

  3. Duration analysis using matching pursuit algorithm reveals longer bouts of gamma rhythm

    PubMed Central

    Chandran KS, Subhash; Seelamantula, Chandra Sekhar

    2018-01-01

    The gamma rhythm (30–80 Hz), often associated with high-level cortical functions, is believed to provide a temporal reference frame for spiking activity, for which it should have a stable center frequency and linear phase for an extended duration. However, recent studies that have estimated the power and phase of gamma as a function of time suggest that gamma occurs in short bursts and lacks the temporal structure required to act as a reference frame. Here, we show that the bursty appearance of gamma arises from the variability in the spectral estimator used in these studies. To overcome this problem, we use another duration estimator based on a matching pursuit algorithm that robustly estimates the duration of gamma in simulated data. Applying this algorithm to gamma oscillations recorded from implanted microelectrodes in the primary visual cortex of awake monkeys, we show that the median gamma duration is greater than 300 ms, which is three times longer than previously reported values. NEW & NOTEWORTHY Gamma oscillations (30–80 Hz) have been hypothesized to provide a temporal reference frame for coordination of spiking activity, but recent studies have shown that gamma occurs in very short bursts. We show that existing techniques have severely underestimated the rhythm duration, use a technique based on the Matching Pursuit algorithm, which provides a robust estimate of the duration, and show that the median duration of gamma is greater than 300 ms, much longer than previous estimates. PMID:29118193

  4. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.

  5. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments

    PubMed Central

    2010-01-01

    Background The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Results Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/. PMID:20482791
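
    A minimal sketch of the regression idea (the slope of the log frequency ratio versus time estimates the relative fitness difference); the data values below are hypothetical:

    ```python
    import numpy as np

    # Growth-competition sketch: regress the log ratio of the two variants'
    # frequencies on time. Using all time points, rather than the first and
    # last only, is the improvement over the two-point calculation.
    t = np.array([0, 2, 4, 6, 8])                    # days (hypothetical)
    ratio = np.array([1.0, 0.8, 0.62, 0.50, 0.41])   # mutant / wild-type (hypothetical)

    slope, intercept = np.polyfit(t, np.log(ratio), 1)
    print(f"relative fitness difference: {slope:.3f} per day")
    ```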

  6. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments.

    PubMed

    Ma, Jingming; Dykes, Carrie; Wu, Tao; Huang, Yangxin; Demeter, Lisa; Wu, Hulin

    2010-05-18

    The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.

  7. Assessing the likely value of gravity and drawdown measurements to constrain estimates of hydraulic conductivity and specific yield during unconfined aquifer testing

    USGS Publications Warehouse

    Blainey, Joan B.; Ferré, Ty P.A.; Cordova, Jeffrey T.

    2007-01-01

    Pumping of an unconfined aquifer can cause local desaturation detectable with high‐resolution gravimetry. A previous study showed that signal‐to‐noise ratios could be predicted for gravity measurements based on a hydrologic model. We show that although changes should be detectable with gravimeters, estimations of hydraulic conductivity and specific yield based on gravity data alone are likely to be unacceptably inaccurate and imprecise. In contrast, a transect of low‐quality drawdown data alone resulted in accurate estimates of hydraulic conductivity and inaccurate and imprecise estimates of specific yield. Combined use of drawdown and gravity data, or use of high‐quality drawdown data alone, resulted in unbiased and precise estimates of both parameters. This study is an example of the value of a staged assessment regarding the likely significance of a new measurement method or monitoring scenario before collecting field data.

  8. Automation of GIS-based population data-collection for transportation risk analysis

    DOT National Transportation Integrated Search

    1999-11-01

    Estimation of the potential radiological risks associated with highway transport of radioactive : materials (RAM) requires input data describing population densities adjacent to all portions of : the route to be traveled. Previously, aggregated risks...

  9. Groundwater Evapotranspiration from Diurnal Water Table Fluctuation: a Modified White Based Method Using Drainable and Fillable Porosity

    NASA Astrophysics Data System (ADS)

    Acharya, S.; Mylavarapu, R.; Jawitz, J. W.

    2012-12-01

    In shallow unconfined aquifers, the water table usually shows a distinct diurnal fluctuation pattern corresponding to the twenty-four hour solar radiation cycle. This diurnal water table fluctuation (DWTF) signal can be used to estimate groundwater evapotranspiration (ETg) by vegetation, a method known as the White [1932] method. Water table fluctuations in shallow phreatic aquifers are controlled by two distinct storage parameters, the drainable porosity (or specific yield) and the fillable porosity. Yet most studies implicitly assume that these two parameters are equal unless the hysteresis effect is considered. The White-based method available in the literature likewise relies on a single drainable-porosity parameter to estimate ETg. In this study, we present a modification of the White-based method to estimate ETg from the DWTF using separate drainable (λd) and fillable (λf) porosity parameters. Separate analytical expressions based on successive steady-state moisture profiles are used to estimate λd and λf, instead of the commonly employed hydrostatic moisture-profile approach. The modified method is then applied to estimate ETg using DWTF data observed in a field in northeast Florida, and the results are compared with ET estimates from the standard Penman-Monteith equation. The modified method yields significantly better estimates of ETg than the previously available method, which used only a single, hydrostatic-moisture-profile-based λd. Furthermore, the modified method can also be used to estimate ETg during rainfall events, where it likewise produces significantly better estimates than the single-λd method.
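
    A sketch of a modified White-style calculation as we read the abstract, with separate fillable (rise) and drainable (decline) porosities; the variable split and all numbers are our assumptions, not the authors':

    ```python
    # White-style groundwater ET from diurnal water-table data: r is the
    # recovery (recharge) rate inferred from the pre-dawn rise, net_change_m
    # the net 24-h water-table change (negative for a decline). Applying the
    # fillable porosity to recharge and the drainable porosity to the net
    # decline is our reading of the modification described above.
    def etg_white_modified(r_m_per_hr, net_change_m, lambda_d, lambda_f):
        inflow = lambda_f * 24.0 * r_m_per_hr   # recharge refills fillable pores
        storage = lambda_d * net_change_m       # net decline drains drainable pores
        return inflow - storage                 # m/day of groundwater ET

    # Hypothetical: 1 mm/hr pre-dawn recovery, 5 mm net daily decline
    print(etg_white_modified(0.001, -0.005, lambda_d=0.10, lambda_f=0.06))  # ~1.9 mm/day
    ```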

  10. Photometric redshifts for the next generation of deep radio continuum surveys - II. Gaussian processes and hybrid estimates

    NASA Astrophysics Data System (ADS)

    Duncan, Kenneth J.; Jarvis, Matt J.; Brown, Michael J. I.; Röttgering, Huub J. A.

    2018-07-01

    Building on the first paper in this series (Duncan et al. 2018), we present a study investigating the performance of Gaussian process photometric redshift (photo-z) estimates for galaxies and active galactic nuclei (AGNs) detected in deep radio continuum surveys. A Gaussian process redshift code is used to produce photo-z estimates targeting specific subsets of both the AGN population - infrared (IR), X-ray, and optically selected AGNs - and the general galaxy population. The new estimates for the AGN population are found to perform significantly better at z > 1 than the template-based photo-z estimates presented in our previous study. Our new photo-z estimates are then combined with template estimates through hierarchical Bayesian combination to produce a hybrid consensus estimate that outperforms both of the individual methods across all source types. Photo-z estimates for radio sources that are X-ray sources or optical/IR AGNs are significantly improved in comparison to previous template-only estimates - with outlier fractions and robust scatter reduced by up to a factor of ˜4. The ability of our method to combine the strengths of the two input photo-z techniques and the large improvements we observe illustrate its potential for enabling future exploitation of deep radio continuum surveys for both the study of galaxy and black hole coevolution and for cosmological studies.
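
    A toy Gaussian-process photo-z regression on synthetic colors, using scikit-learn as a stand-in (the study's own code, features, and per-class training sets differ):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Regress redshift on photometric colors; the GP returns both a mean
    # prediction and an uncertainty, which is what makes Bayesian combination
    # with template-based estimates possible. Data here are synthetic.
    rng = np.random.default_rng(1)
    colors = rng.normal(size=(200, 3))     # e.g., g-r, r-i, i-z (placeholder)
    z = 0.5 + 0.3 * colors[:, 0] - 0.2 * colors[:, 1] + rng.normal(0, 0.05, 200)

    gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(colors, z)
    z_pred, z_std = gp.predict(colors[:5], return_std=True)
    print(np.round(z_pred, 2), np.round(z_std, 2))
    ```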

  11. Exact and Approximate Statistical Inference for Nonlinear Regression and the Estimating Equation Approach.

    PubMed

    Demidenko, Eugene

    2017-09-01

    The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.

  12. A Web-based interface to calculate phonotactic probability for words and nonwords in English

    PubMed Central

    VITEVITCH, MICHAEL S.; LUCE, PAUL A.

    2008-01-01

    Phonotactic probability refers to the frequency with which phonological segments and sequences of phonological segments occur in words in a given language. We describe one method of estimating phonotactic probabilities based on words in American English. These estimates of phonotactic probability have been used in a number of previous studies and are now being made available to other researchers via a Web-based interface. Instructions for using the interface, as well as details regarding how the measures were derived, are provided in the present article. The Phonotactic Probability Calculator can be accessed at http://www.people.ku.edu/~mvitevit/PhonoProbHome.html. PMID:15641436
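
    A toy sketch of a positional-segment probability calculation of this general kind, over an invented three-word lexicon (the actual calculator's corpus, frequency weighting, and phoneme coding differ):

    ```python
    from collections import defaultdict

    # Frequency-weighted probability of each segment at each word position.
    lexicon = {"kat": 120, "kit": 80, "bat": 60}   # phoneme strings -> frequencies (invented)
    pos_counts = defaultdict(float)
    pos_totals = defaultdict(float)
    for word, freq in lexicon.items():
        for i, seg in enumerate(word):
            pos_counts[(i, seg)] += freq
            pos_totals[i] += freq

    def positional_prob(word):
        return [pos_counts[(i, s)] / pos_totals[i] for i, s in enumerate(word)]

    print(positional_prob("kat"))   # per-position probabilities for a test item
    ```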

  13. Performance of Chronic Kidney Disease Epidemiology Collaboration Creatinine-Cystatin C Equation for Estimating Kidney Function in Cirrhosis

    PubMed Central

    Mindikoglu, Ayse L.; Dowling, Thomas C.; Weir, Matthew R.; Seliger, Stephen L.; Christenson, Robert H.; Magder, Laurence S.

    2013-01-01

    Conventional creatinine-based glomerular filtration rate (GFR) equations are insufficiently accurate for estimating GFR in cirrhosis. The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) recently proposed an equation to estimate GFR in subjects without cirrhosis using both serum creatinine and cystatin C levels. Performance of the new CKD-EPI creatinine-cystatin C equation (2012) was superior to previous creatinine- or cystatin C-based GFR equations. To evaluate the performance of the CKD-EPI creatinine-cystatin C equation in subjects with cirrhosis, we compared it to GFR measured by non-radiolabeled iothalamate plasma clearance (mGFR) in 72 subjects with cirrhosis. We compared the “bias”, “precision” and “accuracy” of the new CKD-EPI creatinine-cystatin C equation to that of 24-hour urinary creatinine clearance (CrCl), Cockcroft-Gault (CG) and previously reported creatinine- and/or cystatin C-based GFR-estimating equations. Accuracy of CKD-EPI creatinine-cystatin C equation as quantified by root mean squared error of difference scores [differences between mGFR and estimated GFR (eGFR) or between mGFR and CrCl, or between mGFR and CG equation for each subject] (RMSE=23.56) was significantly better than that of CrCl (37.69, P=0.001), CG (RMSE=36.12, P=0.002) and GFR-estimating equations based on cystatin C only. Its accuracy as quantified by percentage of eGFRs that differed by greater than 30% with respect to mGFR was significantly better compared to CrCl (P=0.024), CG (P=0.0001), 4-variable MDRD (P=0.027) and CKD-EPI creatinine 2009 (P=0.012) equations. However, for 23.61% of the subjects, GFR estimated by CKD-EPI creatinine-cystatin C equation differed from the mGFR by more than 30%. CONCLUSIONS The diagnostic performance of CKD-EPI creatinine-cystatin C equation (2012) in patients with cirrhosis was superior to conventional equations in clinical practice for estimating GFR. However, its diagnostic performance was substantially worse than reported in subjects without cirrhosis. PMID:23744636
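
    A sketch of the two accuracy metrics used in this comparison, the RMSE of difference scores and the share of estimates differing from measured GFR by more than 30%; the GFR values below are hypothetical:

    ```python
    import numpy as np

    def rmse(mgfr, egfr):
        diffs = np.asarray(mgfr, float) - np.asarray(egfr, float)
        return float(np.sqrt(np.mean(diffs ** 2)))

    def pct_outside_30(mgfr, egfr):
        mgfr, egfr = np.asarray(mgfr, float), np.asarray(egfr, float)
        return 100.0 * np.mean(np.abs(egfr - mgfr) / mgfr > 0.30)

    mgfr = [95, 60, 45, 110]   # measured GFR, hypothetical (mL/min/1.73 m2)
    egfr = [88, 70, 30, 105]   # estimated GFR, hypothetical
    print(rmse(mgfr, egfr), pct_outside_30(mgfr, egfr))
    ```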

  14. Weak Learner Method for Estimating River Discharges using Remotely Sensed Data: Central Congo River as a Testbed

    NASA Astrophysics Data System (ADS)

    Kim, D.; Lee, H.; Yu, H.; Beighley, E.; Durand, M. T.; Alsdorf, D. E.; Hwang, E.

    2017-12-01

    River discharge is a prerequisite for understanding flood hazard and water resource management, yet our knowledge of it is poor, especially over remote basins. Previous studies have successfully used classic hydraulic geometry, at-many-stations hydraulic geometry (AMHG), and Manning's equation to estimate river discharge. The theoretical bases of these empirical methods were introduced by Leopold and Maddock (1953) and Manning (1889), and they have long been used in hydrology, water resources, and geomorphology. However, methods that estimate river discharge from remotely sensed data essentially require bathymetric information for the river or are not applicable to braided rivers. Furthermore, the methods used in previous studies assume steady and uniform flow. Consequently, those methods are limited in estimating river discharge under the complex, unsteady flow conditions found in nature. In this study, we developed a novel approach to estimating river discharge by applying the weak learner method (here termed WLQ), an ensemble method that combines multiple classifiers, to remotely sensed measurements of water levels from Envisat altimetry, effective river widths from PALSAR images, and multi-temporal surface water slopes over a part of the mainstem Congo. Compared with the methods used in previous studies, the root mean square error (RMSE) decreased from 5,089 m3s-1 to 3,701 m3s-1, and the relative RMSE (RRMSE) improved from 12% to 8%. We expect our method to provide improved estimates of river discharge in complex and unsteady flow conditions based on a data-driven, machine-learned prediction model (i.e., WLQ), even when bathymetric data are unavailable or the river is braided. Moreover, we expect that WLQ can be applied to measurements of river levels, slopes, and widths from the future Surface Water Ocean Topography (SWOT) mission, scheduled for launch in 2021.
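
    A generic boosted-ensemble stand-in for the weak-learner idea on synthetic inputs (level, width, slope); it illustrates the workflow, not the study's actual WLQ implementation:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Predict discharge from scaled water level, effective width, and slope.
    # Data are synthetic; the real study trains on Envisat/PALSAR observations.
    rng = np.random.default_rng(4)
    X = rng.random((300, 3))                                        # level, width, slope
    q = 2e4 * X[:, 0] + 1e4 * X[:, 1] * X[:, 2] + rng.normal(0, 500, 300)

    model = GradientBoostingRegressor(n_estimators=200, max_depth=2).fit(X[:250], q[:250])
    pred = model.predict(X[250:])
    rrmse = np.sqrt(np.mean((pred - q[250:]) ** 2)) / np.mean(q[250:])
    print(f"relative RMSE: {rrmse:.1%}")
    ```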

  15. An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.

    2012-01-01

    A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.

  16. Annual and average estimates of water-budget components based on hydrograph separation and PRISM precipitation for gaged basins in the Appalachian Plateaus Region, 1900-2011

    USGS Publications Warehouse

    Nelms, David L.; Messinger, Terence; McCoy, Kurt J.

    2015-07-14

    As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components based on hydrograph separation and precipitation data from parameter-elevation regressions on independent slopes model (PRISM) were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York and covered the period of 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is that part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that have automated the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, which is a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by calculating the average of cell values intercepted by basin boundaries where previously defined in the GAGES–II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.
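
    The bookkeeping behind these water-budget components is simple; a sketch with hypothetical annual values:

    ```python
    # Base-flow index from hydrograph separation, and evapotranspiration as
    # the precipitation-minus-streamflow residual, as described above.
    def water_budget(precip_mm, streamflow_mm, baseflow_mm):
        bfi = baseflow_mm / streamflow_mm   # base-flow index (base flow ~ recharge proxy)
        et_mm = precip_mm - streamflow_mm   # residual evapotranspiration
        return bfi, et_mm

    # Hypothetical annual totals for one gaged basin:
    print(water_budget(precip_mm=1100.0, streamflow_mm=450.0, baseflow_mm=280.0))
    ```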

  17. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
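
    One of the information metrics named here, relative entropy, has a closed form under a Gaussian fit to the prior and posterior ensembles; a minimal sketch:

    ```python
    import numpy as np

    # KL divergence of the posterior from the prior, assuming each parameter
    # ensemble is well summarized by a univariate Gaussian.
    def relative_entropy_gaussian(prior, posterior):
        m0, s0 = np.mean(prior), np.std(prior)
        m1, s1 = np.mean(posterior), np.std(posterior)
        return np.log(s0 / s1) + (s1**2 + (m1 - m0) ** 2) / (2 * s0**2) - 0.5

    rng = np.random.default_rng(2)
    prior = rng.normal(0.0, 1.0, 500)
    posterior = rng.normal(0.4, 0.5, 500)   # shifted and tightened after assimilation
    print(relative_entropy_gaussian(prior, posterior))
    ```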

  18. Reconnaissance investigation of the alluvial gold deposits in the North Takhar Area of Interest, Takhar Province, Afghanistan

    USGS Publications Warehouse

    Chirico, Peter G.; Malpeli, Katherine C.; Moran, Thomas W.

    2013-01-01

    This study is a reconnaissance assessment of the alluvial gold deposits of the North Takhar Area of Interest (AOI) in Takhar Province, Afghanistan. Soviet and Afghan geologists collected data and calculated the gold deposit reserves in Takhar Province in the 1970s, prior to the development of satellite-based remote-sensing platforms and new methods of geomorphic mapping. The purpose of this study was to integrate new mapping techniques with previously collected borehole sampling and concentration sampling data and geomorphologic interpretations to reassess the alluvial gold placer deposits in the North Takhar AOI. Through a combination of historical borehole and cross-section data and digital terrain modeling, the Samti, Nooraba-Khasar-Anjir, and Kocha River placer deposits were reassessed. Resource estimates were calculated to be 20,927 kilograms (kg) for Samti, 7,626 kg for Nooraba-Khasar-Anjir, 160 kg for the mouth of the Kocha, 1,047 kg for the lower Kocha, 113 kg for the middle Kocha, and 168 kg for the upper Kocha. Previous resource estimates conducted by the Soviets for the Samti and Nooraba-Khasar-Anjir deposits estimated 30,062 kg and 802 kg of gold, respectively. This difference between the new estimates and previous estimates results from the higher resolution geomorphic model and the interpretation of areas outside of the initial work zone studied by Soviet and Afghan geologists.

  19. The Independent Evolution Method Is Not a Viable Phylogenetic Comparative Method

    PubMed Central

    2015-01-01

    Phylogenetic comparative methods (PCMs) use data on species traits and phylogenetic relationships to shed light on evolutionary questions. Recently, Smaers and Vinicius suggested a new PCM, Independent Evolution (IE), which purportedly employs a novel model of evolution based on Felsenstein’s Adaptive Peak Model. The authors found that IE improves upon previous PCMs by producing more accurate estimates of ancestral states, as well as separate estimates of evolutionary rates for each branch of a phylogenetic tree. Here, we document substantial theoretical and computational issues with IE. When data are simulated under a simple Brownian motion model of evolution, IE produces severely biased estimates of ancestral states and changes along individual branches. We show that these branch-specific changes are essentially ancestor-descendant or “directional” contrasts, and draw parallels between IE and previous PCMs such as “minimum evolution”. Additionally, while comparisons of branch-specific changes between variables have been interpreted as reflecting the relative strength of selection on those traits, we demonstrate through simulations that regressing IE estimated branch-specific changes against one another gives a biased estimate of the scaling relationship between these variables, and provides no advantages or insights beyond established PCMs such as phylogenetically independent contrasts. In light of our findings, we discuss the results of previous papers that employed IE. We conclude that Independent Evolution is not a viable PCM, and should not be used in comparative analyses. PMID:26683838

  20. Improvement in Visual Target Tracking for a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Madison, Richard

    2006-01-01

    In an improvement of the visual-target-tracking software used aboard a mobile robot (rover) of the type used to explore the Martian surface, an affine-matching algorithm has been replaced by a combination of a normalized cross-correlation (NCC) algorithm and a template-image-magnification algorithm. Although neither NCC nor template-image magnification is new, the use of both of them to increase the degree of reliability with which features can be matched is new. In operation, a template image of a target is obtained from a previous rover position; the magnification of the template image is then based on the estimated change in the target distance from the previous rover position to the current rover position. For this purpose, the target distance at the previous rover position is determined by stereoscopy, while the target distance at the current rover position is calculated from an estimate of the current pose of the rover. The template image is then magnified by an amount corresponding to the estimated target distance to obtain a best template image to match with the image acquired at the current rover position.
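
    A minimal sketch of the two ingredients, template scaling from the distance ratio and normalized cross-correlation matching; the array contents are placeholders:

    ```python
    import numpy as np

    # NCC score between two equally sized patches (1.0 for identical patches).
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.mean(a * b))

    # The target appears larger as the rover closes in, so the stored template
    # is magnified by the ratio of previous to current target distance.
    def template_scale(dist_prev_m, dist_curr_m):
        return dist_prev_m / dist_curr_m

    patch = np.random.default_rng(3).random((16, 16))   # placeholder image patch
    print(template_scale(4.0, 2.5), ncc(patch, patch))  # scale > 1, NCC -> ~1.0
    ```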

  1. Unclothed firewalls

    NASA Astrophysics Data System (ADS)

    Chen, Pisin; Ong, Yen Chin; Page, Don Nelson; Sasaki, Misao; Yeom, Dong-Han

    2016-07-01

    We have previously argued that fluctuations of the Hawking emission rate can cause a black hole event horizon to fluctuate inside the location of a putative firewall, rendering the firewall naked. This assumes that the firewall is located near where the horizon would be expected based on the past evolution of the spacetime. Here, we expand our previous results by defining two new estimates for where the firewall might be that have more smooth temporal behavior than our original estimate. Our results continue to contradict the usual assumption that the firewall should not be observable except by infalling observers. This casts doubt about the idea that firewalls are the ‘most conservative’ solution to the information loss paradox.

  2. Quantifying the major mechanisms of recent gene duplications in the human and mouse genomes: a novel strategy to estimate gene duplication rates

    PubMed Central

    Pan, Deng; Zhang, Liqing

    2007-01-01

    Background The rate of gene duplication is an important parameter in the study of evolution, but the influence of gene conversion and technical problems have confounded previous attempts to provide a satisfying estimate. We propose a new strategy to estimate the rate that involves separate quantification of the rates of two different mechanisms of gene duplication and subsequent combination of the two rates, based on their respective contributions to the overall gene duplication rate. Results Previous estimates of gene duplication rates are based on small gene families. Therefore, to assess the applicability of this to families of all sizes, we looked at both two-copy gene families and the entire genome. We studied unequal crossover and retrotransposition, and found that these mechanisms of gene duplication are largely independent and account for a substantial amount of duplicated genes. Unequal crossover contributed more to duplications in the entire genome than retrotransposition did, but this contribution was significantly less in two-copy gene families, and duplicated genes arising from this mechanism are more likely to be retained. Combining rates of duplication using the two mechanisms, we estimated the overall rates to be from approximately 0.515 to 1.49 × 10⁻³ per gene per million years in human, and from approximately 1.23 to 4.23 × 10⁻³ in mouse. The rates estimated from two-copy gene families are always lower than those from the entire genome, and so it is not appropriate to use small families to estimate the rate for the entire genome. Conclusion We present a novel strategy for estimating gene duplication rates. Our results show that different mechanisms contribute differently to the evolution of small and large gene families. PMID:17683522

  3. Estimation of Pre-industrial Nitrous Oxide Emission from the Terrestrial Biosphere

    NASA Astrophysics Data System (ADS)

    Xu, R.; Tian, H.; Lu, C.; Zhang, B.; Pan, S.; Yang, J.

    2015-12-01

    Nitrous oxide (N2O) is currently the third most important greenhouse gas (GHG) after methane (CH4) and carbon dioxide (CO2). Global N2O emissions have increased substantially, primarily due to reactive nitrogen (N) enrichment through fossil fuel combustion, fertilizer production, legume crop cultivation, and other activities. In order to understand how the climate system is perturbed by anthropogenic N2O emissions from the terrestrial biosphere, it is necessary to better estimate pre-industrial N2O emissions. Previous estimates of natural N2O emissions from the terrestrial biosphere range from 3.3 to 9.0 Tg N2O-N yr⁻¹. This large uncertainty in the estimates of pre-industrial N2O emissions from the terrestrial biosphere may be caused by uncertainty associated with key parameters such as maximum nitrification and denitrification rates, half-saturation coefficients of soil ammonium and nitrate, N fixation rate, and maximum N uptake rate. In addition to the large estimation range, previous studies did not provide estimates of pre-industrial N2O emissions at regional and biome levels. In this study, we applied a process-based coupled biogeochemical model to estimate the magnitude and spatial patterns of pre-industrial N2O fluxes at biome and continental scales, as driven by multiple input data, including pre-industrial climate data, atmospheric CO2 concentration, N deposition, N fixation, and land cover types and distributions. Uncertainty associated with key parameters is also evaluated. Finally, we generate sector-based estimates of pre-industrial N2O emissions, which provide a reference for assessing the climate forcing of anthropogenic N2O emissions from the land biosphere.

  4. Identifying Genetic Signatures of Natural Selection Using Pooled Population Sequencing in Picea abies

    PubMed Central

    Chen, Jun; Källman, Thomas; Ma, Xiao-Fei; Zaina, Giusi; Morgante, Michele; Lascoux, Martin

    2016-01-01

    The joint inference of selection and past demography remains a costly and demanding task. We used next generation sequencing of two pools of 48 Norway spruce mother trees, one corresponding to the Fennoscandian domain and the other to the Alpine domain, to assess nucleotide polymorphism at 88 nuclear genes. These genes are candidate genes for phenological traits, and most belong to the photoperiod pathway. Estimates of population genetic summary statistics from the pooled data are similar to previous estimates, suggesting that pooled sequencing is reliable. The nonsynonymous SNPs tended to have both lower frequency differences and lower FST values between the two domains than silent ones. These results suggest the presence of purifying selection. The divergence between the two domains based on synonymous changes was around 5 million yr, a time similar to a recent phylogenetic estimate of 6 million yr but much larger than earlier estimates based on isozymes. Two approaches, one of them novel and considering both FST and the difference in allele frequencies between the two domains, were used to identify SNPs potentially under diversifying selection. SNPs from around 20 genes were detected, including genes previously identified as main targets of selection, such as PaPRR3 and PaGI. PMID:27172202
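
    A sketch of a simple two-population FST calculation from pooled allele frequencies, alongside the raw frequency difference the novel approach also considers; the frequencies below are hypothetical:

    ```python
    import numpy as np

    # Per-SNP FST between two populations from pooled allele frequencies,
    # using the simple two-population variance form (a Wright-style sketch).
    def fst(p1, p2):
        p_bar = (p1 + p2) / 2.0
        return ((p1 - p_bar) ** 2 + (p2 - p_bar) ** 2) / (2 * p_bar * (1 - p_bar))

    p_fenno = np.array([0.12, 0.55, 0.80])    # hypothetical pool frequencies
    p_alpine = np.array([0.10, 0.30, 0.35])
    print(np.round(fst(p_fenno, p_alpine), 3), np.abs(p_fenno - p_alpine))
    ```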

  5. Identifying Genetic Signatures of Natural Selection Using Pooled Population Sequencing in Picea abies.

    PubMed

    Chen, Jun; Källman, Thomas; Ma, Xiao-Fei; Zaina, Giusi; Morgante, Michele; Lascoux, Martin

    2016-07-07

    The joint inference of selection and past demography remains a costly and demanding task. We used next generation sequencing of two pools of 48 Norway spruce mother trees, one corresponding to the Fennoscandian domain and the other to the Alpine domain, to assess nucleotide polymorphism at 88 nuclear genes. These genes are candidate genes for phenological traits, and most belong to the photoperiod pathway. Estimates of population genetic summary statistics from the pooled data are similar to previous estimates, suggesting that pooled sequencing is reliable. The nonsynonymous SNPs tended to have both lower frequency differences and lower FST values between the two domains than silent ones. These results suggest the presence of purifying selection. The divergence between the two domains based on synonymous changes was around 5 million yr, a time similar to a recent phylogenetic estimate of 6 million yr but much larger than earlier estimates based on isozymes. Two approaches, one of them novel and considering both FST and the difference in allele frequencies between the two domains, were used to identify SNPs potentially under diversifying selection. SNPs from around 20 genes were detected, including genes previously identified as main targets of selection, such as PaPRR3 and PaGI. Copyright © 2016 Chen et al.

  6. Modeling the erythemal surface diffuse irradiance fraction for Badajoz, Spain

    NASA Astrophysics Data System (ADS)

    Sanchez, Guadalupe; Serrano, Antonio; Cancillo, María Luisa

    2017-10-01

    Despite its important role in human health and numerous biological processes, the diffuse component of erythemal ultraviolet irradiance (UVER) is scarcely measured at standard radiometric stations and therefore needs to be estimated. This study proposes and compares 10 empirical models for estimating the UVER diffuse fraction. These models are inspired by mathematical expressions originally used to estimate the total diffuse fraction, but here they are applied to the UVER case and tested against experimental measurements. In addition to adapting the various independent variables involved in these models to the UVER range, the total ozone column has been added in order to account for its strong impact on the attenuation of ultraviolet radiation. The proposed models are fitted to experimental measurements and validated against an independent subset. The best-performing model (RAU3) is based on a model proposed by Ruiz-Arias et al. (2010) and shows an r2 of 0.91 and a relative root-mean-square error (rRMSE) of 6.1%. This entirely empirical model performs better than previous semi-empirical approaches and therefore needs no additional information from physically based models. This study extends previous research into the ultraviolet range and provides reliable empirical models for accurately estimating the UVER diffuse fraction.

  7. Fine Pointing of Military Spacecraft

    DTIC Science & Technology

    2007-03-01

    estimate is high. But feedback controls are attempting to fix the attitude at the next time step with error based on the previous time step without using ...52 a. Stability Analysis Consider not using the reference trajectory in the feedback signal. The previous stability proof (Refs.[43],[46]) are no... robust steering law and quaternion feedback control [52]. TASS2 has center-of-gravity offset disturbance that must be countered by the three CMG

  8. Preliminary Multivariable Cost Model for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2010-01-01

Parametric cost models are routinely used to plan missions, compare concepts, and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multivariable space telescope cost model. The validity of previously published models is tested, cost-estimating relationships that are and are not significant cost drivers are identified, and interrelationships between variables are explored.

  9. Vision Based Localization in Urban Environments

    NASA Technical Reports Server (NTRS)

    McHenry, Michael; Cheng, Yang; Matthies, Larry

    2005-01-01

As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location and 3) the video streams produced by each camera. At each step during the traverse the system: captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we have derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of two primary components. The first is the vision system, which uses binocular stereo ranging together with a set of heuristics to identify features likely to be part of building exteriors and to compute an estimate of the robot's motion since the previous step. The resulting visual features and the associated range measurements are then fed to the second primary software component, a particle-filter based localization system. This system uses the map and the most recent results from the vision system to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and includes the results of applying the system to the global localization of a robot over an approximately half-kilometer traverse across JPL's Pasadena campus.
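
    A minimal sketch of the core loop of such a particle-filter localizer: one predict-weight-resample cycle over pose hypotheses. The noise levels and the placeholder measurement model are assumptions for illustration, not JPL's implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, weights, odom, meas, meas_fn, meas_sigma=1.0):
        """One predict-weight-resample cycle over robot poses (x, y, theta).

        particles: (N, 3) pose hypotheses; odom: (dx, dy, dtheta) from the visual
        motion estimate; meas: observed range to a mapped building feature;
        meas_fn: maps a pose to the range predicted by the map (placeholder).
        """
        # Predict: apply odometry with additive Gaussian noise
        particles = particles + odom + rng.normal(0.0, [0.05, 0.05, 0.01], particles.shape)
        # Weight: Gaussian likelihood of the range measurement
        predicted = np.apply_along_axis(meas_fn, 1, particles)
        weights = weights * np.exp(-0.5 * ((meas - predicted) / meas_sigma) ** 2)
        weights = weights / weights.sum()
        # Resample when the effective sample size collapses below N/2
        if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights
    ```

    Keeping the full weighted particle set, rather than a single pose, is what lets the system represent multiple candidate locations simultaneously, as the abstract notes.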

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuche; Young, Stanley; Gonder, Jeff

This study estimates the range of fuel and emissions impacts of an automated-vehicle (AV) based transit system that services campus-based developments, termed an automated mobility district (AMD). The study develops a framework to quantify the fuel consumption and greenhouse gas (GHG) emission impacts of a transit system comprised of AVs, taking into consideration average vehicle fleet composition, fuel consumption/GHG emissions of vehicles within specific speed bins, and the average occupancy of passenger vehicles and transit vehicles. The framework is exercised using a previous mobility analysis of a personal rapid transit (PRT) system, a system which shares many attributes with envisioned AV-based transit systems. Total fuel consumption and GHG emissions with and without an AMD are estimated, providing a range of potential system impacts on sustainability. The results of a previous case study based on a proposed implementation of PRT on the Kansas State University (KSU) campus in Manhattan, Kansas, serve as the basis to estimate personal miles traveled supplanted by an AMD at varying levels of service. The results show that an AMD has the potential to reduce total system fuel consumption and GHG emissions, but the amount is largely dependent on operating and ridership assumptions. The study points to the need to better understand ride-sharing scenarios and calls for future research on sustainability benefits of an AMD system at both vehicle and system levels.

  11. Robust Diagnosis Method Based on Parameter Estimation for an Interturn Short-Circuit Fault in Multipole PMSM under High-Speed Operation.

    PubMed

    Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo

    2015-11-20

This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short-circuit fault. Previous works in this area have suffered from the uncertainties of the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. The proposed method also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase and the values of G and Lq. To this end, two open-loop observers and a particle-swarm-based optimization method are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq while exhibiting robustness against parameter uncertainties.

  12. Astrometric exoplanet detection with Gaia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perryman, Michael; Hartman, Joel; Bakos, Gáspár Á.

    2014-12-10

We provide a revised assessment of the number of exoplanets that should be discovered by Gaia astrometry, extending previous studies to a broader range of spectral types, distances, and magnitudes. Our assessment is based on a large representative sample of host stars from the TRILEGAL Galaxy population synthesis model, recent estimates of the exoplanet frequency distributions as a function of stellar type, and detailed simulation of the Gaia observations using the updated instrument performance and scanning law. We use two approaches to estimate detectable planetary systems: one based on the signal-to-noise ratio of the astrometric signature per field crossing, easily reproducible and allowing comparisons with previous estimates, and a new and more robust metric based on orbit fitting to the simulated satellite data. With some plausible assumptions on planet occurrences, we find that some 21,000 (±6,000) high-mass (∼1-15 M_J) long-period planets should be discovered out to distances of ∼500 pc for the nominal 5 yr mission (including at least 1000-1500 around M dwarfs out to 100 pc), rising to some 70,000 (±20,000) for a 10 yr mission. We indicate some of the expected features of this exoplanet population, amongst them ∼25-50 intermediate-period (P ∼ 2-3 yr) transiting systems.
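
    As background to the signal-to-noise approach, the astrometric signature of a planet is the angular wobble it induces on its host star: α ≈ (Mp/M*)(a/d) arcsec, with the semi-major axis a in AU and the distance d in pc. A quick check with this standard formula (not the paper's full detection pipeline):

    ```python
    def astrometric_signature_uas(m_planet_mjup, m_star_msun, a_au, d_pc):
        """Angular semi-amplitude of the stellar wobble, in micro-arcseconds.

        alpha [arcsec] = (M_p / M_*) * a[AU] / d[pc]; 1 M_Jup ~ 9.546e-4 M_Sun.
        """
        mass_ratio = m_planet_mjup * 9.546e-4 / m_star_msun
        return mass_ratio * a_au / d_pc * 1e6

    # A Jupiter analog (1 M_Jup at 5.2 AU) around a solar-mass star at 100 pc:
    print(astrometric_signature_uas(1.0, 1.0, 5.2, 100.0))  # ~50 micro-arcsec
    ```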

  13. United States Data Center Energy Usage Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shehabi, Arman; Smith, Sarah; Sartor, Dale

This report estimates historical data center electricity consumption back to 2000, relying on previous studies and historical shipment data, and forecasts consumption out to 2020 based on new trends and the most recent data available. Figure ES-1 provides an estimate of total U.S. data center electricity use (servers, storage, network equipment, and infrastructure) from 2000-2020. In 2014, data centers in the U.S. consumed an estimated 70 billion kWh, representing about 1.8% of total U.S. electricity consumption. Current study results show data center electricity consumption increased by about 4% from 2010-2014, a large shift from the 24% increase estimated from 2005-2010 and the nearly 90% increase estimated from 2000-2005. Energy use is expected to continue increasing slightly in the near future, rising 4% from 2014-2020, the same rate as over the past five years. Based on current trend estimates, U.S. data centers are projected to consume approximately 73 billion kWh in 2020.
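
    The projection is simple compounding of the quoted growth rates; a quick consistency check of the abstract's figures:

    ```python
    # 70 billion kWh in 2014 and ~4% total growth over 2014-2020 (per the abstract)
    use_2014 = 70.0
    use_2020 = use_2014 * 1.04
    print(round(use_2020, 1))  # ~72.8 billion kWh, consistent with the ~73 quoted

    # Rough back-cast under the cited period growth rates:
    # ~90% (2000-2005), ~24% (2005-2010), ~4% (2010-2014)
    use_2000 = use_2014 / (1.90 * 1.24 * 1.04)
    print(round(use_2000, 1))
    ```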

  14. Shear strength of clay and silt embankments.

    DOT National Transportation Integrated Search

    2009-09-01

    Highway embankment is one of the most common large-scale geotechnical facilities constructed in Ohio. In the past, the design of these embankments was largely based on soil shear strength properties that had been estimated from previously published e...

  15. Estimating fatality rates in occupational light vehicle users using vehicle registration and crash data.

    PubMed

    Stuckey, Rwth; LaMontagne, Anthony D; Glass, Deborah C; Sim, Malcolm R

    2010-04-01

To estimate occupational light vehicle (OLV) fatality numbers using vehicle registration and crash data and compare these with previous estimates based on workers' compensation data. New South Wales (NSW) Roads and Traffic Authority (RTA) vehicle registration and crash data were obtained for 2004. NSW is the only Australian jurisdiction with mandatory work-use registration, which was used as a proxy for work-relatedness. OLV fatality rates were calculated using registration data as the denominator, and comparisons were made with published 2003/04 fatalities based on workers' compensation data. Thirty-four NSW RTA OLV-user fatalities were identified, a rate of 4.5 deaths per 100,000 organisationally registered OLVs, whereas the Australian Safety and Compensation Council (ASCC) reported 28 OLV deaths Australia-wide. More OLV-user fatalities were identified from vehicle registration-based data than from workers' compensation estimates, and the registration data are likely to provide an improved estimate of fatalities specific to OLV use. OLV use is an important cause of traumatic fatalities that would be better identified through the use of vehicle registration data, which provides a stronger evidence base from which to develop policy responses. © 2010 The Authors. Journal Compilation © 2010 Public Health Association of Australia.

  16. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

Analysis of flood trends is vital since flooding threatens human life and livelihoods in financial, environmental, and security terms. Annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that MLE provides unstable results, especially for small sample sizes. In this study, we used a Bayesian Markov Chain Monte Carlo (MCMC) approach based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference method that estimates parameters from the posterior distribution obtained via Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space that plain Monte Carlo methods face. This approach also accounts for more of the uncertainty in parameter estimation, and thus yields a better prediction of maximum river flow in Sabah.
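
    A minimal random-walk Metropolis-Hastings sampler for the three GEV parameters, sketched with scipy's genextreme density (note scipy's shape convention c = -ξ). The flat priors, proposal scales, and synthetic data are illustrative assumptions, not the study's configuration.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(1)

    def log_posterior(theta, data):
        """Log posterior for GEV(mu, sigma, xi), flat priors on (mu, log sigma, xi)."""
        mu, log_sigma, xi = theta
        ll = genextreme.logpdf(data, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()
        return ll if np.isfinite(ll) else -np.inf

    def metropolis_hastings(data, n_iter=20000, step=(0.5, 0.1, 0.05)):
        theta = np.array([data.mean(), np.log(data.std()), 0.1])  # crude start
        lp = log_posterior(theta, data)
        chain = np.empty((n_iter, 3))
        for i in range(n_iter):
            proposal = theta + rng.normal(0.0, step)    # symmetric random walk
            lp_new = log_posterior(proposal, data)
            if np.log(rng.uniform()) < lp_new - lp:     # Metropolis accept/reject
                theta, lp = proposal, lp_new
            chain[i] = theta
        return chain

    # Synthetic annual maxima (scipy's shape c corresponds to xi = -c)
    maxima = genextreme.rvs(c=-0.1, loc=100.0, scale=20.0, size=40, random_state=2)
    chain = metropolis_hastings(maxima)
    print(chain[10000:].mean(axis=0))  # posterior means of (mu, log sigma, xi)
    ```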

  17. Estimating neural response functions from fMRI

    PubMed Central

    Kumar, Sukhbinder; Penny, William

    2014-01-01

    This paper proposes a methodology for estimating Neural Response Functions (NRFs) from fMRI data. These NRFs describe non-linear relationships between experimental stimuli and neuronal population responses. The method is based on a two-stage model comprising an NRF and a Hemodynamic Response Function (HRF) that are simultaneously fitted to fMRI data using a Bayesian optimization algorithm. This algorithm also produces a model evidence score, providing a formal model comparison method for evaluating alternative NRFs. The HRF is characterized using previously established “Balloon” and BOLD signal models. We illustrate the method with two example applications based on fMRI studies of the auditory system. In the first, we estimate the time constants of repetition suppression and facilitation, and in the second we estimate the parameters of population receptive fields in a tonotopic mapping study. PMID:24847246

  18. Outgassed water on Mars - Constraints from melt inclusions in SNC meteorites

    NASA Technical Reports Server (NTRS)

    Mcsween, Harry Y., Jr.; Harvey, Ralph P.

    1993-01-01

    The SNC (shergottite-nakhlite-chassignite) meteorites, thought to be igneous rocks from Mars, contain melt inclusions trapped at depth in early-formed crystals. Determination of the pre-eruptive water contents of SNC parental magmas from calculations of the solidification histories of these amphibole-bearing inclusions indicates that Martian magmas commonly contained 1.4 percent water by weight. When combined with an estimate of the volume of igneous materials on Mars, this information suggests that the total amount of water outgassed since 3.9 billion years ago corresponds to global depths on the order of 200 meters. This value is significantly higher than previous geochemical estimates but lower than estimates based on erosion by floods. These results imply a wetter Mars interior than has been previously thought and support suggestions of significant outgassing before formation of a stable crust or heterogeneous accretion of a veneer of cometary matter.

  19. The global magnitude-frequency relationship for large explosive volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Rougier, Jonathan; Sparks, R. Stephen J.; Cashman, Katharine V.; Brown, Sarah K.

    2018-01-01

For volcanoes, as for other natural hazards, the frequency of large events diminishes with their magnitude, as captured by the magnitude-frequency relationship. Assessing this relationship is valuable both for the insights it provides about volcanism and for the practical challenge of risk management. We derive a global magnitude-frequency relationship for explosive volcanic eruptions of at least 300 Mt of erupted mass (or M4.5). Our approach is essentially empirical, based on the eruptions recorded in the LaMEVE database. It differs from previous approaches mainly in our conservative treatment of magnitude-rounding and under-recording. Our estimate for the return period of 'super-eruptions' (1000 Gt, or M8) is 17 ka (95% CI: 5.2 ka, 48 ka), which is substantially shorter than previous estimates, indicating that volcanoes pose a larger risk to human civilisation than previously thought.

  20. Retrieval of volcanic ash height from satellite-based infrared measurements

    NASA Astrophysics Data System (ADS)

    Zhu, Lin; Li, Jun; Zhao, Yingying; Gong, He; Li, Wenjie

    2017-05-01

A new algorithm for retrieving volcanic ash cloud height from satellite-based measurements is presented. This algorithm, which was developed in preparation for China's next-generation meteorological satellite (FY-4), is based on volcanic ash microphysical property simulation and statistical optimal estimation theory. The MSG satellite's main payload, the 12-channel Spinning Enhanced Visible and Infrared Imager, was used as proxy data to test this new algorithm. A series of eruptions of Iceland's Eyjafjallajökull volcano during April to May 2010 and the Puyehue-Cordón Caulle volcanic complex eruption in the Chilean Andes on 16 June 2011 were selected as two typical cases for evaluating the algorithm under various meteorological backgrounds. Independent volcanic ash simulation training samples and satellite-based Cloud-Aerosol Lidar with Orthogonal Polarization data were used as validation data. It is demonstrated that the statistically based volcanic ash height algorithm is able to rapidly retrieve volcanic ash heights globally. The retrieved ash heights show comparable accuracy with both the independent training data and the lidar measurements, which is consistent with previous studies. However, under complicated backgrounds with multiple vertical layers, underlying stratus clouds tend to degrade the final retrieval accuracy; this problem remains unresolved here, as it does for many other previously published methods using passive satellite sensors. Compared with previous studies, the FY-4 ash height algorithm is independent of simultaneous atmospheric profiles, providing a flexible way to estimate volcanic ash height using passive satellite infrared measurements.

  1. Methods for estimating peak-flow frequencies at ungaged sites in Montana based on data through water year 2011: Chapter F in Montana StreamStats

    USGS Publications Warehouse

    Sando, Roy; Sando, Steven K.; McCarthy, Peter M.; Dutton, DeAnn M.

    2016-04-05

The U.S. Geological Survey (USGS), in cooperation with the Montana Department of Natural Resources and Conservation, completed a study to update methods for estimating peak-flow frequencies at ungaged sites in Montana based on peak-flow data at streamflow-gaging stations through water year 2011. The methods allow estimation of peak-flow frequencies (that is, peak-flow magnitudes, in cubic feet per second, associated with annual exceedance probabilities of 66.7, 50, 42.9, 20, 10, 4, 2, 1, 0.5, and 0.2 percent) at ungaged sites. The annual exceedance probabilities correspond to 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. Regional regression analysis is a primary focus of Chapter F of this Scientific Investigations Report, and regression equations for estimating peak-flow frequencies at ungaged sites in eight hydrologic regions in Montana are presented. The regression equations are based on analysis of peak-flow frequencies and basin characteristics at 537 streamflow-gaging stations in or near Montana and were developed using generalized least squares regression or weighted least squares regression. All of the data used in calculating basin characteristics that were included as explanatory variables in the regression equations were developed for and are available through the USGS StreamStats application (http://water.usgs.gov/osw/streamstats/) for Montana. StreamStats is a Web-based geographic information system application that was created by the USGS to provide users with access to an assortment of analytical tools that are useful for water-resource planning and management. The primary purpose of the Montana StreamStats application is to provide estimates of basin characteristics and streamflow characteristics for user-selected ungaged sites on Montana streams. The regional regression equations presented in this report chapter can be conveniently solved using the Montana StreamStats application. Selected results from this study were compared with results of previous studies. For most hydrologic regions, the regression equations reported for this study had lower mean standard errors of prediction (in percent) than the previously reported regression equations for Montana. The equations presented for this study are considered to be an improvement on the previously reported equations primarily because this study (1) included 13 more years of peak-flow data; (2) included 35 more streamflow-gaging stations than previous studies; (3) used a detailed geographic information system (GIS)-based definition of the regulation status of streamflow-gaging stations, which allowed better determination of the unregulated peak-flow records that are appropriate for use in the regional regression analysis; (4) included advancements in GIS and remote-sensing technologies, which allowed more convenient calculation of basin characteristics and investigation of many more candidate basin characteristics; and (5) included advancements in computational and analytical methods, which allowed more thorough and consistent data analysis. This report chapter also presents other methods for estimating peak-flow frequencies at ungaged sites. Two methods for estimating peak-flow frequencies at ungaged sites located on the same streams as streamflow-gaging stations are described.
Additionally, envelope curves relating maximum recorded annual peak flows to contributing drainage area for each of the eight hydrologic regions in Montana are presented and compared to a national envelope curve. In addition to providing general information on characteristics of large peak flows, the regional envelope curves can be used to assess the reasonableness of peak-flow frequency estimates determined using the regression equations.
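
    The correspondence between the annual exceedance probabilities p and the recurrence intervals quoted above is simply T = 1/p:

    ```python
    # Annual exceedance probabilities from the report and the implied
    # recurrence intervals T = 1/p (in years)
    for p_percent in [66.7, 50, 42.9, 20, 10, 4, 2, 1, 0.5, 0.2]:
        print(f"{p_percent:5.1f}% -> {100.0 / p_percent:6.1f}-yr recurrence")
    ```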

  2. CAT Model with Personalized Algorithm for Evaluation of Estimated Student Knowledge

    ERIC Educational Resources Information Center

    Andjelic, Svetlana; Cekerevac, Zoran

    2014-01-01

    This article presents the original model of the computer adaptive testing and grade formation, based on scientifically recognized theories. The base of the model is a personalized algorithm for selection of questions depending on the accuracy of the answer to the previous question. The test is divided into three basic levels of difficulty, and the…

  3. Desirable properties of wood for sustainable development in the twenty-first century

    Treesearch

    Kenneth E. Skog; Theodore H. Wegner; Ted Bilek; Charles H. Michler

    2015-01-01

    We previously identified desirable properties for wood based on current market-based trends for commercial uses (Wegner et al. 2010). World business models increasingly incorporate the concept of social responsibility and the tenets of sustainable development. Sustainable development is needed to support an estimated 9 billion people by 2050 within the carrying...

4. Imputation and Model-Based Updating Technique for Annual Forest Inventories

    Treesearch

    Ronald E. McRoberts

    2001-01-01

    The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...

  5. Human joint motion estimation for electromyography (EMG)-based dynamic motion control.

    PubMed

    Zhang, Qin; Hosoda, Ryo; Venture, Gentiane

    2013-01-01

This study investigates a joint motion estimation method based on electromyography (EMG) signals during dynamic movement. In most EMG-based humanoid or prosthetic control systems, EMG features are directly or indirectly used to trigger intended motions. However, both physiological and nonphysiological factors can influence EMG characteristics during dynamic movements, resulting in subject-specificity, non-stationarity, and crosstalk problems. In particular, when motion velocity and/or joint torque are not constrained, joint motion estimation from EMG signals is more challenging. In this paper, we propose a joint motion estimation method based on muscle activation recorded from a pair of agonist and antagonist muscles of the joint. A linear state-space model with multiple inputs and a single output is proposed to map the muscle activity to joint motion, and an adaptive estimation method is proposed to train the model. The estimation performance is evaluated on a single elbow flexion-extension movement performed by two subjects. All the results for the two subjects at two load levels indicate the feasibility and suitability of the proposed method for joint motion estimation. The estimation root-mean-square error is within 8.3%-10.6%, which is lower than that reported in several previous studies. Moreover, this method is able to overcome the subject-specificity problem and compensate for non-stationary EMG properties.
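
    A generic stand-in for the two-input, single-output linear model described: an ARX model from agonist and antagonist activations to joint angle, fitted here by ordinary least squares rather than the authors' adaptive estimator. The model orders and variable names are assumptions.

    ```python
    import numpy as np

    def fit_arx(angle, act_agonist, act_antagonist, na=2, nb=2):
        """Fit y[k] = sum_i a_i*y[k-i] + sum_j (b_j*u1[k-j] + c_j*u2[k-j])
        (i = 1..na, j = 1..nb) by ordinary least squares."""
        y, u1, u2 = (np.asarray(v, float) for v in (angle, act_agonist, act_antagonist))
        n0 = max(na, nb)
        rows, targets = [], []
        for k in range(n0, len(y)):
            # Regressor: most recent past outputs and both muscle inputs
            past = np.concatenate([y[k-na:k][::-1], u1[k-nb:k][::-1], u2[k-nb:k][::-1]])
            rows.append(past)
            targets.append(y[k])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        return theta  # [a_1..a_na, b_1..b_nb, c_1..c_nb]
    ```

    An adaptive scheme would update these coefficients recursively (e.g., recursive least squares) as new samples arrive, rather than refitting in batch.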

  6. Monitoring inter-channel nonlinearity based on differential pilot

    NASA Astrophysics Data System (ADS)

    Wang, Wanli; Yang, Aiying; Guo, Peng; Lu, Yueming; Qiao, Yaojun

    2018-06-01

We modify and simplify an inter-channel nonlinearity (NL) estimation method by using a differential pilot. Compared to previous works, the proposed inter-channel NL estimation method has much lower complexity and requires no modification of the transmitter. The performance of inter-channel NL monitoring is tested at different launch powers. For both QPSK and 16QAM systems with 9 channels, the estimation error of the inter-channel NL is lower than 1 dB when the total launch power is greater than 12 dBm after 1000 km of optical transmission. Finally, we compare our inter-channel NL estimation method with other methods.

  7. Linear mixed model for heritability estimation that explicitly addresses environmental variation.

    PubMed

    Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S

    2016-07-05

The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects: one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
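
    A minimal construction of the environmental covariance described: a Gaussian radial basis function over individuals' spatial coordinates. The length scale and coordinates below are placeholders, not values from the study.

    ```python
    import numpy as np

    def rbf_covariance(coords, length_scale=10.0):
        """Gaussian RBF covariance K_ij = exp(-||x_i - x_j||^2 / (2 l^2)) over
        an (n x 2) array of spatial coordinates (e.g., km eastings/northings)."""
        sq_dists = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq_dists / (2.0 * length_scale ** 2))

    # Model sketch: y = X*beta + g + e_env + noise, with
    # cov(g) = sg2 * K_genomic and cov(e_env) = se2 * rbf_covariance(coords)
    coords = np.random.default_rng(0).uniform(0, 100, size=(5, 2))
    print(rbf_covariance(coords).round(2))
    ```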

  8. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments, so additional issues must be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions of different genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as those from The Cancer Genome Atlas (TCGA), can be used as a point of reference: read counts and their dispersions are estimated from the reference's distribution, and with that information the power and sample size are estimated and summarized. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.

  9. Comprehensive analysis of proton range uncertainties related to stopping-power-ratio estimation using dual-energy CT imaging

    NASA Astrophysics Data System (ADS)

    Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.

    2017-09-01

The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar; the following results therefore apply to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant factor contributing to uncertainty in the DECT approach was the imaging uncertainties, followed by the DECT modeling uncertainties. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
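
    One plausible reading of how such numbers combine (not necessarily the paper's exact procedure): independent uncertainty categories add in quadrature within a tissue group, and a site composite weights the tissue groups by their relative proportion. All values except the abstract's per-tissue sigmas are hypothetical.

    ```python
    import numpy as np

    def tissue_sigma(category_sigmas):
        """Independent 1-sigma uncertainty categories combined in quadrature."""
        return float(np.sqrt(np.sum(np.square(category_sigmas))))

    def composite_site_sigma(tissue_sigmas, weights):
        """Proportion-weighted combination over tissue groups for one site."""
        w = np.asarray(weights, float) / np.sum(weights)
        return float(np.sqrt(np.sum(w * np.square(tissue_sigmas))))

    # Hypothetical per-category 1-sigma values for lung tissue (five categories)
    print(tissue_sigma([2.8, 1.9, 1.2, 0.9, 0.5]))   # ~3.7%, near the quoted 3.8%

    # Abstract's per-tissue 1-sigma SPR estimates: lung 3.8%, soft 1.2%, bone 2.0%
    # Hypothetical tissue mix (fractions of the beam path) for a lung case:
    print(composite_site_sigma([3.8, 1.2, 2.0], weights=[0.3, 0.6, 0.1]))
    ```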

  10. Aerosol Direct Radiative Effects and Heating in the New Era of Active Satellite Observations

    NASA Astrophysics Data System (ADS)

    Matus, Alexander V.

Atmospheric aerosols impact the global energy budget by scattering and absorbing solar radiation. Despite their impacts, aerosols remain a significant source of uncertainty in our ability to predict future climate. Multi-sensor observations from the A-Train satellite constellation provide valuable observational constraints necessary to reduce uncertainties in model simulations of aerosol direct effects. This study will discuss recent efforts to quantify aerosol direct effects globally and regionally using CloudSat's radiative fluxes and heating rates product. Improving upon previous techniques, this approach leverages the capability of CloudSat and CALIPSO to retrieve vertically resolved estimates of cloud and aerosol properties critical for accurately evaluating the radiative impacts of aerosols. We estimate the global annual mean aerosol direct effect to be -1.9 +/- 0.6 W/m2, which is in better agreement with previously published estimates from global models than previous satellite-based estimates. Detailed comparisons against a fully coupled simulation of the Community Earth System Model, however, reveal that this agreement on the global annual mean masks large regional discrepancies between modeled and observed estimates of aerosol direct effects related to model biases in cloud cover. A low bias in stratocumulus cloud cover over the southeastern Pacific Ocean, for example, leads to an overestimate of the radiative effects of marine aerosols. Stratocumulus clouds over the southeastern Atlantic Ocean can enhance aerosol absorption by 50%, allowing aerosol layers to remain self-lofted in an area of subsidence. Aerosol heating is found to peak at 0.6 +/- 0.3 K/day at an altitude of 4 km in September, when biomass burning reaches a maximum. Finally, the contributions of observed aerosol components are evaluated to estimate the direct radiative forcing of anthropogenic aerosols. Aerosol forcing is computed using satellite-based radiative kernels that describe the sensitivity of shortwave fluxes in response to aerosol optical depth. The direct radiative forcing is estimated to be -0.21 W/m2, with the largest contributions from pollution that is partially offset by a positive forcing from smoke aerosols. The results from these analyses provide new benchmarks on the global radiative effects of aerosols and offer new insights for improving future assessments.

  11. [Differences in mortality between indigenous and non-indigenous persons in Brazil based on the 2010 Population Census].

    PubMed

    Campos, Marden Barbosa de; Borges, Gabriel Mendes; Queiroz, Bernardo Lanza; Santos, Ricardo Ventura

    2017-06-12

There have been no previous estimates on differences in adult or overall mortality in indigenous peoples in Brazil, although such indicators are extremely important for reducing social inequities in health in this population segment. Brazil has made significant strides in recent decades to fill the gaps in data on indigenous peoples in the national statistics. The aim of this paper is to present estimated mortality rates for indigenous and non-indigenous persons in different age groups, based on data from the 2010 Population Census. The estimates used the question on deaths from specific household surveys. The results indicate important differences in mortality rates between indigenous and non-indigenous persons in all the selected age groups and in both sexes. These differences are more pronounced in childhood, especially in girls. The indicators corroborate the fact that indigenous peoples in Brazil are in a situation of extreme vulnerability in terms of their health, based on these unprecedented estimates of the size of these differences.

  12. Trust Measurement using Multimodal Behavioral Analysis and Uncertainty Aware Trust Calibration

    DTIC Science & Technology

    2018-01-05

to estimate their performance based on their estimation on all prior trials. In the meanwhile via comparing the decisions of participants with the...it is easier compared with situations when more trials have been done. It should be noted that if a participant is good at memorizing the previous...them. The proposed study, being quantitative and explorative, is expected to reveal a number of findings that benefit interaction system design and

  13. Identification of open quantum systems from observable time traces

    DOE PAGES

    Zhang, Jun; Sarovar, Mohan

    2015-05-27

Estimating the parameters that dictate the dynamics of a quantum system is an important task for quantum information processing and quantum metrology, as well as fundamental physics. In this paper we develop a method for parameter estimation for Markovian open quantum systems using a temporal record of measurements on the system. The method is based on system realization theory and is a generalization of our previous work on identification of Hamiltonian parameters.

  14. Estimation of the Contribution of CYP2C8 and CYP3A4 in Repaglinide Metabolism by Human Liver Microsomes Under Various Buffer Conditions.

    PubMed

    Kudo, Toshiyuki; Goda, Hitomi; Yokosuka, Yuki; Tanaka, Ryo; Komatsu, Seina; Ito, Kiyomi

    2017-09-01

We have previously reported that the microsomal activities of CYP2C8 and CYP3A4 largely depend on the buffer conditions used in in vitro metabolic studies, with different patterns observed between the two isozymes. In the present study, therefore, the possible buffer condition dependence of the fraction of repaglinide, a dual substrate of CYP2C8 and CYP3A4, metabolized by CYP2C8 (fm2C8) was investigated using human liver microsomes under various buffer conditions. Montelukast and ketoconazole showed potent and concentration-dependent inhibition of CYP2C8-mediated paclitaxel 6α-hydroxylation and CYP3A4-mediated triazolam α-hydroxylation, respectively, without dependence on the buffer condition. Repaglinide depletion was inhibited by both inhibitors, but the degree of inhibition depended on the buffer conditions. Based on these results, the contribution of CYP2C8 to repaglinide metabolism was estimated to be larger than that of CYP3A4 under each buffer condition, and the fm2C8 value of 0.760, estimated in 50 mM phosphate buffer, was the closest to the value (0.801) estimated in our previous modeling analysis based on its concentration increase in a clinical drug interaction study. Researchers should be aware of the possibility of the buffer condition affecting the estimated contribution of enzyme(s) in drug metabolism processes involving multiple enzymes. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Chen, Z; Nath, R

Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy, or only when the position uncertainty (probability of being out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and the 2D tracking error of previous time points. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or to initialize motion compensation if it is out of the margin.

  16. Non-contact estimation of heart rate and oxygen saturation using ambient light.

    PubMed

    Bal, Ufuk

    2015-01-01

We propose a robust method for automated computation of heart rate (HR) from digital color video recordings of the human face. In order to extract photoplethysmographic signals, two orthogonal vectors of the RGB color space are used. We use a dual-tree complex wavelet transform based denoising algorithm to reduce artifacts (e.g., artificial lighting, movement). Most of the previous work on skin-color-based HR estimation performed experiments with healthy volunteers and focused on solving motion artifacts. In addition to healthy volunteers, we performed experiments with child patients in pediatric intensive care units. In order to investigate the possible factors that affect non-contact HR monitoring in a clinical environment, we studied the relation between hemoglobin levels and HR estimation errors. Low hemoglobin causes underestimation of HR. Nevertheless, we conclude that our method can provide acceptable accuracy for estimating the mean HR of patients in a clinical environment, where the measurements can be performed remotely. In addition to mean heart rate estimation, we performed experiments to estimate oxygen saturation. We observed strong correlations between our SpO2 estimations and the commercial oximeter readings.
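
    A minimal sketch of pulse extraction from a frame-averaged RGB trace: project onto two orthogonal chrominance-like vectors and take the dominant spectral peak in a plausible HR band. The projection vectors are generic choices and the wavelet denoising step is omitted, so this is an illustration of the general approach, not the paper's exact method.

    ```python
    import numpy as np

    def estimate_hr_bpm(rgb_trace, fps):
        """rgb_trace: (N, 3) mean skin-pixel RGB per frame (N of a few hundred
        frames or more); returns the estimated heart rate in beats per minute."""
        x = rgb_trace / rgb_trace.mean(axis=0) - 1.0          # normalize, zero-mean
        v1 = np.array([1.0, -1.0, 0.0])                       # orthogonal chrominance-like
        v2 = np.array([1.0, 1.0, -2.0])                       # projection vectors
        s1, s2 = x @ v1, x @ v2
        s = s1 + (np.std(s1) / (np.std(s2) + 1e-12)) * s2     # combine projections
        freqs = np.fft.rfftfreq(len(s), d=1.0 / fps)
        spec = np.abs(np.fft.rfft(s - s.mean()))
        band = (freqs >= 0.7) & (freqs <= 3.5)                # 42-210 bpm plausibility band
        return 60.0 * freqs[band][np.argmax(spec[band])]
    ```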

  17. Non-contact estimation of heart rate and oxygen saturation using ambient light

    PubMed Central

    Bal, Ufuk

    2014-01-01

We propose a robust method for automated computation of heart rate (HR) from digital color video recordings of the human face. In order to extract photoplethysmographic signals, two orthogonal vectors of the RGB color space are used. We use a dual-tree complex wavelet transform based denoising algorithm to reduce artifacts (e.g., artificial lighting, movement). Most of the previous work on skin-color-based HR estimation performed experiments with healthy volunteers and focused on solving motion artifacts. In addition to healthy volunteers, we performed experiments with child patients in pediatric intensive care units. In order to investigate the possible factors that affect non-contact HR monitoring in a clinical environment, we studied the relation between hemoglobin levels and HR estimation errors. Low hemoglobin causes underestimation of HR. Nevertheless, we conclude that our method can provide acceptable accuracy for estimating the mean HR of patients in a clinical environment, where the measurements can be performed remotely. In addition to mean heart rate estimation, we performed experiments to estimate oxygen saturation. We observed strong correlations between our SpO2 estimations and the commercial oximeter readings. PMID:25657877

  18. Statistical field estimators for multiscale simulations.

    PubMed

    Eapen, Jacob; Li, Ju; Yip, Sidney

    2005-11-01

    We present a systematic approach for generating smooth and accurate fields from particle simulation data using the notions of statistical inference. As an extension to a parametric representation based on the maximum likelihood technique previously developed for velocity and temperature fields, a nonparametric estimator based on the principle of maximum entropy is proposed for particle density and stress fields. Both estimators are applied to represent molecular dynamics data on shear-driven flow in an enclosure which exhibits a high degree of nonlinear characteristics. We show that the present density estimator is a significant improvement over ad hoc bin averaging and is also free of systematic boundary artifacts that appear in the method of smoothing kernel estimates. Similarly, the velocity fields generated by the maximum likelihood estimator do not show any edge effects that can be erroneously interpreted as slip at the wall. For low Reynolds numbers, the velocity fields and streamlines generated by the present estimator are benchmarked against Newtonian continuum calculations. For shear velocities that are a significant fraction of the thermal speed, we observe a form of shear localization that is induced by the confining boundary.

  19. Automatic threshold selection for multi-class open set recognition

    NASA Astrophysics Data System (ADS)

    Scherreik, Matthew; Rigling, Brian

    2017-05-01

Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first component is a base classifier which estimates the most likely class of a given example. The second component consists of open set logic which estimates if the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion. That is, a candidate label is first estimated by the base classifier, and the true membership of the example to the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes which were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
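
    A minimal sketch of the feed-forward operation described: a Platt-calibrated SVM proposes a candidate label, and a per-class posterior threshold rejects unknowns. The fixed thresholds below stand in for the output of the iterative selection algorithm, and integer class labels are assumed.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def open_set_predict(clf, X, thresholds, unknown=-1):
        """Feed-forward open-set decision: take the base classifier's candidate
        label, then reject if its posterior falls below that class's threshold."""
        proba = clf.predict_proba(X)                 # requires probability=True
        cand = np.argmax(proba, axis=1)              # candidate class indices
        conf = proba[np.arange(len(X)), cand]        # posterior of the candidate
        labels = clf.classes_[cand].copy()           # assumes integer labels
        labels[conf < thresholds[cand]] = unknown    # below threshold -> unknown
        return labels

    # Usage sketch:
    # clf = SVC(probability=True).fit(X_train, y_train)   # Platt-calibrated SVM
    # thresholds = np.array([0.70, 0.80, 0.75])           # per class, from selection
    # y_pred = open_set_predict(clf, X_test, thresholds)
    ```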

  20. The rate and character of spontaneous mutation in an RNA virus.

    PubMed Central

    Malpica, José M; Fraile, Aurora; Moreno, Ignacio; Obies, Clara I; Drake, John W; García-Arenal, Fernando

    2002-01-01

    Estimates of spontaneous mutation rates for RNA viruses are few and uncertain, most notably due to their dependence on tiny mutation reporter sequences that may not well represent the whole genome. We report here an estimate of the spontaneous mutation rate of tobacco mosaic virus using an 804-base cognate mutational target, the viral MP gene that encodes the movement protein (MP). Selection against newly arising mutants was countered by providing MP function from a transgene. The estimated genomic mutation rate was on the lower side of the range previously estimated for lytic animal riboviruses. We also present the first unbiased riboviral mutational spectrum. The proportion of base substitutions is the same as that in a retrovirus but is lower than that in most DNA-based organisms. Although the MP mutant frequency was 0.02-0.05, 35% of the sequenced mutants contained two or more mutations. Therefore, the mutation process in populations of TMV and perhaps of riboviruses generally differs profoundly from that in populations of DNA-based microbes and may be strongly influenced by a subpopulation of mutator polymerases. PMID:12524327

  1. Fossils matter: improved estimates of divergence times in Pinus reveal older diversification.

    PubMed

    Saladin, Bianca; Leslie, Andrew B; Wüest, Rafael O; Litsios, Glenn; Conti, Elena; Salamin, Nicolas; Zimmermann, Niklaus E

    2017-04-04

    The taxonomy of pines (genus Pinus) is widely accepted and a robust gene tree based on entire plastome sequences exists. However, there is a large discrepancy in estimated divergence times of major pine clades among existing studies, mainly due to differences in fossil placement and dating methods used. We currently lack a dated molecular phylogeny that makes use of the rich pine fossil record, and this study is the first to estimate the divergence dates of pines based on a large number of fossils (21) evenly distributed across all major clades, in combination with applying both node and tip dating methods. We present a range of molecular phylogenetic trees of Pinus generated within a Bayesian framework. We find the origin of crown Pinus is likely up to 30 Myr older (Early Cretaceous) than inferred in most previous studies (Late Cretaceous) and propose generally older divergence times for major clades within Pinus than previously thought. Our age estimates vary significantly between the different dating approaches, but the results generally agree on older divergence times. We present a revised list of 21 fossils that are suitable to use in dating or comparative analyses of pines. Reliable estimates of divergence times in pines are essential if we are to link diversification processes and functional adaptation of this genus to geological events or to changing climates. In addition to older divergence times in Pinus, our results also indicate that node age estimates in pines depend on dating approaches and the specific fossil sets used, reflecting inherent differences in various dating approaches. The sets of dated phylogenetic trees of pines presented here provide a way to account for uncertainties in age estimations when applying comparative phylogenetic methods.

  2. Annual regression-based estimates of evapotranspiration for the contiguous United States based on climate, remote sensing, and stream gage data

    NASA Astrophysics Data System (ADS)

    Reitz, M. D.; Sanford, W. E.; Senay, G. B.; Cazenas, J.

    2015-12-01

Evapotranspiration (ET) is a key quantity in the hydrologic cycle, accounting for ~70% of precipitation across the contiguous United States (CONUS). However, it is a challenge to estimate, due to the difficulty of making direct measurements and gaps in our theoretical understanding. Here we present a new data-driven, ~1 km2 resolution map of long-term average actual evapotranspiration rates across the CONUS. The new ET map is a function of the USGS Landsat-derived National Land Cover Database (NLCD), precipitation, temperature, and daily average temperature range (from the PRISM climate dataset), and is calibrated to long-term water balance data from 679 watersheds. It is unique from previously presented ET maps in that (1) it was co-developed with estimates of runoff and recharge; (2) the regression equation was chosen from among many tested, previously published and newly proposed functional forms for its optimal description of long-term water balance ET data; (3) it has values over open-water areas that are derived from separate mass-transfer and humidity equations; and (4) the data include additional precipitation representing amounts converted from 2005 USGS water-use census irrigation data. The regression equation is calibrated using data from 2000-2013, but can also be applied to individual years with their corresponding input datasets. Comparisons among this new map, the more detailed remote-sensing-based estimates of MOD16 and SSEBop, and AmeriFlux ET tower measurements show encouraging consistency, and indicate that the empirical ET estimate approach presented here produces closer agreement with independent flux tower data for annual average actual ET than other more complex remote sensing approaches.

  3. Fetal QRS detection and heart rate estimation: a wavelet-based approach.

    PubMed

    Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula

    2014-08-01

Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world, but in spite of recent advances in analysis methods, inherent technical limitations still bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector, and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single-lead (SL) based marks were combined in a single annotator with post-processing rules (SLR), from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations were considered for validation, with SLR outperforming SL, including ICA-based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1-min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR.

  4. Sleep Quality Estimation based on Chaos Analysis for Heart Rate Variability

    NASA Astrophysics Data System (ADS)

    Fukuda, Toshio; Wakuda, Yuki; Hasegawa, Yasuhisa; Arai, Fumihito; Kawaguchi, Mitsuo; Noda, Akiko

In this paper, we propose an algorithm to estimate sleep quality based on heart rate variability using chaos analysis. Polysomnography (PSG) is a conventional and reliable system for diagnosing sleep disorders and evaluating their severity and therapeutic effect by estimating sleep quality from multiple channels. However, recording requires substantial time and a controlled measurement environment, and analyzing PSG data is laborious because the large volume of sensed data must be evaluated manually. At the same time, attention has focused on people who make mistakes or cause accidents due to loss of regular sleep and homeostasis. A simple home system for checking one's own sleep is therefore required, and an estimation algorithm for such a system must be developed. We propose an algorithm that estimates sleep quality based only on heart rate variability, which can be measured by a simple sensor such as a pressure or infrared sensor in an uncontrolled environment, by experimentally finding the relationship between chaos indices and sleep quality. A system including the estimation algorithm can inform a user of the patterns and quality of their daily sleep, so that the user can arrange their schedule in advance, pay more attention based on the sleep results, and consult a doctor.

  5. Comparative study on fractal analysis of interferometry images with application to tear film surface quality assessment.

    PubMed

    Szyperski, Piotr D

    2018-06-01

The purpose of this research was to evaluate the applicability of fractal dimension (FD) estimators to assess lateral shearing interferometry (LSI) measurements of tear film surface quality. Retrospective recordings of tear film measured with LSI were used: 69 from healthy subjects and 41 from patients diagnosed with dry eye syndrome. Five surface quality descriptors were considered: four based on FD and a previously reported descriptor operating in the spatial frequency domain (M2), describing the temporal kinetics of the post-blink tear film. A set of 12 regression parameters was extracted and analyzed for classification purposes. The classifiers are assessed in terms of receiver operating characteristics and the areas under their curves (AUC). The computational loads are also estimated. The maximum AUC of 82.4% was achieved for M2, closely followed by the binary box-counting (BBC) FD estimator with AUC = 78.6%. For all descriptors, statistically significant differences between the subject groups were found (p < 0.05). The BBC FD estimator had the highest empirical computational efficiency, about 30% faster than that of M2, while the estimator based on differential box-counting exhibited the lowest efficiency (4.5 times slower than the best one). In conclusion, FD estimators can be utilized for quantitative assessment of tear film kinetics. They provide a viable alternative to the previously used spectral descriptor, and at the same time allow higher computational efficiency.
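
    A generic binary box-counting FD estimator of the kind referenced (a textbook implementation; the LSI-specific preprocessing that produces the binary image is omitted):

    ```python
    import numpy as np

    def box_counting_fd(binary_img, sizes=(2, 4, 8, 16, 32)):
        """Estimate fractal dimension as the slope of log N(s) vs log(1/s),
        where N(s) counts boxes of side s containing any foreground pixel."""
        counts = []
        for s in sizes:
            h = (binary_img.shape[0] // s) * s          # crop to a multiple of s
            w = (binary_img.shape[1] // s) * s
            blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes at scale s
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(0)
    print(box_counting_fd(rng.random((256, 256)) > 0.5))  # dense noise -> FD near 2
    ```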

  6. Application of Zernike polynomials towards accelerated adaptive focusing of transcranial high intensity focused ultrasound.

    PubMed

    Kaye, Elena A; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts

    2012-10-01

To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. The five phase aberration data sets analyzed here were calculated based on preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., "MR-guided adaptive focusing of ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734-1747 (2010)] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients' phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing an initial phase correction estimate based on previous patients' data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Covariance of the pairs of phase aberration data sets showed high correlation between the aberration data of several patients and suggested that subgroups can be formed based on level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of the nonaberrated intensity using fewer than 170 modes of ZPs. The initial estimates based on the average of the phase aberration data from the individual subgroups of subjects were shown to increase the intensity at the focal spot for the five subjects. The application of ZPs to phase aberration correction was shown to be beneficial for adaptive focusing of transcranial ultrasound. The skull-based phase aberrations were found to be well approximated by a number of ZP modes representing only a fraction of the number of elements in the hemispherical transducer. Implementing the initial phase aberration estimate together with the Zernike-based algorithm can improve robustness and can potentially greatly increase the viability of MR-ARFI-based focusing for clinical transcranial MRgFUS therapy.

  7. Application of Zernike polynomials towards accelerated adaptive focusing of transcranial high intensity focused ultrasound

    PubMed Central

    Kaye, Elena A.; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts

    2012-01-01

    Purpose: To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. Methods: The five phase aberration data sets analyzed here were calculated based on preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., “MR-guided adaptive focusing of ultrasound,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734–1747 (2010); doi:10.1109/TUFFC.2010.1612] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients’ phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing an initial phase correction estimate based on previous patients’ data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Results: Covariance of the pairs of phase aberration data sets showed high correlation between the aberration data of several patients and suggested that subgroups can be formed based on the level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieved 90% of the nonaberrated intensity using fewer than 170 ZP modes. Initial estimates based on the average of the phase aberration data from the individual subgroups of subjects were shown to increase the intensity at the focal spot for the five subjects. Conclusions: The application of ZPs to phase aberration correction was shown to be beneficial for adaptive focusing of transcranial ultrasound. The skull-based phase aberrations were found to be well approximated by a number of ZP modes representing only a fraction of the number of elements in the hemispherical transducer. Implementing the initial phase aberration estimate together with the Zernike-based algorithm can improve robustness and potentially greatly increase the viability of MR-ARFI-based focusing for clinical transcranial MRgFUS therapy. PMID:23039661

  8. A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data

    NASA Technical Reports Server (NTRS)

    Barnes, J. R.

    1993-01-01

    Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will be extensively sampling middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to the estimation of the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model, based upon one previously developed and tested with earth satellite temperature data, will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation, the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures; the latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts as well as in more theoretical studies.

  9. Color-magnitude diagrams for six metal-rich, low-latitude globular clusters

    NASA Technical Reports Server (NTRS)

    Armandroff, Taft E.

    1988-01-01

    Colors and magnitudes for stars on CCD frames of six metal-rich, low-latitude, previously unstudied globular clusters and one well-studied, metal-rich cluster (47 Tuc) have been derived, and color-magnitude diagrams have been constructed. The photometry for stars in 47 Tuc is in good agreement with previous studies, while the V magnitudes of the horizontal-branch stars in the six program clusters do not agree with estimates based on secondary methods. The distances to these clusters therefore differ from prior estimates. Reddening values are derived for each program cluster. The horizontal branches of the program clusters all appear to lie entirely redwards of the red edge of the instability strip, as is normal for their metallicities.

  10. Asteroid mass estimation with Markov-chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Siltala, Lauri; Granvik, Mikael

    2017-10-01

    Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem at minimum where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid by fitting their trajectories to their observed positions. The fitting has typically been carried out with linearized methods such as the least-squares method. These methods need to make certain assumptions regarding the shape of the probability distributions of the model parameters. This is problematic as these assumptions have not been validated. We have developed a new Markov-chain Monte Carlo method for mass estimation which does not require an assumption regarding the shape of the parameter distribution. Recently, we have implemented several upgrades to our MCMC method including improved schemes for handling observational errors and outlier data alongside the option to consider multiple perturbers and/or test asteroids simultaneously. These upgrades promise significantly improved results: based on two separate results for (19) Fortuna with different test asteroids we previously hypothesized that simultaneous use of both test asteroids would lead to an improved result similar to the average literature value for (19) Fortuna with substantially reduced uncertainties. Our upgraded algorithm indeed finds a result essentially equal to the literature value for this asteroid, confirming our previous hypothesis. Here we show these new results for (19) Fortuna and other example cases, and compare our results to previous estimates. Finally, we discuss our plans to improve our algorithm further, particularly in connection with Gaia.
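
    A minimal random-walk Metropolis sketch of the sampling step, in Python; the log-likelihood would come from the observed-minus-computed astrometric residuals of the test asteroid under an orbit propagation with the proposed mass, stubbed out here as a user-supplied callable:

        import numpy as np

        def metropolis(log_like, x0, step, n, seed=0):
            # Random-walk Metropolis over one parameter (e.g. the
            # perturber's mass); the full inverse problem samples the
            # mass plus the twelve orbital elements the same way, just
            # in more dimensions.
            rng = np.random.default_rng(seed)
            chain = np.empty(n)
            x, ll = x0, log_like(x0)
            for i in range(n):
                prop = x + step * rng.standard_normal()
                ll_prop = log_like(prop)
                if np.log(rng.random()) < ll_prop - ll:   # accept/reject
                    x, ll = prop, ll_prop
                chain[i] = x
            return chain

    Unlike a linearized least-squares fit, the histogram of the chain is the posterior itself, so no assumption about the shape of the parameter distribution is needed.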

  11. Methodology of automated ionosphere front velocity estimation for ground-based augmentation of GNSS

    NASA Astrophysics Data System (ADS)

    Bang, Eugene; Lee, Jiyun

    2013-11-01

    Ionospheric anomalies occurring during severe ionospheric storms can pose integrity threats to Global Navigation Satellite System (GNSS) Ground-Based Augmentation Systems (GBAS). Ionospheric anomaly threat models for each region of operation need to be developed to analyze the potential impact of these anomalies on GBAS users and to develop mitigation strategies. Along with the magnitude of ionospheric gradients, the speed of the ionosphere "fronts" in which these gradients are embedded is an important parameter for simulation-based GBAS integrity analysis. This paper presents a methodology for automated ionosphere front velocity estimation which will be used to analyze a vast amount of ionospheric data, build ionospheric anomaly threat models for different regions, and monitor ionospheric anomalies continuously going forward. The procedure automatically selects stations that show a similar trend of ionospheric delays, computes the orientation of detected fronts using a three-station-based trigonometric method, and estimates front speeds using a two-station-based method. It also includes fine-tuning methods that make the estimation robust against faulty measurements and modeling errors. The performance of the algorithm is demonstrated by comparing the results of automated speed estimation to those computed manually in previous work. All speed estimates from the automated algorithm fall within error bars of ±30% of the manually computed speeds. In addition, the algorithm is used to populate the current threat space with newly generated threat points. A larger number of velocity estimates helps us to better understand the behavior of ionospheric gradients under geomagnetic storm conditions.
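
    As a simplified sketch of the geometry, a planar front crossing three or more stations can be fit in a single least-squares solve for the slowness vector, collapsing the paper's three-station orientation step and two-station speed step into one; the coordinate conventions below are assumptions:

        import numpy as np

        def front_velocity(xy, t):
            # xy: (n, 2) station coordinates (km east/north);
            # t:  (n,)   times the front crossed each station (s).
            # Fit t_i = t0 + p . x_i; the front propagates along the
            # slowness vector p with speed 1/|p|.
            A = np.column_stack([np.ones(len(t)), xy])
            t0, px, py = np.linalg.lstsq(A, t, rcond=None)[0]
            p = np.array([px, py])
            speed = 1.0 / np.linalg.norm(p)                   # km/s
            azimuth = np.degrees(np.arctan2(px, py)) % 360.0  # from north
            return speed, azimuth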

  12. Systemic Thinking: Enhancing Intelligence Preparation and Estimates

    DTIC Science & Technology

    2010-04-30

    informally based on previous combat experience of the staff participants. 47 Peter Checkland, Systems Thinking, Systems Practice (John Wiley & Sons...); http://journals.isss.org/index.php/proceedings52nd/article/view/1032/322 (accessed 23 March 2010).

  13. Product Deformulation to Inform High-throughput Exposure Predictions (SOT)

    EPA Science Inventory

    The health risks posed by the thousands of chemicals in our environment depend on both chemical hazard and exposure. However, relatively few chemicals have estimates of exposure intake, limiting the understanding of risks. We have previously developed a heuristics-based exposur...

  14. An evaluation of study design for estimating a time-of-day noise weighting

    NASA Technical Reports Server (NTRS)

    Fields, J. M.

    1986-01-01

    The relative importance of daytime and nighttime noise of the same noise level is represented by a time-of-day weight in noise annoyance models. The high correlations between daytime and nighttime noise were regarded as a major reason that previous social surveys of noise annoyance could not accurately estimate the value of the time-of-day weight. Study designs which would reduce the correlation between daytime and nighttime noise are described. It is concluded that designs based on short term variations in nighttime noise levels would not be able to provide valid measures of response to nighttime noise. The accuracy of the estimate of the time-of-day weight is predicted for designs which are based on long term variations in nighttime noise levels. For these designs it is predicted that it is not possible to form satisfactorily precise estimates of the time-of-day weighting.

  15. Estimating population size of Pygoscelid Penguins from TM data

    NASA Technical Reports Server (NTRS)

    Olson, Charles E., Jr.; Schwaller, Mathew R.; Dahmer, Paul A.

    1987-01-01

    A step was taken toward a continent-wide population estimate of penguins. The results indicate that Thematic Mapper data can be used to identify penguin rookeries due to the unique reflectance properties of guano. Strong correlations exist between nesting populations and the rookery area occupied by the birds. These correlations allow estimation of the number of nesting pairs in colonies. The success of the remote sensing and biometric analyses suggests that a continent-wide estimate of penguin populations is possible, based on a timely sample employing ground-based and remote sensing techniques. Satellite remote sensing along the coastline may well locate previously undiscovered penguin nesting sites, or locate rookeries which have been assumed to exist for over half a century but never located. Observations that penguins are one of the most sensitive elements in the complex of Southern Ocean ecosystems motivated this study.

  16. [Dual process in large number estimation under uncertainty].

    PubMed

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  17. Estimation of Dynamic Friction Process of the Akatani Landslide Based on the Waveform Inversion and Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mangeney, A.; Moretti, L.; Matsushi, Y.

    2014-12-01

    Understanding physical parameters, such as frictional coefficients, velocity change, and dynamic history, is an important issue for assessing and managing the risks posed by deep-seated catastrophic landslides. Previously, landslide motion has been inferred qualitatively from topographic changes caused by the event, and occasionally from eyewitness reports. However, these conventional approaches are unable to evaluate source processes and dynamic parameters. In this study, we use broadband seismic recordings to trace the dynamic process of the deep-seated Akatani landslide that occurred on the Kii Peninsula, Japan, one of the best recorded large slope failures. Based on previous waveform inversions and precise topographic surveys made before and after the event, we performed numerical simulations using the SHALTOP numerical model (Mangeney et al., 2007). This model describes homogeneous continuous granular flows on a 3D topography based on a depth-averaged thin layer approximation. We assume a Coulomb friction law with a constant friction coefficient, i.e., friction independent of the sliding velocity. We varied the friction coefficient in the simulation so that the resulting force acting on the surface agrees with the single force estimated from the seismic waveform inversion. The figure shows the force history of the east-west components after band-pass filtering between 10 and 100 s. The force history of the simulation with friction coefficient 0.27 (thin red line) agrees best with the result of the seismic waveform inversion (thick gray line). Although the amplitudes differ slightly, the phases are coherent for the main three pulses, evidence that the point-source approximation works reasonably well for this particular event. The friction coefficient during sliding was estimated at 0.38 based on the seismic waveform inversion performed in a previous study and on the sliding block model (Yamada et al., 2013), whereas the friction coefficient estimated from the numerical simulation was about 0.27. This discrepancy may be due to the digital elevation model, or to other forces such as pressure gradients and centrifugal acceleration included in the model. However, quantitative interpretation of this difference requires further investigation.
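
    As a toy illustration of the sliding-block view used in the comparison above (Coulomb friction with a constant coefficient), the along-slope velocity history follows directly from the force balance; the slope angle below is made up and is not the Akatani geometry:

        import numpy as np

        def sliding_block(theta_deg, mu, t_end=60.0, dt=0.1, g=9.81):
            # Rigid block on a slope: a = g*(sin(theta) - mu*cos(theta))
            # while sliding; with mu >= tan(theta) the block stays at rest.
            theta = np.radians(theta_deg)
            a = g * (np.sin(theta) - mu * np.cos(theta))
            t = np.arange(0.0, t_end, dt)
            v = np.maximum(a * t, 0.0)
            return t, v

        t, v_sim = sliding_block(25.0, 0.27)   # simulation-derived friction
        _, v_inv = sliding_block(25.0, 0.38)   # inversion-derived friction

    The lower coefficient produces visibly larger accelerations, which is why the two estimates imply distinguishable force histories.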

  18. Testing survey-based methods for rapid monitoring of child mortality, with implications for summary birth history data.

    PubMed

    Brady, Eoghan; Hill, Kenneth

    2017-01-01

    Under-five mortality estimates are increasingly used in low and middle income countries to target interventions and measure performance against global development goals. Two new methods to rapidly estimate under-5 mortality based on Summary Birth Histories (SBH) were described in a previous paper and tested with the data then available. This analysis tests the methods using data appropriate to each method from five countries that lack vital registration systems. SBH data are collected across many countries through censuses and surveys, and indirect methods often rely upon their quality to estimate mortality rates. The Birth History Imputation method imputes data from a recent Full Birth History (FBH) onto the birth, death and age distribution of the SBH to produce estimates based on the resulting distribution of child mortality. DHS FBHs and MICS SBHs are used for all five countries. In the implementation, 43 of 70 estimates are within 20% of validation estimates (61%); the mean absolute relative error is 17.7%. One of seven countries produces acceptable estimates. The Cohort Change method considers the differences in births and deaths between repeated SBHs at 1- or 2-year intervals to estimate the mortality rate in that period. SBHs are taken from Brazil's PNAD surveys 2004-2011 and validated against IGME estimates. Two of ten estimates are within 10% of validation estimates, and the mean absolute relative error is greater than 100%. Appropriate testing of these new methods demonstrates that they do not produce sufficiently good estimates based on the data available. We conclude this is due to the poor quality of most SBH data included in the study. This has wider implications for the next round of censuses and future household surveys across many low- and middle-income countries.

  19. Adaptive tracking of a time-varying field with a quantum sensor

    NASA Astrophysics Data System (ADS)

    Bonato, Cristian; Berry, Dominic W.

    2017-05-01

    Sensors based on single spins can enable magnetic-field detection with very high sensitivity and spatial resolution. Previous work has concentrated on sensing of a constant magnetic field or a periodic signal. Here, we instead investigate the problem of estimating a field with nonperiodic variation described by a Wiener process. We propose and study, by numerical simulations, an adaptive tracking protocol based on Bayesian estimation. The tracking protocol updates the probability distribution for the magnetic field based on measurement outcomes and adapts the choice of sensing time and phase in real time. By taking the statistical properties of the signal into account, our protocol strongly reduces the required measurement time. This leads to a reduction of the error in the estimation of a time-varying signal by up to a factor of four compared with protocols that do not take this information into account.
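
    A minimal grid-based sketch of such a protocol in Python, assuming a Ramsey-type measurement with outcome probability P(0 | B) = (1 + cos(B*tau + phi))/2 and a field drifting as a Wiener process (all names and units here are illustrative):

        import numpy as np

        def bayes_update(prior, b_grid, outcome, tau, phi):
            # Condition the field distribution on one measurement outcome.
            p0 = 0.5 * (1.0 + np.cos(b_grid * tau + phi))
            post = prior * (p0 if outcome == 0 else 1.0 - p0)
            return post / post.sum()

        def wiener_diffuse(post, b_grid, sigma_w, dt):
            # Between measurements the field drifts, dB ~ N(0, sigma_w^2*dt),
            # so convolve the distribution with the matching Gaussian kernel.
            db = b_grid[1] - b_grid[0]
            w = sigma_w * np.sqrt(dt) / db          # width in grid cells
            half = max(1, int(np.ceil(4.0 * w)))
            k = np.exp(-0.5 * (np.arange(-half, half + 1) / w) ** 2)
            post = np.convolve(post, k / k.sum(), mode="same")
            return post / post.sum()

    The adaptive part then picks the next tau and phi from the current distribution (e.g. to maximally shrink the expected uncertainty), which is where the reduction in measurement time comes from.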

  20. Comparison of eating quality and physicochemical properties between Japanese and Chinese rice cultivars.

    PubMed

    Nakamura, Sumiko; Cui, Jing; Zhang, Xin; Yang, Fan; Xu, Ximing; Sheng, Hua; Ohtsubo, Ken'ichi

    2016-12-01

    In this study, we evaluated 16 Japanese and Chinese rice cultivars in terms of their main chemical components, iodine absorption curve, apparent amylose content (AAC), pasting properties, resistant starch content, physical properties, sodium dodecyl sulfate-polyacrylamide gel electrophoresis analysis, and enzyme activity. Based on these quality evaluations, we concluded that Chinese rice varieties are characterized by high protein content and a cooked-grain texture of high hardness and low stickiness. In a previous study, we developed a novel formula for estimating AAC based on the iodine absorption curve. The validation test showed a determination coefficient of 0.996 for estimating the AAC of Chinese rice cultivars as unknown samples. In the present study, we developed a novel formula for estimating the balance degree of the surface layer of cooked rice (A3/A1: the ratio of the workloads of stickiness and hardness) based on the iodine absorption curve obtained using milled rice.

  1. Is the northern high latitude land-based CO2 sink weakening?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mcguire, David; Kicklighter, David W.; Gurney, Kevin R

    2011-01-01

    Studies indicate that, historically, terrestrial ecosystems of the northern high latitude region may have been responsible for up to 60% of the global net land-based sink for atmospheric CO2. However, these regions have recently experienced remarkable modification of the major driving forces of the carbon cycle, including surface air temperature warming that is significantly greater than the global average and associated increases in the frequency and severity of disturbances. Whether arctic tundra and boreal forest ecosystems will continue to sequester atmospheric CO2 in the face of these dramatic changes is unknown. Here we show the results of model simulations that estimate a 41 Tg C yr-1 sink in the boreal land regions from 1997 to 2006, which represents a 73% reduction in the strength of the sink estimated for previous decades in the late 20th Century. Our results suggest that CO2 uptake by the region in previous decades may not be as strong as previously estimated. The recent decline in sink strength is the combined result of 1) weakening sinks due to warming-induced increases in soil organic matter decomposition and 2) strengthening sources from pyrogenic CO2 emissions as a result of the substantial area of boreal forest burned in wildfires across the region in recent years. Such changes create positive feedbacks to the climate system that accelerate global warming, putting further pressure on emission reductions to achieve atmospheric stabilization targets.

  2. Is the northern high-latitude land-based CO2 sink weakening?

    USGS Publications Warehouse

    Hayes, D.J.; McGuire, A.D.; Kicklighter, D.W.; Gurney, K.R.; Burnside, T.J.; Melillo, J.M.

    2011-01-01

    Studies indicate that, historically, terrestrial ecosystems of the northern high-latitude region may have been responsible for up to 60% of the global net land-based sink for atmospheric CO2. However, these regions have recently experienced remarkable modification of the major driving forces of the carbon cycle, including surface air temperature warming that is significantly greater than the global average and associated increases in the frequency and severity of disturbances. Whether Arctic tundra and boreal forest ecosystems will continue to sequester atmospheric CO2 in the face of these dramatic changes is unknown. Here we show the results of model simulations that estimate a 41 Tg C yr-1 sink in the boreal land regions from 1997 to 2006, which represents a 73% reduction in the strength of the sink estimated for previous decades in the late 20th century. Our results suggest that CO2 uptake by the region in previous decades may not be as strong as previously estimated. The recent decline in sink strength is the combined result of (1) weakening sinks due to warming-induced increases in soil organic matter decomposition and (2) strengthening sources from pyrogenic CO2 emissions as a result of the substantial area of boreal forest burned in wildfires across the region in recent years. Such changes create positive feedbacks to the climate system that accelerate global warming, putting further pressure on emission reductions to achieve atmospheric stabilization targets. Copyright 2011 by the American Geophysical Union.

  3. MOLA Topography of Small Volcanoes in Tempe Terra and Ceraunius Fossae, Mars: Implications for Eruptive Styles

    NASA Technical Reports Server (NTRS)

    Wong, M. P.; Sakimoto, S. E. H.; Garvin, J. B.

    2001-01-01

    We use Mars Orbiter Laser Altimeter (MOLA) data to measure small volcanoes in the Tempe Terra and Ceraunius Fossae regions of Mars. We find that previous geometry estimates based on imagery alone are inaccurate, but MOLA data support image-based interpretations of eruptive style. Additional information is contained in the original extended abstract.

  4. The volume and mean depth of Earth's lakes

    NASA Astrophysics Data System (ADS)

    Cael, B. B.; Heathcote, A. J.; Seekell, D. A.

    2017-01-01

    Global lake volume estimates are scarce, highly variable, and poorly documented. We developed a rigorous method for estimating global lake depth and volume based on the Hurst coefficient of Earth's surface, which provides a mechanistic connection between lake area and volume. Volume-area scaling based on the Hurst coefficient is accurate and consistent when applied to lake data sets spanning diverse regions. We applied these relationships to a global lake area census to estimate global lake volume and depth. The volume of Earth's lakes is 199,000 km3 (95% confidence interval 196,000-202,000 km3). This volume is in the range of historical estimates (166,000-280,000 km3), but the overall mean depth of 41.8 m (95% CI 41.2-42.4 m) is significantly lower than previous estimates (62-151 m). These results highlight and constrain the relative scarcity of lake waters in the hydrosphere and have implications for the role of lakes in global biogeochemical cycles.

  5. PoMo: An Allele Frequency-Based Approach for Species Tree Estimation

    PubMed Central

    De Maio, Nicola; Schrempf, Dominik; Kosiol, Carolin

    2015-01-01

    Incomplete lineage sorting can cause incongruencies of the overall species-level phylogenetic tree with the phylogenetic trees for individual genes or genomic segments. If these incongruencies are not accounted for, it is possible to incur several biases in species tree estimation. Here, we present a simple maximum likelihood approach that accounts for ancestral variation and incomplete lineage sorting. We use a POlymorphisms-aware phylogenetic MOdel (PoMo) that we have recently shown to efficiently estimate mutation rates and fixation biases from within- and between-species variation data. We extend this model to perform efficient estimation of species trees. We test the performance of PoMo in several different scenarios of incomplete lineage sorting using simulations and compare it with existing methods in both accuracy and computational speed. In contrast to other approaches, our model does not use coalescent theory but is allele frequency based. We show that PoMo is well suited for genome-wide species tree estimation and that on such data it is more accurate than previous approaches. PMID:26209413

  6. Comparison of anatomical, functional and regression methods for estimating the rotation axes of the forearm.

    PubMed

    Fraysse, François; Thewlis, Dominic

    2014-11-07

    Numerous methods exist to estimate the pose of the axes of rotation of the forearm. These include anatomical definitions, such as the conventions proposed by the ISB, and functional methods based on instantaneous helical axes, which are commonly accepted as the modelling gold standard for non-invasive, in-vivo studies. We investigated the validity of a third method, based on regression equations, to estimate the rotation axes of the forearm. We also assessed the accuracy of both ISB methods. Axes obtained from a functional method were considered as the reference. Results indicate a large inter-subject variability in the axes positions, in accordance with previous studies. Both ISB methods gave the same level of accuracy in axes position estimations. Regression equations seem to improve estimation of the flexion-extension axis but not the pronation-supination axis. Overall, given the large inter-subject variability, the use of regression equations cannot be recommended. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. New distributed fusion filtering algorithm based on covariances over sensor networks with random packet dropouts

    NASA Astrophysics Data System (ADS)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2017-07-01

    This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters are obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.
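
    A much-simplified single-sensor sketch of the dropout handling, assuming a known scalar state-space model (the paper's scheme is covariance-based and needs no such model; this only shows where the lost packets enter):

        import numpy as np

        def local_filter(z, arrived, a, c, q, r):
            # Scalar Kalman-type local filter. When a packet is lost
            # (arrived[k] is False) the update is skipped and the
            # one-step prediction stands in for the lost output.
            x, P, est = 0.0, 1.0, []
            for k, zk in enumerate(z):
                x, P = a * x, a * a * P + q                # time update
                if arrived[k]:                             # measurement update
                    K = P * c / (c * c * P + r)
                    x, P = x + K * (zk - c * x), (1.0 - K * c) * P
                est.append(x)
            return np.array(est)

        rng = np.random.default_rng(1)
        arrived = rng.random(200) < 0.8    # Bernoulli arrivals, 20% loss

    The distributed fusion step would then weight such local estimates by their cross-covariances, which the algorithm in the paper derives recursively.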

  8. Yield estimation of sugarcane based on agrometeorological-spectral models

    NASA Technical Reports Server (NTRS)

    Rudorff, Bernardo Friedrich Theodor; Batista, Getulio Teixeira

    1990-01-01

    This work has the objective to assess the performance of a yield estimation model for sugarcane (Saccharum officinarum). The model uses orbitally gathered spectral data along with yield estimated from an agrometeorological model. The test site includes the sugarcane plantations of the Barra Grande Plant located in Lencois Paulista municipality in Sao Paulo State. Production data for four crop years were analyzed. Yields observed in the first crop year (1983/84) were regressed against spectral and agrometeorological data of that same year. This provided a model to predict the yield for the following crop year, i.e., 1984/85. The models to predict the yields of subsequent years (up to 1987/88) were developed similarly, incorporating all previous years' data. The yield estimates obtained from these models explained 69, 54, and 50 percent of the yield variation in the 1984/85, 1985/86, and 1986/87 crop years, respectively. The accuracy of yield estimates based on spectral data only (vegetation index model) and on agrometeorological data only (agrometeorological model) was also investigated.

  9. 2-D Myocardial Deformation Imaging Based on RF-Based Nonrigid Image Registration.

    PubMed

    Chakraborty, Bidisha; Liu, Zhi; Heyde, Brecht; Luo, Jianwen; D'hooge, Jan

    2018-06-01

    Myocardial deformation imaging is a well-established echocardiographic technique for the assessment of myocardial function. Although some solutions make use of speckle tracking of the reconstructed B-mode images, others apply block matching (BM) on the underlying radio frequency (RF) data in order to increase sensitivity to small interframe motion and deformation. However, for both approaches, lateral motion estimation remains a challenge due to the relatively poor lateral resolution of the ultrasound image in combination with the lack of phase information in this direction. To address this, nonrigid image registration (NRIR) of B-mode images has previously been proposed as an attractive solution; however, the advantages of RF-based tracking were thereby lost. The aim of this paper was, therefore, to develop an NRIR motion estimator adapted to RF data sets. The accuracy of this estimator was quantified using synthetic data and was contrasted against a state-of-the-art BM solution. The results show that RF-based NRIR outperforms BM in terms of tracking accuracy, particularly, as hypothesized, in the lateral direction. Finally, this RF-based NRIR algorithm was applied clinically, illustrating its ability to estimate both in-plane velocity components in vivo.

  10. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging, used to improve the signal to noise ratio (SNR) and the contrast of phase retardation (or birefringence) images, introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a-posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of the anterior and posterior eye segments as well as skin. The new estimator shows superior performance and clearer image contrast.
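
    A minimal sketch of the table-lookup form of such an estimator, assuming the conditional PDF has been tabulated beforehand (the grids, index order, and flat prior are assumptions, not the paper's exact construction):

        import numpy as np

        def map_retardation(meas_ret, meas_snr, pdf,
                            true_grid, snr_grid, meas_grid):
            # pdf[i, j, k] ~ P(measured retardation k | true value i, SNR j),
            # pre-computed by Monte-Carlo simulation of the signal model.
            # With a flat prior, the MAP estimate is the true value that
            # maximizes the likelihood of the observed (retardation, SNR).
            j = int(np.abs(snr_grid - meas_snr).argmin())
            k = int(np.abs(meas_grid - meas_ret).argmin())
            return true_grid[pdf[:, j, k].argmax()]

    Because the lookup maps the noisy measurement to the most probable true value at that SNR, the noise bias of plain averaging is avoided.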

  11. Impact of air temperature on physically-based maximum precipitation estimation through change in moisture holding capacity of air

    NASA Astrophysics Data System (ADS)

    Ishida, K.; Ohara, N.; Kavvas, M. L.; Chen, Z. Q.; Anderson, M. L.

    2018-01-01

    The impact of air temperature on maximum precipitation (MP) estimation, through changes in the moisture holding capacity of air, was investigated. A series of previous studies estimated the MP of 72-h basin-average precipitation over the American River watershed (ARW) in Northern California by means of an MP estimation approach that utilizes a physically-based regional atmospheric model. For the MP estimation, they selected 61 severe storm events for the ARW and maximized them by means of the atmospheric boundary condition shifting (ABCS) and relative humidity maximization (RHM) methods. This study conducted two types of numerical experiments in addition to the MP estimation of the previous studies. First, the air temperature on the entire lateral boundaries of the outer model domain was increased uniformly by 0.0-8.0 °C in 0.5 °C increments, in addition to applying the ABCS + RHM method, for the two severest maximized historical storm events, to investigate the sensitivity of the basin-average precipitation over the ARW to air temperature rise. A monotonic increase in the maximum 72-h basin-average precipitation over the ARW with air temperature rise was found for both storm events. The second numerical experiment used specific amounts of air temperature rise assumed to occur under future climate change conditions; air temperature was increased by these amounts uniformly on the entire lateral boundaries, in addition to applying the ABCS + RHM method, to investigate the impact of air temperature on the MP estimate over the ARW under a changing climate. The results of the second numerical experiment show that temperature increases in the future climate may amplify the MP estimate over the ARW: the MP estimate may increase by 14.6% in the middle of the 21st century and by 27.3% at the end of the 21st century compared to the historical period.

  12. Data Anonymization that Leads to the Most Accurate Estimates of Statistical Characteristics: Fuzzy-Motivated Approach

    PubMed Central

    Xiang, G.; Ferson, S.; Ginzburg, L.; Longpré, L.; Mayorga, E.; Kosheleva, O.

    2013-01-01

    To preserve privacy, the original data points (with exact values) are replaced by boxes containing each (inaccessible) data point. This privacy-motivated uncertainty leads to uncertainty in the statistical characteristics computed based on this data. In a previous paper, we described how to minimize this uncertainty under the assumption that we use the same standard statistical estimates for the desired characteristics. In this paper, we show that we can further decrease the resulting uncertainty if we allow fuzzy-motivated weighted estimates, and we explain how to optimally select the corresponding weights. PMID:25187183

  13. Point Cloud Based Approach to Stem Width Extraction of Sorghum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Jihui; Zakhor, Avideh

    A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass in a standard field. The algorithm applies a two-step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud, and an orientation-based filter to segment out and refine individual stems for width estimation. Detected stems that are split due to occlusions are merged and then registered with stems found in previous camera frames in order to track temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.

  14. Point Cloud Based Approach to Stem Width Extraction of Sorghum

    DOE PAGES

    Jin, Jihui; Zakhor, Avideh

    2017-01-29

    A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass in a standard field. The algorithm applies a two-step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud, and an orientation-based filter to segment out and refine individual stems for width estimation. Detected stems that are split due to occlusions are merged and then registered with stems found in previous camera frames in order to track temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.

  15. Carrying Position Independent User Heading Estimation for Indoor Pedestrian Navigation with Smartphones

    PubMed Central

    Deng, Zhi-An; Wang, Guofeng; Hu, Ying; Cui, Yang

    2016-01-01

    This paper proposes a novel heading estimation approach for indoor pedestrian navigation using the built-in inertial sensors on a smartphone. Unlike previous approaches constraining the carrying position of a smartphone on the user's body, our approach gives the user greater freedom by implementing automatic recognition of the device carrying position and subsequent selection of an optimal strategy for heading estimation. We first predetermine the motion state by a decision tree using an accelerometer and a barometer. Then, to enable accurate and computationally lightweight carrying position recognition, we combine a position classifier with a novel position transition detection algorithm, which may also be used to avoid confusion between position transitions and user turns during pedestrian walking. For a device placed in the trouser pockets or held in a swinging hand, the heading estimation is achieved by deploying a principal component analysis (PCA)-based approach. For a device held in the hand or against the ear during a phone call, user heading is directly estimated by adding the yaw angle of the device to the related heading offset. Experimental results show that our approach can automatically detect carrying positions with high accuracy, and outperforms previous heading estimation approaches in terms of accuracy and applicability. PMID:27187391
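
    A minimal sketch of the PCA step for the pocket and swinging-hand cases, assuming accelerations already rotated into a level (earth) frame; resolving the 180-degree ambiguity of the principal axis is a separate step:

        import numpy as np

        def pca_heading(acc_xy):
            # acc_xy: (n, 2) horizontal accelerations over a short window.
            # The dominant oscillation axis during walking aligns with the
            # direction of travel, so take the leading eigenvector of the
            # covariance of the de-meaned samples.
            centered = acc_xy - acc_xy.mean(axis=0)
            vals, vecs = np.linalg.eigh(np.cov(centered.T))
            v = vecs[:, vals.argmax()]
            return np.degrees(np.arctan2(v[0], v[1])) % 180.0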

  16. Occupational COPD and job exposure matrices: a systematic review and meta-analysis

    PubMed Central

    Sadhra, Steven; Kurmi, Om P; Sadhra, Sandeep S; Lam, Kin Bong Hubert; Ayres, Jon G

    2017-01-01

    Background The association between occupational exposure and COPD reported previously has mostly been derived from studies relying on self-reported exposure to vapors, gases, dust, or fumes (VGDF), which can be subjective and prone to biases. The aim of this study was to assess the strength of the association between exposure and COPD in studies that derived exposure from job exposure matrices (JEMs). Methods A systematic search of JEM-based occupational COPD studies published between 1980 and 2015 was conducted in PubMed and EMBASE, followed by meta-analysis. Meta-analysis was performed using a random-effects model, with results presented as a pooled effect estimate with 95% confidence intervals (CIs). Study quality (risk of bias and confounding) was assessed with 13 RTI questionnaire items. Heterogeneity between studies and its possible sources were assessed by the Egger test and meta-regression, respectively. Results In all, 61 studies were identified and 29 were included in the meta-analysis. Based on JEM-based studies, there was a 22% (pooled odds ratio = 1.22; 95% CI 1.18–1.27) increased risk of COPD among those exposed to airborne pollutants arising from occupation. Comparatively higher risk estimates were obtained for general-population JEMs (based on expert consensus) than for workplace-based JEMs derived using measured exposure data (1.26; 1.20–1.33 vs 1.14; 1.10–1.19). Higher risk estimates were also obtained for self-reported exposure to VGDF than for JEM-based exposure to VGDF (1.91; 1.72–2.13 vs 1.10; 1.06–1.24). Dusts, particularly biological dusts (1.33; 1.17–1.51), had the highest risk estimates for COPD. Although the majority of occupational COPD studies focus on dusty environments, no difference in risk estimates was found across the common forms of occupational airborne pollutants. Conclusion Our findings highlight the need to interpret previous studies with caution, as self-reported exposure to VGDF may have overestimated the risk of occupational COPD. PMID:28260879
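
    A minimal sketch of the pooling step, using the standard DerSimonian-Laird random-effects model on per-study odds ratios and 95% CIs (the paper's exact software settings are not stated here):

        import numpy as np

        def random_effects_pool(or_vals, ci_lo, ci_hi):
            # Work on the log-OR scale; back out standard errors from CIs.
            y = np.log(or_vals)
            se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)
            w = 1.0 / se ** 2
            y_fixed = (w * y).sum() / w.sum()
            Q = (w * (y - y_fixed) ** 2).sum()          # heterogeneity
            tau2 = max(0.0, (Q - (len(y) - 1)) /
                       (w.sum() - (w ** 2).sum() / w.sum()))
            wr = 1.0 / (se ** 2 + tau2)                 # random-effects weights
            y_re = (wr * y).sum() / wr.sum()
            se_re = 1.0 / np.sqrt(wr.sum())
            return (np.exp(y_re),
                    np.exp(y_re - 1.96 * se_re),
                    np.exp(y_re + 1.96 * se_re))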

  17. Weighted bi-prediction for light field image coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2017-09-01

    Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently risen as a practical and prospective approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion compensated bi-prediction have suggested that further rate-distortion performance improvements are possible by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance of HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that the previous theoretical conclusions extend to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared with the previous self-similarity bi-prediction scheme.

  18. Gene genealogies for genetic association mapping, with application to Crohn's disease

    PubMed Central

    Burkett, Kelly M.; Greenwood, Celia M. T.; McNeney, Brad; Graham, Jinko

    2013-01-01

    A gene genealogy describes relationships among haplotypes sampled from a population. Knowledge of the gene genealogy for a set of haplotypes is useful for estimation of population genetic parameters and it also has potential application in finding disease-predisposing genetic variants. As the true gene genealogy is unknown, Markov chain Monte Carlo (MCMC) approaches have been used to sample genealogies conditional on data at multiple genetic markers. We previously implemented an MCMC algorithm to sample from an approximation to the distribution of the gene genealogy conditional on haplotype data. Our approach samples ancestral trees, recombination and mutation rates at a genomic focal point. In this work, we describe how our sampler can be used to find disease-predisposing genetic variants in samples of cases and controls. We use a tree-based association statistic that quantifies the degree to which case haplotypes are more closely related to each other around the focal point than control haplotypes, without relying on a disease model. As the ancestral tree is a latent variable, so is the tree-based association statistic. We show how the sampler can be used to estimate the posterior distribution of the latent test statistic and corresponding latent p-values, which together comprise a fuzzy p-value. We illustrate the approach on a publicly available dataset from a study of Crohn's disease that consists of genotypes at multiple SNP markers in a small genomic region. We estimate the posterior distribution of the tree-based association statistic and the recombination rate at multiple focal points in the region. Reassuringly, the posterior mean recombination rates estimated at the different focal points are consistent with previously published estimates. The tree-based association approach finds multiple sub-regions where the case haplotypes are more genetically related than the control haplotypes, suggesting that there may be one or multiple disease-predisposing loci. PMID:24348515

  19. Red lesion detection using background estimation and lesions characteristics in diabetic retinal image

    NASA Astrophysics Data System (ADS)

    Zhang, Dongbo; Peng, Yinghui; Yi, Yao; Shang, Xingyu

    2013-10-01

    Detection of red lesions [hemorrhages (HRs) and microaneurysms (MAs)] is crucial for the diagnosis of early diabetic retinopathy. A method based on background estimation and adapted to the specific characteristics of HRs and MAs is proposed. Candidate red lesions are located by background estimation and a Mahalanobis distance measure, and then several adaptive postprocessing techniques, including vessel detection, nonvessel exclusion based on shape analysis, and noise-point exclusion by a double-ring filter (used only for MA detection), are applied to remove nonlesion pixels. The method is evaluated on our collected image dataset, and experimental results show that it is better than or comparable to previous approaches. It effectively reduces the false-positive and false-negative results that arise from incomplete and inaccurate vessel structure.
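
    A minimal sketch of the background-estimation step, assuming the green channel of the fundus image and a median-filter background (the paper's full pipeline with vessel removal and the double-ring filter is not reproduced):

        import numpy as np
        from scipy.ndimage import median_filter

        def red_lesion_candidates(green, bg_size=25, d_thresh=3.0):
            # Estimate the slowly varying background with a large median
            # filter; red lesions appear darker than the background, so
            # flag pixels whose standardized (one-channel Mahalanobis)
            # distance below the background exceeds the threshold.
            g = green.astype(float)
            residual = g - median_filter(g, size=bg_size)
            d = (residual - residual.mean()) / residual.std()
            return d < -d_thresh            # boolean candidate mask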

  20. Estimation of the size of drug-like chemical space based on GDB-17 data.

    PubMed

    Polishchuk, P G; Madzhidov, T I; Varnek, A

    2013-08-01

    The goal of this paper is to estimate the number of realistic drug-like molecules which could ever be synthesized. Unlike previous studies based on exhaustive enumeration of molecular graphs or on combinatorial enumeration of preselected fragments, we used the results of constrained graph enumeration by Reymond to establish a correlation between the number of generated structures (M) and the number of heavy atoms (N): logM = 0.584 × N × logN + 0.356. The number of atoms limiting the drug-like chemical space of molecules which follow Lipinski's rules (N = 36) was obtained from analysis of the PubChem database. This results in M ≈ 10³³, which is in between the numbers estimated by Ertl (10²³) and by Bohacek (10⁶⁰).
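
    The fitted correlation is easy to evaluate; at the Lipinski limit of N = 36 heavy atoms it reproduces the quoted M ≈ 10³³:

        import math

        def log10_structures(n_heavy):
            # log10(M) = 0.584 * N * log10(N) + 0.356 (fit from the abstract)
            return 0.584 * n_heavy * math.log10(n_heavy) + 0.356

        print(log10_structures(36))   # ~33.1, i.e. M ~ 1e33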

  1. Model-based estimation with boundary side information or boundary regularization [cardiac emission CT].

    PubMed

    Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.

  2. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.

    PubMed

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated on simulated data as well as in an empirical study of data arising in financial econometrics.
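
    A minimal sketch of such a plug-in procedure in Python, estimating the squared Frobenius norm of the off-diagonal correlations after hard thresholding (the universal threshold below is a common default; the paper's exact rates and corrections are not reproduced):

        import numpy as np

        def thresholded_frobenius_sq(X, tau=None):
            # X: (n, p) data matrix. Threshold the sample correlation at
            # tau ~ sqrt(2*log(p)/n), then sum squared off-diagonal entries.
            n, p = X.shape
            R = np.corrcoef(X, rowvar=False)
            if tau is None:
                tau = np.sqrt(2.0 * np.log(p) / n)
            R_t = np.where(np.abs(R) >= tau, R, 0.0)
            np.fill_diagonal(R_t, 0.0)
            return (R_t ** 2).sum()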

  3. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES

    PubMed Central

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    2016-01-01

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated on simulated data as well as in an empirical study of data arising in financial econometrics. PMID:26806986

  4. CDGPS-Based Relative Navigation for Multiple Spacecraft

    NASA Technical Reports Server (NTRS)

    Mitchell, Megan Leigh

    2004-01-01

    This thesis investigates the use of Carrier-phase Differential GPS (CDGPS) in relative navigation filters for formation flying spacecraft. This work analyzes the relationship between the Extended Kalman Filter (EKF) design parameters and the resulting estimation accuracies, and in particular, the effect of the process and measurement noises on the semimajor axis error. This analysis clearly demonstrates that CDGPS-based relative navigation Kalman filters yield good estimation performance without satisfying the strong correlation property that previous work had associated with "good" navigation filters. Several examples are presented to show that the Kalman filter can be forced to create solutions with stronger correlations, but these always result in larger semimajor axis errors. These linear and nonlinear simulations also demonstrated the crucial role of the process noise in determining the semimajor axis knowledge. More sophisticated nonlinear models were included to reduce the propagation error in the estimator, but for long time steps and large separations, the EKF, which only uses a linearized covariance propagation, yielded very poor performance. In contrast, the CDGPS-based Unscented Kalman relative navigation Filter (UKF) handled the dynamic and measurement nonlinearities much better and yielded far superior performance than the EKF. The UKF produced good estimates for scenarios with long baselines and time steps for which the EKF would diverge rapidly. A hardware-in-the-loop testbed that is compatible with the Spirent Simulator at NASA GSFC was developed to provide a very flexible and robust capability for demonstrating CDGPS technologies in closed-loop. This extended previous work to implement the decentralized relative navigation algorithms in real time.

  5. The assessment of the performance of covariance-based structural equation modeling and partial least square path modeling

    NASA Astrophysics Data System (ADS)

    Aimran, Ahmad Nazim; Ahmad, Sabri; Afthanorhan, Asyraf; Awang, Zainudin

    2017-05-01

    Structural equation modeling (SEM) is a second-generation statistical analysis technique developed for analyzing the inter-relationships among multiple variables in a model. Previous studies have shown at least an implicit agreement about the factors that should drive the choice between covariance-based structural equation modeling (CB-SEM) and partial least squares path modeling (PLS-PM). PLS-PM appears to have been the method preferred by previous scholars because of its less stringent assumptions and the wish to avoid the perceived difficulties in CB-SEM. Alongside this issue there has been increasing debate among researchers on the use of CB-SEM versus PLS-PM. The present study assesses the performance of CB-SEM and PLS-PM in a confirmatory setting, with findings that contribute to the body of knowledge on SEM. Maximum likelihood (ML) was chosen as the estimator for CB-SEM and was expected to be more powerful than PLS-PM. Based on a balanced experimental design, multivariate normal data with specified population parameters and sample sizes were generated using Pro-Active Monte Carlo simulation, and the data were analyzed using AMOS for CB-SEM and SmartPLS for PLS-PM. The Comparative Bias Index (CBI), construct relationships, average variance extracted (AVE), composite reliability (CR), and the Fornell-Larcker criterion were used to study the consequences of each estimator. The findings conclude that CB-SEM performed notably better than PLS-PM for large sample sizes (100 and above), particularly in terms of estimation accuracy and consistency.

  6. Grass competition may benefit high density peach orchards

    USDA-ARS?s Scientific Manuscript database

    Previous research demonstrated that grass competition dwarfed and reduced the yield of individual peach trees [Prunus persica (L.) Batsch] grown in narrow vegetation free areas (VFA). In this report, the area-based yield of two peach cultivars, 'Redskin' and 'Jersey Dawn' on 'Lovell', was estimated...

  7. Clarifying springtime temperature reconstructions of the medieval period by gap-filling the cherry blossom phenological data series at Kyoto, Japan

    NASA Astrophysics Data System (ADS)

    Aono, Yasuyuki; Saito, Shizuka

    2010-03-01

    We investigated documents and diaries from the ninth to the fourteenth centuries to supplement the phenological data series of the flowering of Japanese cherry ( Prunus jamasakura) in Kyoto, Japan, to improve and fill gaps in temperature estimates based on previously reported phenological data. We then reconstructed a nearly continuous series of March mean temperatures based on 224 years of cherry flowering data, including 51 years of previously unused data, to clarify springtime climate changes. We also attempted to estimate cherry full-flowering dates from phenological records of other deciduous species, adding further data for 6 years in the tenth and eleventh centuries by using the flowering phenology of Japanese wisteria ( Wisteria floribunda). The reconstructed tenth century March mean temperatures were around 7°C, indicating warmer conditions than at present. Temperatures then fell until the 1180s, recovered gradually until the 1310s, and then declined again in the mid-fourteenth century.

  8. Spatially-controlled illumination with rescan confocal microscopy enhances image quality, resolution and reduces photodamage

    NASA Astrophysics Data System (ADS)

    Krishnaswami, Venkataraman; De Luca, Giulia M. R.; Breedijk, Ronald M. P.; Van Noorden, Cornelis J. F.; Manders, Erik M. M.; Hoebe, Ron A.

    2017-02-01

    Fluorescence microscopy is an important tool in biomedical imaging, with an inherent trade-off between image quality and photodamage. Recently, we introduced rescan confocal microscopy (RCM), which improves the lateral resolution of a confocal microscope down to 170 nm. Previously, we demonstrated with controlled-light-exposure microscopy that spatial control of illumination reduces photodamage without compromising image quality. Here, we show that combining these two techniques yields high-resolution imaging with reduced photodamage and no loss of image quality. Spatially-controlled illumination was implemented in RCM using a line scanning-based approach: the illumination of every line is controlled during imaging by a prediction algorithm that estimates the spatial profile of the fluorescent specimen from previously acquired line images. As a proof of principle, we show images of N1E-115 neuroblastoma cells obtained with this new setup at reduced illumination dose and improved resolution.
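
    As a rough illustration of the line-based prediction idea only (the paper's actual algorithm is more involved), the sketch below derives a per-pixel dose for the next scan line from the profile estimated on the previous line; the function name, dose rule, and floor parameter are all hypothetical.

    ```python
    import numpy as np

    def next_line_illumination(prev_line, full_dose, floor=0.1):
        """Hypothetical dose rule: pixels predicted (from the previous
        line) to be bright receive less light, dark pixels more, with a
        floor so no pixel goes completely unmeasured."""
        est = prev_line / max(prev_line.max(), 1e-12)   # profile in [0, 1]
        return full_dose * np.clip(1.0 - est, floor, 1.0)

    line = np.array([0.10, 0.90, 0.40, 0.05])           # previous-line signal
    print(next_line_illumination(line, full_dose=1.0))
    ```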

  9. Clarifying springtime temperature reconstructions of the medieval period by gap-filling the cherry blossom phenological data series at Kyoto, Japan.

    PubMed

    Aono, Yasuyuki; Saito, Shizuka

    2010-03-01

    We investigated documents and diaries from the ninth to the fourteenth centuries to supplement the phenological data series of the flowering of Japanese cherry (Prunus jamasakura) in Kyoto, Japan, to improve and fill gaps in temperature estimates based on previously reported phenological data. We then reconstructed a nearly continuous series of March mean temperatures based on 224 years of cherry flowering data, including 51 years of previously unused data, to clarify springtime climate changes. We also attempted to estimate cherry full-flowering dates from phenological records of other deciduous species, adding further data for 6 years in the tenth and eleventh centuries by using the flowering phenology of Japanese wisteria (Wisteria floribunda). The reconstructed tenth century March mean temperatures were around 7 degrees C, indicating warmer conditions than at present. Temperatures then fell until the 1180s, recovered gradually until the 1310s, and then declined again in the mid-fourteenth century.

  10. Nationwide incidence of motor neuron disease using the French health insurance information system database.

    PubMed

    Kab, Sofiane; Moisan, Frédéric; Preux, Pierre-Marie; Marin, Benoît; Elbaz, Alexis

    2017-08-01

    There are no estimates of the nationwide incidence of motor neuron disease (MND) in France. We used the French health insurance information system to identify incident MND cases (2012-2014), and compared incidence figures to those from three external sources. We identified incident MND cases (2012-2014) based on three data sources (riluzole claims, hospitalisation records, long-term chronic disease benefits), and computed MND incidence by age, gender, and geographic region. We used French mortality statistics, Limousin ALS registry data, and previous European studies based on administrative databases to perform external comparisons. We identified 6553 MND incident cases. After standardisation to the United States 2010 population, the age/gender-standardised incidence was 2.72/100,000 person-years (males, 3.37; females, 2.17; male:female ratio = 1.53, 95% CI 1.46-1.61). There was no major spatial difference in MND distribution. Our data were in agreement with the French death database (standardised mortality ratio = 1.01, 95% CI = 0.96-1.06) and Limousin ALS registry (standardised incidence ratio = 0.92, 95% CI = 0.72-1.15). Incidence estimates were in the same range as those from previous studies. We report French nationwide incidence estimates of MND. Administrative databases including hospital discharge data and riluzole claims offer an interesting approach to identify large population-based samples of patients with MND for epidemiologic studies and surveillance.
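
    The age/gender standardisation step reported above is a direct standardisation: stratum-specific rates weighted by a standard population's stratum shares. A minimal sketch follows; the strata and all numbers are made up for illustration.

    ```python
    import numpy as np

    def standardized_rate(cases, person_years, std_pop):
        """Direct standardization: weight stratum-specific rates by the
        stratum shares of a standard population (e.g., US 2010)."""
        rates = np.asarray(cases) / np.asarray(person_years)
        w = np.asarray(std_pop) / np.sum(std_pop)
        return np.sum(w * rates)

    # Illustrative strata: observed cases, person-years, standard weights.
    cases = [12, 85, 240]
    py = [2.1e6, 1.5e6, 0.9e6]
    us2010 = [0.40, 0.35, 0.25]
    print(standardized_rate(cases, py, us2010) * 1e5, "per 100,000 person-years")
    ```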

  11. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans

    PubMed Central

    Villa, Chiara; Brůžek, Jaroslav

    2017-01-01

    Background: Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data were derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body adipose tissue (AT) and lean tissue (LT) from corresponding tissue areas measured in selected CT scan slices. Methods: We present a new semi-automatic approach to defining the density cutoff between AT and LT in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). Results and Discussion: The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results. PMID:28533960

  12. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans.

    PubMed

    Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara; Brůžek, Jaroslav

    2017-01-01

    Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data were derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body adipose tissue (AT) and lean tissue (LT) from corresponding tissue areas measured in selected CT scan slices. We present a new semi-automatic approach to defining the density cutoff between AT and LT in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results.
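
    A minimal sketch of the single-slice OLS idea described in the two records above, run on synthetic stand-in data (the coefficients, noise level, and resulting errors are illustrative, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training data: AT area (cm^2) in the L4-L5 slice vs.
    # whole-body AT volume (L), standing in for the paper's 41 CT scans.
    slice_area = rng.uniform(150, 600, 41)
    at_volume = 0.05 * slice_area + rng.normal(0, 1.8, 41)

    # Fit volume = a * area + b by ordinary least squares.
    X = np.column_stack([slice_area, np.ones_like(slice_area)])
    coef, *_ = np.linalg.lstsq(X, at_volume, rcond=None)
    pred = X @ coef

    pe = np.abs(pred - at_volume)
    print(f"|PE| = {pe.mean():.2f} L, %PE = {100 * (pe / at_volume).mean():.2f}")
    ```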

  13. Agricultural mapping using Support Vector Machine-Based Endmember Extraction (SVM-BEE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, Richard K; Filippi, Anthony M; Bhaduri, Budhendra L

    Extracting endmembers from remotely sensed images of vegetated areas can present difficulties. In this research, we applied a recently developed endmember-extraction algorithm based on Support Vector Machines (SVMs) to the problem of semi-autonomous estimation of vegetation endmembers from a hyperspectral image. This algorithm, referred to as Support Vector Machine-Based Endmember Extraction (SVM-BEE), accurately and rapidly yields a computed representation of hyperspectral data that can accommodate multiple distributions. The number of distributions is identified without prior knowledge, based upon this representation. Prior work established that SVM-BEE is robustly noise-tolerant and can semi-automatically and effectively estimate endmembers; synthetic data and a geologic scene were previously analyzed. Here we compared the efficacies of the SVM-BEE and N-FINDR algorithms in extracting endmembers from a predominantly agricultural scene. SVM-BEE was able to estimate vegetation and other endmembers for all classes in the image, which N-FINDR failed to do. Classifications based on SVM-BEE endmembers were markedly more accurate compared with those based on N-FINDR endmembers.

  14. Combining Satellite Microwave Radiometer and Radar Observations to Estimate Atmospheric Latent Heating Profiles

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Olson, William S.; Shie, Chung-Lin; L'Ecuyer, Tristan S.; Tao, Wei-Kuo

    2009-01-01

    In this study, satellite passive microwave sensor observations from the TRMM Microwave Imager (TMI) are utilized to make estimates of latent + eddy sensible heating rates (Q1-QR) in regions of precipitation. The TMI heating algorithm (TRAIN) is calibrated, or "trained," using relatively accurate estimates of heating based upon spaceborne Precipitation Radar (PR) observations collocated with the TMI observations over a one-month period. The heating estimation technique is based upon a previously described Bayesian methodology, but with improvements in supporting cloud-resolving model simulations, an adjustment of precipitation echo tops to compensate for model biases, and a separate scaling of convective and stratiform heating components that leads to an approximate balance between estimated vertically-integrated condensation and surface precipitation. Estimates of Q1-QR from TMI compare favorably with the PR training estimates and show only modest sensitivity to the cloud-resolving model simulations of heating used to construct the training data. Moreover, the net condensation in the corresponding annual mean satellite latent heating profile is within a few percent of the annual mean surface precipitation rate over the tropical and subtropical oceans where the algorithm is applied. Comparisons of Q1 produced by combining TMI Q1-QR with independently derived estimates of QR show reasonable agreement with rawinsonde-based analyses of Q1 from two field campaigns, although the satellite estimates exhibit heating profile structure with sharper and more intense heating peaks than the rawinsonde estimates.

  15. Methods for determining time of death.

    PubMed

    Madea, Burkhard

    2016-12-01

    Medicolegal death time estimation must estimate the time since death reliably. Reliability can only be provided empirically by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as 1H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
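
    The two-exponential cooling model behind the nomogram method can be sketched numerically. The constants below are the widely cited Henssge values for ambient temperatures up to about 23 °C under standard conditions (an assumption on my part; the review does not spell them out), and the scenario numbers are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def henssge_q(t_hours, body_mass_kg):
        """Normalized rectal cooling Q(t) from the two-exponential model
        (Henssge constants for ambient <= 23 C, standard conditions)."""
        B = -1.2815 * body_mass_kg ** (-0.625) + 0.0284    # per hour
        return 1.25 * np.exp(B * t_hours) - 0.25 * np.exp(5.0 * B * t_hours)

    def estimate_pmi(t_rectal, t_ambient, body_mass_kg, t0=37.2):
        """Invert Q(t) = (Tr - Ta) / (T0 - Ta) numerically for the
        postmortem interval (back-calculation along the cooling curve)."""
        q_obs = (t_rectal - t_ambient) / (t0 - t_ambient)
        return brentq(lambda t: henssge_q(t, body_mass_kg) - q_obs, 0.01, 120.0)

    # Illustrative case: 30.0 C rectal, 18.0 C ambient, 75 kg -> ~12 h.
    print(f"PMI ~ {estimate_pmi(30.0, 18.0, 75.0):.1f} h")
    ```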

  16. Estimation of Road Friction Coefficient in Different Road Conditions Based on Vehicle Braking Dynamics

    NASA Astrophysics Data System (ADS)

    Zhao, You-Qun; Li, Hai-Qing; Lin, Fen; Wang, Jian; Ji, Xue-Wu

    2017-07-01

    Accurate estimation of the road friction coefficient has become increasingly important for active safety control systems. Most previous studies on road friction estimation have used only vehicle longitudinal or lateral dynamics and often ignored load transfer, which tends to produce inaccurate estimates of the actual road friction coefficient. A novel method considering load transfer between the front and rear axles is proposed to estimate the road friction coefficient based on a braking dynamics model of a two-wheeled vehicle. Sliding mode control is used to build the ideal braking torque controller, whose control target is to make the actual wheel slip ratios of the front and rear wheels track the ideal wheel slip ratio. In order to eliminate the chattering problem of the sliding mode controller, an integral switching surface is used to design the sliding surface. A second-order linear extended state observer is designed to observe the road friction coefficient based on the wheel speeds and braking torques of the front and rear wheels. The proposed road friction coefficient estimation scheme is evaluated by simulation in ADAMS/Car. The results show that the estimated values agree well with the actual values in different road conditions, and the observer can estimate the road friction coefficient accurately in real time while rejecting external disturbance. The proposed research thus provides a novel, robust, and more accurate method to estimate the road friction coefficient.
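
    A minimal sketch of a second-order linear extended state observer on single-wheel braking dynamics. The wheel model, gains, and parameters below are illustrative assumptions, not the paper's exact design: the friction term in J*domega/dt = r*mu*Fz - Tb is treated as an extended state and reconstructed from measured wheel speed and braking torque.

    ```python
    import numpy as np

    def eso_friction(t, omega, brake_torque, J=1.2, r=0.3, Fz=4000.0,
                     l1=60.0, l2=900.0):
        """Linear ESO with observer poles at -30 (double): estimate the
        extended state f = r*mu*Fz/J from omega and Tb, then back out mu."""
        dt = t[1] - t[0]
        w_hat, f_hat = omega[0], 0.0
        mu_hat = np.zeros_like(omega)
        for k in range(len(t) - 1):
            e = omega[k] - w_hat                        # innovation
            w_hat += dt * (f_hat - brake_torque[k] / J + l1 * e)
            f_hat += dt * (l2 * e)
            mu_hat[k + 1] = f_hat * J / (r * Fz)
        return mu_hat

    # Synthetic check: constant mu = 0.7 on a uniformly decelerating wheel.
    t = np.linspace(0.0, 1.5, 1501)
    J, r, Fz, mu = 1.2, 0.3, 4000.0, 0.7
    Tb = np.full_like(t, 900.0)
    omega = 80.0 + (r * mu * Fz - Tb[0]) / J * t        # exact solution
    print(eso_friction(t, omega, Tb)[-1])               # -> approaches 0.7
    ```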

  17. On the Methods for Estimating the Corneoscleral Limbus.

    PubMed

    Jesus, Danilo A; Iskander, D Robert

    2017-08-01

    The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized with high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods led to statistically significant differences (nonparametric analysis of variance (ANOVA) test, p < 0.05). Precise topographical limbus demarcation is possible either from frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only-based techniques. The experimental findings show that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.

  18. Estimating Isometric Tension of Finger Muscle Using Needle EMG Signals and the Twitch Contraction Model

    NASA Astrophysics Data System (ADS)

    Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko

    We address a method for estimating the isometric muscle tension of fingers, as fundamental research toward a neural signal-based prosthesis of fingers. We utilize needle electromyogram (EMG) signals, which carry approximately the same information as peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array detected from the needle EMG signals; it estimates the probability density of spike-invoking times in the muscle. In this convolution, we hypothesize that each motor unit in a muscle fires independently according to the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, i.e., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed good correlation between the estimated and actual muscle tension, with correlation coefficients >0.9 in 59% and >0.8 in 89% of all trials.
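
    The two-convolution pipeline is compact in code. The sketch below follows the structure described above but uses a synthetic spike array and a generic t*exp(-t/tau) twitch shape; both, along with the kernel widths, are assumptions for illustration.

    ```python
    import numpy as np

    fs = 2000.0                                 # sampling rate, Hz
    t = np.arange(0, 2.0, 1 / fs)

    # Spike array detected from needle EMG (made-up firing times).
    spikes = np.zeros_like(t)
    spikes[np.random.default_rng(1).integers(0, t.size, 60)] = 1.0

    # First convolution: normal distribution -> probability density of
    # spike-invoking times across independently firing motor units.
    sigma = 0.010 * fs                          # 10 ms spread, in samples
    k = np.arange(-5 * sigma, 5 * sigma + 1)
    gauss = np.exp(-0.5 * (k / sigma) ** 2)
    gauss /= gauss.sum()
    density = np.convolve(spikes, gauss, mode="same")

    # Second convolution: isometric twitch as the motor unit's impulse
    # response, here a generic shape peaking at tau (an assumption).
    tau = 0.040                                 # 40 ms contraction time
    tw = np.arange(0, 0.3, 1 / fs)
    twitch = (tw / tau) * np.exp(1 - tw / tau)
    tension = np.convolve(density, twitch, mode="full")[: t.size]
    ```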

  19. Comparison between field mill and corona point instrumentation at Kennedy Space Center - Use of these data with a model to determine cloudbase electric fields

    NASA Technical Reports Server (NTRS)

    Markson, R.; Anderson, B.; Govaert, J.; Fairall, C. W.

    1989-01-01

    A novel coronal current-determining instrument is being used at NASA-KSC which overcomes previous difficulties with wind sensitivity and a voltage-threshold 'deadband'. The mounting of the corona needle at an elevated location reduces coronal and electrode layer space-charge influences on electric fields, rendering the measurement of space charge density possible. In conjunction with a space-charge compensation model, these features allow a more realistic estimation of cloud base electric fields and the potential for lightning strike than has previously been possible with ground-based sensors.

  20. Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Wahi, A. K.

    2003-12-01

    Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape, and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimates of both the magnitude and the orientation increases linearly with the measurement error, in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base-to-height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base-to-height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone, as defined by triangle area, is not a valid discriminator of whether or not the estimator provides accurate estimates of the gradient magnitude and orientation. This research was funded by WIPP programs administered by the U.S. Department of Energy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
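
    A three-point estimator reduces to fitting a plane h = a*x + b*y + c through the three heads; the gradient is -(a, b). A minimal sketch with made-up well coordinates and heads:

    ```python
    import numpy as np

    def three_point_gradient(xy, heads):
        """Solve the plane through three (x, y, head) points and return
        the hydraulic gradient magnitude and azimuth (degrees from north)."""
        A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(3)])
        a, b, _ = np.linalg.solve(A, heads)       # fails if wells collinear
        grad = -np.array([a, b])                  # flow down-gradient
        magnitude = np.hypot(*grad)
        azimuth = np.degrees(np.arctan2(grad[0], grad[1])) % 360
        return magnitude, azimuth

    xy = np.array([[0.0, 0.0], [500.0, 0.0], [200.0, 400.0]])   # wells, m
    heads = np.array([10.00, 9.62, 9.85])                       # heads, m
    print(three_point_gradient(xy, heads))
    ```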

  1. Constraining global air-sea gas exchange for CO2 with recent bomb 14C measurements

    NASA Astrophysics Data System (ADS)

    Sweeney, Colm; Gloor, Emanuel; Jacobson, Andrew R.; Key, Robert M.; McKinley, Galen; Sarmiento, Jorge L.; Wanninkhof, Rik

    2007-06-01

    The 14CO2 released into the stratosphere during bomb testing in the early 1960s provides a global constraint on air-sea gas exchange of soluble atmospheric gases like CO2. Using the most complete database of dissolved inorganic radiocarbon, DI14C, available to date and a suite of ocean general circulation models in an inverse mode, we recalculate the ocean inventory of bomb-produced DI14C in the global ocean and confirm that there is a 25% decrease from previous estimates using older DI14C data sets. Additionally, we find a 33% lower globally averaged gas transfer velocity for CO2 compared to previous estimates (Wanninkhof, 1992) using the NCEP/NCAR Reanalysis 1 1954-2000, where the global mean winds are 6.9 m s-1. Unlike some earlier ocean radiocarbon studies, the implied gas transfer velocity finally closes the gap between small-scale deliberate tracer studies and global-scale estimates. Additionally, the total inventory of bomb-produced radiocarbon in the ocean is now in agreement with global budgets based on radiocarbon measurements made in the stratosphere and troposphere. Using the implied relationship between wind speed and gas transfer velocity, k = 0.27 u^2 (Sc/660)^-0.5, and the standard partial pressure difference climatology of CO2, we obtain a net air-sea flux estimate of 1.3 ± 0.5 Pg C yr-1 for 1995. After accounting for the carbon transferred from rivers to the deep ocean, our estimate of oceanic uptake (1.8 ± 0.5 Pg C yr-1) compares well with estimates based on ocean inventories, ocean transport inversions using ocean concentration data, and model simulations.
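
    The quoted relationship is a one-liner to apply. The sketch below evaluates it at the global mean wind speed given above; Sc = 660 corresponds to CO2 in 20 °C seawater, and the quadratic wind-speed form (with k in cm/h for u in m/s) is my reading of the garbled formula in the record.

    ```python
    def gas_transfer_velocity(u_ms, schmidt):
        """k = 0.27 * u^2 * (Sc/660)^-0.5, k in cm/h, u in m/s."""
        return 0.27 * u_ms ** 2 * (schmidt / 660.0) ** -0.5

    # Global-mean winds of 6.9 m/s at Sc = 660 -> about 12.9 cm/h.
    print(gas_transfer_velocity(6.9, 660.0), "cm/h")
    ```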

  2. Optimal estimation for global ground-level fine particulate matter concentrations

    NASA Astrophysics Data System (ADS)

    Donkelaar, Aaron; Martin, Randall V.; Spurr, Robert J. D.; Drury, Easan; Remer, Lorraine A.; Levy, Robert C.; Wang, Jun

    2013-06-01

    We develop an optimal estimation (OE) algorithm based on top-of-atmosphere reflectances observed by the MODIS satellite instrument to retrieve near-surface fine particulate matter (PM2.5). The GEOS-Chem chemical transport model is used to provide prior information for the Aerosol Optical Depth (AOD) retrieval and to relate total column AOD to PM2.5. We adjust the shape of the GEOS-Chem relative vertical extinction profiles by comparison with lidar retrievals from the CALIOP satellite instrument. Surface reflectance relationships used in the OE algorithm are indexed by land type. Error quantities needed for this OE algorithm are inferred by comparison with AOD observations taken by a worldwide network of sun photometers (AERONET) and extended globally based upon aerosol speciation and cross correlation for simulated values, and upon land type for observational values. Significant agreement in PM2.5 is found over North America for 2005 (slope = 0.89; r = 0.82; 1-σ error = 1 µg/m3 + 27%), with improved coverage and correlation relative to previous work for the same region and time period, although certain subregions, such as the San Joaquin Valley of California, are better represented by previous estimates. Independently derived error estimates of the OE PM2.5 values over North America (±(2.5 µg/m3 + 31%)) and Europe (±(3.5 µg/m3 + 30%)) are corroborated by comparison with in situ observations, although the global error estimates (±(3.0 µg/m3 + 35%)) may be underestimated. Global population-weighted PM2.5 at 50% relative humidity is estimated as 27.8 µg/m3 at 0.1° × 0.1° resolution.

  3. Probabilistic reanalysis of twentieth-century sea-level rise.

    PubMed

    Hay, Carling C; Morrow, Eric; Kopp, Robert E; Mitrovica, Jerry X

    2015-01-22

    Estimating and accounting for twentieth-century global mean sea level (GMSL) rise is critical to characterizing current and future human-induced sea-level change. Several previous analyses of tide gauge records--employing different methods to accommodate the spatial sparsity and temporal incompleteness of the data and to constrain the geometry of long-term sea-level change--have concluded that GMSL rose over the twentieth century at a mean rate of 1.6 to 1.9 millimetres per year. Efforts to account for this rate by summing estimates of individual contributions from glacier and ice-sheet mass loss, ocean thermal expansion, and changes in land water storage fall significantly short in the period before 1990. The failure to close the budget of GMSL during this period has led to suggestions that several contributions may have been systematically underestimated. However, the extent to which the limitations of tide gauge analyses have affected estimates of the GMSL rate of change is unclear. Here we revisit estimates of twentieth-century GMSL rise using probabilistic techniques and find a rate of GMSL rise from 1901 to 1990 of 1.2 ± 0.2 millimetres per year (90% confidence interval). Based on individual contributions tabulated in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, this estimate closes the twentieth-century sea-level budget. Our analysis, which combines tide gauge records with physics-based and model-derived geometries of the various contributing signals, also indicates that GMSL rose at a rate of 3.0 ± 0.7 millimetres per year between 1993 and 2010, consistent with prior estimates from tide gauge records. The increase in rate relative to the 1901-90 trend is accordingly larger than previously thought; this revision may affect some projections of future sea-level rise.

  4. Improved argument-FFT frequency offset estimation for QPSK coherent optical Systems

    NASA Astrophysics Data System (ADS)

    Han, Jilong; Li, Wei; Yuan, Zhilin; Li, Haitao; Huang, Liyan; Hu, Qianggao

    2016-02-01

    A frequency offset estimation (FOE) algorithm based on the fast Fourier transform (FFT) of the signal's argument is investigated, which does not require removing the modulated data phase. In this paper, we analyze a flaw of the argument-FFT algorithm and propose a combined FOE algorithm, in which the absolute value of the frequency offset (FO) is accurately calculated by the argument-FFT algorithm with a relatively large number of samples, and the sign of the FO is determined by an FFT-based interpolated discrete Fourier transform (DFT) algorithm with a relatively small number of samples. Compared with previous algorithms based on the argument-FFT, the proposed one has low complexity and can still work effectively with relatively few samples.
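
    For orientation, here is the classic fourth-power FFT frequency offset estimator for QPSK, a simpler relative of the argument-FFT method discussed above (it removes the modulation by raising to the fourth power, which is precisely the step the argument-FFT approach avoids). Parameters and signal are illustrative.

    ```python
    import numpy as np

    def foe_fourth_power(samples, fs):
        """Classic FFT-based FOE for QPSK: s^4 strips the modulation,
        so its spectrum peaks at 4 * FO."""
        N = samples.size
        spec = np.fft.fftshift(np.abs(np.fft.fft(samples ** 4)))
        freqs = np.fft.fftshift(np.fft.fftfreq(N, d=1 / fs))
        return freqs[np.argmax(spec)] / 4.0

    # Check on synthetic QPSK with a 120 MHz offset at 10 GSa/s.
    rng = np.random.default_rng(2)
    fs, fo, N = 10e9, 120e6, 4096
    sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))
    rx = sym * np.exp(2j * np.pi * fo * np.arange(N) / fs)
    print(foe_fourth_power(rx, fs) / 1e6, "MHz")       # -> ~120
    ```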

  5. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems: ADAPTIVE GAUSSIAN PROCESS-BASED INVERSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao

    Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or implementing MCMC in a two-stage manner. Since the two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the information of measurement is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimation of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing the estimation accuracy, the new approach achieves about a 200-fold speed-up compared to our previous work using two-stage MCMC.
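
    The key trick, adding the surrogate's own predictive variance to the observation-error variance in the likelihood, can be sketched in a few lines. The forward model, kernel, and noise level below are illustrative, not the paper's setup.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_model(theta):                  # stand-in forward model
        return np.sin(3 * theta) + 0.5 * theta

    train_x = np.linspace(0, 2, 8)[:, None]      # a few expensive runs
    train_y = expensive_model(train_x[:, 0])
    gp = GaussianProcessRegressor(kernel=RBF(0.5)).fit(train_x, train_y)

    def log_likelihood(theta, y_obs, obs_var=0.05 ** 2):
        """Gaussian likelihood whose variance is inflated by the GP's
        predictive variance, so the posterior stays honest where the
        surrogate is poorly trained."""
        mu, sd = gp.predict(np.atleast_2d(theta), return_std=True)
        var = obs_var + sd[0] ** 2
        return -0.5 * ((y_obs - mu[0]) ** 2 / var + np.log(2 * np.pi * var))

    print(log_likelihood(0.7, y_obs=expensive_model(0.7)))
    ```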

  6. Spatially Common Sparsity Based Adaptive Channel Estimation and Feedback for FDD Massive MIMO

    NASA Astrophysics Data System (ADS)

    Gao, Zhen; Dai, Linglong; Wang, Zhaocheng; Chen, Sheng

    2015-12-01

    This paper proposes a spatially common sparsity based adaptive channel estimation and feedback scheme for frequency division duplex based massive multi-input multi-output (MIMO) systems, which adapts the training overhead and pilot design to reliably estimate and feed back the downlink channel state information (CSI) with significantly reduced overhead. Specifically, a non-orthogonal downlink pilot design is first proposed, which is very different from standard orthogonal pilots. By exploiting the spatially common sparsity of massive MIMO channels, a compressive sensing (CS) based adaptive CSI acquisition scheme is proposed, where the consumed time-slot overhead adaptively depends only on the sparsity level of the channels. Additionally, a distributed sparsity adaptive matching pursuit algorithm is proposed to jointly estimate the channels of multiple subcarriers. Furthermore, by exploiting the temporal channel correlation, a closed-loop channel tracking scheme is provided, which adaptively designs the non-orthogonal pilot according to the previous channel estimate to achieve enhanced CSI acquisition. Finally, we generalize the results of the multiple-measurement-vectors case in CS and derive the Cramer-Rao lower bound of the proposed scheme, which guides the design of the non-orthogonal pilot signals for improved performance. Simulation results demonstrate that the proposed scheme outperforms its counterparts and is capable of approaching the performance bound.

  7. Incorporating New Technologies Into Toxicity Testing and Risk Assessment: Moving From 21st Century Vision to a Data-Driven Framework

    PubMed Central

    Thomas, Russell S.

    2013-01-01

    Based on existing data and previous work, a series of studies is proposed as a basis toward a pragmatic early step in transforming toxicity testing. These studies were assembled into a data-driven framework that invokes successive tiers of testing with margin of exposure (MOE) as the primary metric. The first tier of the framework integrates data from high-throughput in vitro assays, in vitro-to-in vivo extrapolation (IVIVE) pharmacokinetic modeling, and exposure modeling. The in vitro assays are used to separate chemicals based on their relative selectivity in interacting with biological targets and identify the concentration at which these interactions occur. The IVIVE modeling converts in vitro concentrations into external dose for calculation of the point of departure (POD) and comparisons to human exposure estimates to yield a MOE. The second tier involves short-term in vivo studies, expanded pharmacokinetic evaluations, and refined human exposure estimates. The results from the second tier studies provide more accurate estimates of the POD and the MOE. The third tier contains the traditional animal studies currently used to assess chemical safety. In each tier, the POD for selective chemicals is based primarily on endpoints associated with a proposed mode of action, whereas the POD for nonselective chemicals is based on potential biological perturbation. Based on the MOE, a significant percentage of chemicals evaluated in the first 2 tiers could be eliminated from further testing. The framework provides a risk-based and animal-sparing approach to evaluate chemical safety, drawing broadly from previous experience but incorporating technological advances to increase efficiency. PMID:23958734

  8. Downlink Training Techniques for FDD Massive MIMO Systems: Open-Loop and Closed-Loop Training With Memory

    NASA Astrophysics Data System (ADS)

    Choi, Junil; Love, David J.; Bidigare, Patrick

    2014-10-01

    The concept of deploying a large number of antennas at the base station, often called massive multiple-input multiple-output (MIMO), has drawn considerable interest because of its potential ability to revolutionize current wireless communication systems. Most literature on massive MIMO systems assumes time division duplexing (TDD), although frequency division duplexing (FDD) dominates current cellular systems. Due to the large number of transmit antennas at the base station, currently standardized approaches would require a large percentage of the precious downlink and uplink resources in FDD massive MIMO be used for training signal transmissions and channel state information (CSI) feedback. To reduce the overhead of the downlink training phase, we propose practical open-loop and closed-loop training frameworks in this paper. We assume the base station and the user share a common set of training signals in advance. In open-loop training, the base station transmits training signals in a round-robin manner, and the user successively estimates the current channel using long-term channel statistics such as temporal and spatial correlations and previous channel estimates. In closed-loop training, the user feeds back the best training signal to be sent in the future based on channel prediction and the previously received training signals. With a small amount of feedback from the user to the base station, closed-loop training offers better performance in the data communication phase, especially when the signal-to-noise ratio is low, the number of transmit antennas is large, or prior channel estimates are not accurate at the beginning of the communication setup, all of which would be mostly beneficial for massive MIMO systems.

  9. Regional ground-water evapotranspiration and ground-water budgets, Great Basin, Nevada

    USGS Publications Warehouse

    Nichols, William D.

    2000-01-01

    PART A: Ground-water evapotranspiration data from five sites in Nevada and seven sites in Owens Valley, California, were used to develop equations for estimating ground-water evapotranspiration as a function of phreatophyte plant cover or as a function of the depth to ground water. Equations are given for estimating mean daily seasonal and annual ground-water evapotranspiration. The equations that estimate ground-water evapotranspiration as a function of plant cover can be used to estimate regional-scale ground-water evapotranspiration using vegetation indices derived from satellite data for areas where the depth to ground water is poorly known. Equations that estimate ground-water evapotranspiration as a function of the depth to ground water can be used where the depth to ground water is known, but for which information on plant cover is lacking. PART B: Previous ground-water studies estimated groundwater evapotranspiration by phreatophytes and bare soil in Nevada on the basis of results of field studies published in 1912 and 1932. More recent studies of evapotranspiration by rangeland phreatophytes, using micrometeorological methods as discussed in Chapter A of this report, provide new data on which to base estimates of ground-water evapotranspiration. An approach correlating ground-water evapotranspiration with plant cover is used in conjunction with a modified soil-adjusted vegetation index derived from Landsat data to develop a method for estimating the magnitude and distribution of ground-water evapotranspiration at a regional scale. Large areas of phreatophytes near Duckwater and Lockes in Railroad Valley are believed to subsist on ground water discharged from nearby regional springs. Ground-water evapotranspiration by the Duckwater phreatophytes of about 11,500 acre-feet estimated by the method described in this report compares well with measured discharge of about 13,500 acre-feet from the springs near Duckwater. Measured discharge from springs near Lockes was about 2,400 acre-feet; estimated ground-water evapotranspiration using the proposed method was about 2,450 acre-feet. PART C: Previous estimates of ground-water budgets in Nevada were based on methods and data that now are more than 60 years old. Newer methods, data, and technologies were used in the present study to estimate ground-water recharge from precipitation and ground-water discharge by evapotranspiration by phreatophytes for 16 contiguous valleys in eastern Nevada. Annual ground-water recharge to these valleys was estimated to be about 855,000 acre-feet and annual ground-water evapotranspiration was estimated to be about 790,000 acrefeet; both are a little more than two times greater than previous estimates. The imbalance of recharge over evapotranspiration represents recharge that either (1) leaves the area as interbasin flow or (2) is derived from precipitation that falls on terrain within the topographic boundary of the study area but contributes to discharge from hydrologic systems that lie outside these topographic limits. A vegetation index derived from Landsat-satellite data was used to estimate phreatophyte plant cover on the floors of the 16 valleys. The estimated phreatophyte plant cover then was used to estimate annual ground-water evapotranspiration. Detailed estimates of summer, winter, and annual ground-water evapotranspiration for areas with different ranges of phreatophyte plant cover were prepared for each valley. 
The estimated ground-water discharge from 15 valleys, combined with independent estimates of interbasin ground-water flow into or from a valley, were used to calculate the percentage of recharge derived from precipitation within the topographic boundary of each valley. These percentages then were used to estimate ground-water recharge from precipitation within each valley. Ground-water budgets for all 16 valleys were based on the estimated recharge from precipitation and estimated evapotranspiration. Any imba

  10. BENTHIC MICROBIAL RESPIRATION IN APPALACHIAN MOUNTAIN, PIEDMONT, AND COASTAL PLAINS, STREAMS OF THE EASTERN USA

    EPA Science Inventory

    Our study had two objectives. First, in order to quantify the potential underestimation of community respiration caused by the exclusion of anaerobic processes, we compared benthic microbial respiration measured as O2 consumption with estimates based on DHA. Second, our previous ...

  11. Sources and Loading of Nitrogen to U.S. Estuaries

    EPA Science Inventory

    Previous assessments of land-based nitrogen loading and sources to U.S. estuaries have been limited to estimates for larger systems with watersheds at the scale of 8-digit HUCs and larger, in part due to the coarse resolution of available data, including estuarine watershed bound...

  12. Traffic monitoring using satellite and ground data : preparation for feasibility tests and an operational system, final report.

    DOT National Transportation Integrated Search

    2000-04-01

    Satellite imagery could conceivably be added to data traditionally collected in traffic monitoring programs to allow wide spatial coverage unobtainable from ground-based sensors in a safe, off-the-road environment. Previously, we estimated that 1-m r...

  13. Traffic monitoring using satellite and ground data : preparation for feasibility tests and an operational system, executive summary.

    DOT National Transportation Integrated Search

    2000-04-01

    Satellite imagery could conceivably be added to data traditionally collected in traffic monitoring programs to allow wide spatial coverage unobtainable from ground-based sensors in a safe, off-the-road environment. Previously, we estimated that 1-m r...

  14. 75 FR 71704 - Agency Information Collection Activities; Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... for decisions, and follow-up), recordkeeping, and annual audits. The Rule requires that IDSMs... not include any sensitive personal information, such as any individual's Social Security number, date..., staff has adjusted its previous estimates based on the following two factors. First, the annual audits...

  15. Formation Conditions of Basalts at Gale Crater, Mars from ChemCam Analyses

    NASA Astrophysics Data System (ADS)

    Filiberto, J.; Bridges, J.; Dasgupta, R.; Edwards, P.; Schwenzer, S. P.; Wiens, R. C.

    2015-12-01

    Surface igneous rocks shed light on the chemistry, tectonic, and thermal state of planetary interiors. For the purpose of comparative planetology, therefore, it is critical to fully utilize the compositional diversity of igneous rocks for different terrestrial planets. For Mars, igneous float rocks and conglomerate clasts at Gale Crater, as analyzed by ChemCam [1] using a new calibration [2], have a larger range in chemistry than have been analyzed at any other landing site or within the Martian meteorite collection [3, 4]. These rocks may reflect different conditions of melting within the Martian interior than any previously analyzed basalts. Here we present new formation conditions for basaltic and trachybasalt/dioritic rocks at Gale Crater from ChemCam analyses following previous procedures [5, 6]. We then compare these estimates of basalt formation with previous estimates for rocks from the Noachian (Gusev Crater, Meridiani Planum, and a clast in the NWA 7034 meteorite [5, 6]), Hesperian (surface volcanics [7]), and Amazonian (surface volcanics and shergottites [7-8]), to calculate an average mantle potential temperature for different Martian epochs and investigate how the interior of Mars has changed through time. Finally, we compare Martian mantle potential temperatures with petrologic estimates of cooling for the Earth. Our calculated estimate for the mantle potential temperature (TP) of rocks at Gale Crater is 1450 ± 45 °C, which is within error of previous estimates for Noachian aged rocks [5, 6]. The TP estimates for the Hesperian and Amazonian, based on orbital analyses of the crust [7], are lower in temperature than the estimates for the Noachian. Our results are consistent with simple convective cooling of the Martian interior. [1] Wiens R. et al. (2012) Space Sci Rev 170. 167-227. [2] Anderson R. et al. (2015) LPSC. Abstract #7031. [3] Schmidt M.E. et al. (2014) JGRP 2013JE004481. [4] Sautter V. et al. (2014) JGRP 2013JE004472. [5] Filiberto J. and Dasgupta R. (2011) EPSL 304. 527-537. [6] Filiberto J. and Dasgupta R. (2015) JGRP 2014JE004745. [7] Baratoux D. et al. (2011) Nature 472. 338-341. [8] Musselwhite D.S. et al. (2006) MaPS 41. 1271-1290.

  16. a Hybrid Method in Vegetation Height Estimation Using Polinsar Images of Campaign Biosar

    NASA Astrophysics Data System (ADS)

    Dehnavi, S.; Maghsoudi, Y.

    2015-12-01

    Recently, there has been plenty of research on the retrieval of forest height from PolInSAR data. This paper evaluates a hybrid method for vegetation height estimation based on L-band multi-polarized airborne SAR images. The SAR data used in this paper were collected by the airborne E-SAR system. The objective of this research is first to describe each interferometric cross-correlation as a sum of contributions corresponding to single-bounce, double-bounce, and volume scattering processes. Then, an ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm is implemented to determine the interferometric phase of each local scatterer (ground and canopy). Second, the canopy height is estimated by the phase-differencing method, according to the RVOG (Random Volume Over Ground) concept. Unlike previous decomposition techniques, the applied model-based decomposition method is not limited to a specific vegetation type: the use of a generalized probability density function based on the nth power of a cosine-squared function, characterized by two parameters, makes the method applicable to different vegetation types. Experimental results show the efficiency of the approach for vegetation height estimation in the test site.
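
    Once the ground and canopy phases have been separated (here by ESPRIT), the phase-differencing step itself is a one-liner. The sketch below ignores any extinction-dependent bias correction and uses an assumed vertical wavenumber typical of airborne L-band baselines; all numbers are illustrative.

    ```python
    import numpy as np

    def canopy_height_phase_diff(phi_canopy, phi_ground, kz):
        """Phase-differencing height under the RVOG picture: height is
        the (wrapped) phase separation divided by the vertical wavenumber."""
        dphi = np.angle(np.exp(1j * (phi_canopy - phi_ground)))  # wrap to (-pi, pi]
        return dphi / kz

    # kz ~ 0.15 rad/m is an assumed airborne L-band value.
    print(canopy_height_phase_diff(1.9, -0.4, 0.15), "m")        # -> ~15 m
    ```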

  17. Shuttle Orbiter-like Cargo Carrier on Crew Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Martinovic, Zoran

    2009-01-01

    The following document summarizes the results of a conceptual design study for which the goal was to investigate the possibility of using a crew launch vehicle to deliver the remaining International Space Station elements should the Space Shuttle orbiter not be available to complete that task. Conceptual designs and structural weight estimates for two designs are presented. A previously developed systematic approach that was based on finite-element analysis and structural sizing was used to estimate growth of structural weight from analytical to "as built" conditions.

  18. Estimating the long-term costs of ischemic and hemorrhagic stroke for Australia: new evidence derived from the North East Melbourne Stroke Incidence Study (NEMESIS).

    PubMed

    Cadilhac, Dominique A; Carter, Rob; Thrift, Amanda G; Dewey, Helen M

    2009-03-01

    Stroke is associated with considerable societal costs. Cost-of-illness studies have been undertaken to estimate lifetime costs; most incorporating data up to 12 months after stroke. Costs of stroke, incorporating data collected up to 12 months, have previously been reported from the North East Melbourne Stroke Incidence Study (NEMESIS). NEMESIS now has patient-level resource use data for 5 years. We aimed to recalculate the long-term resource utilization of first-ever stroke patients and compare these to previous estimates obtained using data collected to 12 months. Population structure, life expectancy, and unit prices within the original cost-of-illness models were updated from 1997 to 2004. New Australian stroke survival and recurrence data up to 10 years were incorporated, as well as cross-sectional resource utilization data at 3, 4, and 5 years from NEMESIS. To enable comparisons, 1997 costs were inflated to 2004 prices and discounting was standardized. In 2004, 27 291 ischemic stroke (IS) and 4291 intracerebral hemorrhagic stroke (ICH) first-ever events were estimated. Average annual resource use after 12 months was AU$6022 for IS and AU$3977 for ICH. This is greater than the 1997 estimates for IS (AU$4848) and less than those for ICH (previously AU$10 692). The recalculated average lifetime costs per first-ever case differed for IS (AU$57 106 versus AU$52 855 [1997]), but differed more for ICH (AU$49 995 versus AU$92 308 [1997]). Basing lifetime cost estimates on short-term data overestimated the costs for ICH and underestimated those for IS. Patterns of resource use varied by stroke subtype and, overall, the societal cost impact was large.

  19. Quantifying groundwater discharge through fringing wetlands to estuaries: Seasonal variability, methods comparison, and implications for wetland-estuary exchange

    USGS Publications Warehouse

    Tobias, C.R.; Harvey, J.W.; Anderson, I.C.

    2001-01-01

    Because groundwater discharge along coastal shorelines is often concentrated in zones inhabited by fringing wetlands, accurately estimating discharge is essential for understanding its effect on the function and maintenance of these ecosystems. Most previous estimates of groundwater discharge to coastal wetlands have been temporally limited and have used only a single approach to estimate discharge. Furthermore, groundwater input has not been considered as a major mechanism controlling pore-water flushing. We estimated seasonally varying groundwater discharge into a fringing estuarine wetland using three independent methods (Darcy's Law, salt balance, and Br- tracer). Darcy's Law and the salt balance predicted similar seasonal patterns of discharge, with maxima in spring and minima in early fall, but differed in the estimated magnitude of discharge by two- to fourfold in spring and by 10-fold in fall. Darcy estimates of mean discharge ranged between -8.0 and 80 L m-2 d-1, whereas the salt balance predicted groundwater discharge of 0.6 to 22 L m-2 d-1. The Br- tracer experiment estimated discharge at 16 L m-2 d-1, nearly equal to the salt balance estimate at that time. Based upon the tracer test, pore-water conductivity profiles, and error estimates for the Darcy and salt balance approaches, we concluded that the salt balance provided a more certain estimate of groundwater discharge at high flow (spring). In contrast, the Darcy method provided a more reliable estimate during low flow (fall). Groundwater flushing of pore water in the spring exported solutes to the estuary at rates similar to tidally driven surface exchange seen in previous studies. Based on pore-water turnover times, the groundwater-driven flux of dissolved organic carbon (DOC), dissolved organic nitrogen (DON), and NH4+ to the estuary was 11.9, 1.6, and 1.3 g C or g N m-2 wetland for the 90 d encompassing peak spring discharge. Groundwater-induced flushing of the wetland subsurface therefore represents an important mechanism by which narrow fringing marshes may seasonally relieve salt stress and export material to adjacent water masses.

  20. Improved population estimates through the use of auxiliary information

    USGS Publications Warehouse

    Johnson, D.H.; Ralph, C.J.; Scott, J.M.

    1981-01-01

    When estimating the size of a population of birds, the investigator may have, in addition to an estimator based on a statistical sample, information on one or more auxiliary variables, such as: (1) estimates of the population made on previous occasions, (2) measures of habitat variables associated with the size of the population, and (3) estimates of the population sizes of other species that correlate with the species of interest. Although many studies have described the relationships between each of these kinds of data and the population size to be estimated, very little work has been done to improve the estimator by incorporating such auxiliary information. A statistical methodology termed 'empirical Bayes' seems appropriate to these situations. The potential of empirical Bayes methodology for improved estimation of the population size of the Mallard (Anas platyrhynchos) is explored. In the example considered, three empirical Bayes estimators were found to reduce the error by one-fourth to one-half of that of the usual estimator.
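
    A minimal Fay-Herriot-style sketch of the idea, shrinking direct survey estimates toward a regression on an auxiliary variable. This is not one of the three estimators evaluated in the paper, and all data below are synthetic.

    ```python
    import numpy as np

    def empirical_bayes_estimate(y, v, x):
        """Shrink direct estimates y (sampling variances v) toward a
        regression on auxiliary variable x, weighting by a crude
        moment-based estimate of the between-area model variance."""
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # synthetic part
        synth = X @ beta
        model_var = max(np.var(y - synth) - v.mean(), 0.0)  # crude estimate
        w = model_var / (model_var + v)                     # shrinkage weights
        return w * y + (1 - w) * synth

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 1, 20)                    # auxiliary habitat variable
    truth = 100 + 80 * x + rng.normal(0, 10, 20) # true area populations
    v = np.full(20, 15.0 ** 2)                   # sampling variances
    y = truth + rng.normal(0, 15.0, 20)          # direct survey estimates
    eb = empirical_bayes_estimate(y, v, x)
    # Error ratio relative to the direct estimator; typically < 1.
    print(np.mean((eb - truth) ** 2) / np.mean((y - truth) ** 2))
    ```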

  1. A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.

    PubMed

    Tipton, Elizabeth; Shuster, Jonathan

    2017-10-15

    Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.
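
    A simplified sketch of pooling per-study Bland-Altman summaries into overall limits of agreement via the law of total variance. The paper's robust-variance and repeated-measures adjustments are not reproduced here, and the inputs are made up.

    ```python
    import numpy as np

    def pooled_loa(bias, sd, n):
        """Pool per-study (bias, SD, n) into overall limits of agreement:
        total variance = weighted within-study variance plus the spread
        of study biases around the pooled bias."""
        bias, sd, n = map(np.asarray, (bias, sd, n))
        w = n / n.sum()
        mu = np.sum(w * bias)                              # pooled bias
        var = np.sum(w * (sd ** 2 + (bias - mu) ** 2))     # total variance
        half = 1.96 * np.sqrt(var)
        return mu - half, mu + half

    print(pooled_loa(bias=[0.2, -0.1, 0.4], sd=[1.1, 0.9, 1.3], n=[40, 55, 32]))
    ```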

  2. Terminal Area Productivity Airport Wind Analysis and Chicago O'Hare Model Description

    NASA Technical Reports Server (NTRS)

    Hemm, Robert; Shapiro, Gerald

    1998-01-01

    This paper describes two results from a continuing effort to provide accurate cost-benefit analyses of the NASA Terminal Area Productivity (TAP) program technologies. Previous tasks have developed airport capacity and delay models and completed preliminary cost benefit estimates for TAP technologies at 10 U.S. airports. This task covers two improvements to the capacity and delay models. The first improvement is the completion of a detailed model set for the Chicago O'Hare (ORD) airport. Previous analyses used a more general model to estimate the benefits for ORD. This paper contains a description of the model details with results corresponding to current conditions. The second improvement is the development of specific wind speed and direction criteria for use in the delay models to predict when the Aircraft Vortex Spacing System (AVOSS) will allow use of reduced landing separations. This paper includes a description of the criteria and an estimate of AVOSS utility for 10 airports based on analysis of 35 years of weather data.

  3. The Dynamics of Glomerular Ultrafiltration in the Rat

    PubMed Central

    Brenner, Barry M.; Troy, Julia L.; Daugharty, Terrance M.

    1971-01-01

    Using a unique strain of Wistar rats endowed with glomeruli situated directly on the renal cortical surface, we measured glomerular capillary pressures using servo-nulling micropipette transducer techniques. Pressures in 12 glomerular capillaries from 7 rats averaged 60 cm H2O, or approximately 50% of mean systemic arterial values. Wave form characteristics for these glomerular capillaries were found to be remarkably similar to those of the central aorta. From similarly direct estimates of hydrostatic pressures in proximal tubules, and colloid osmotic pressures in systemic and efferent arteriolar plasmas, the net driving force for ultrafiltration was calculated. The average value of 14 cm H2O is some two-thirds lower than the majority of previous estimates based on indirect techniques. Single nephron GFR (glomerular filtration rate) was also measured in these rats, thereby permitting calculation of the glomerular capillary ultrafiltration coefficient. The average value of 0.044 nl sec−1 cm H2O−1 glomerulus−1 is at least fourfold greater than previous estimates derived from indirect observations. PMID:5097578
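
    The pressure balance behind these numbers can be written compactly; in standard renal-physiology notation (symbols assumed here, not quoted from the paper):

    ```latex
    \bar{P}_{\mathrm{UF}} = P_{\mathrm{GC}} - P_{\mathrm{T}} - \bar{\pi}_{\mathrm{GC}}
      \approx 14\ \mathrm{cm\,H_2O},
    \qquad
    K_f = \frac{\mathrm{SNGFR}}{\bar{P}_{\mathrm{UF}}}
      \approx 0.044\ \mathrm{nl\,s^{-1}\,(cm\,H_2O)^{-1}}\ \text{per glomerulus},
    ```

    where the mean net ultrafiltration pressure is the glomerular capillary hydrostatic pressure minus the proximal tubule pressure and the mean capillary oncotic pressure, and the ultrafiltration coefficient follows from dividing the measured single nephron GFR by that net pressure.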

  4. Effect of visual field presentation on action planning (estimating reach) in children.

    PubMed

    Gabbard, Carl; Cordova, Alberto

    2012-01-01

    In this article, the authors examined the effects of target information presented in different visual fields (lower, upper, central) on estimates of reach via use of motor imagery in children (5-11 years old) and young adults. Results indicated an advantage for estimating reach movements for targets placed in the lower visual field (LoVF), with all groups having greater difficulty in the upper visual field (UpVF) condition, especially 5- and 7-year-olds. Complementing these results was an overall age-related increase in accuracy. Based in part on the equivalence hypothesis suggesting that motor imagery and motor planning and execution are similar, the findings support previous work on executed behaviors showing that there is a LoVF bias for motor skill actions of the hand. Given that previous research hints that the UpVF may be biased toward visuospatial (perceptual) qualities, research in that area and its association with visuomotor processing (LoVF) should be considered.

  5. A non-stationary cost-benefit analysis approach for extreme flood estimation to explore the nexus of 'Risk, Cost and Non-stationarity'

    NASA Astrophysics Data System (ADS)

    Qi, Wei

    2017-11-01

    Cost-benefit analysis is commonly used for engineering planning and design problems in practice. However, previous cost-benefit-based design flood estimation has relied on a stationarity assumption. This study develops a non-stationary cost-benefit-based design flood estimation approach. This approach integrates a non-stationary probability distribution function into cost-benefit analysis, so that the influence of non-stationarity on the expected total cost (including flood damage and construction costs) and on design flood estimation can be quantified. To facilitate design flood selection, a 'Risk-Cost' analysis approach is developed, which reveals the nexus of extreme flood risk, expected total cost, and design life periods. Two basins, with 54-year and 104-year flood records respectively, are utilized to illustrate the application. It is found that the developed approach can effectively reveal changes in expected total cost and extreme floods across different design life periods. In addition, trade-offs are found between extreme flood risk and expected total cost, which reflect increases in cost to mitigate risk. Compared with stationary approaches, which generate only one expected-total-cost curve and therefore a single design flood estimate, the proposed approach generates design flood estimation intervals, and the 'Risk-Cost' approach selects a design flood value from these intervals based on the trade-offs between extreme flood risk and expected total cost. This study provides a new approach towards a better understanding of the influence of non-stationarity on expected total cost and design floods, and could benefit cost-benefit-based non-stationary design flood estimation across the world.
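
    A minimal sketch of the core calculation, assuming (for illustration only) annual flood maxima that follow a Gumbel distribution whose location parameter drifts linearly in time; the cost curve and all numbers are hypothetical, not from the paper:

    ```python
    import math

    def expected_total_cost(design_q, years, mu0, trend, beta,
                            build_cost, damage_cost):
        """Illustrative non-stationary cost-benefit calculation.

        design_q    : candidate design flood (same units as mu0)
        years       : design life in years
        mu0, trend  : Gumbel location at year 0 and its annual drift
        beta        : Gumbel scale parameter
        build_cost  : construction cost as a function of design_q
        damage_cost : loss incurred in any year the design flood is exceeded
        """
        expected_damage = 0.0
        for t in range(years):
            mu_t = mu0 + trend * t
            # Gumbel CDF: F(x) = exp(-exp(-(x - mu)/beta)); exceedance = 1 - F
            p_exceed = 1.0 - math.exp(-math.exp(-(design_q - mu_t) / beta))
            expected_damage += p_exceed * damage_cost
        return build_cost(design_q) + expected_damage

    cost = expected_total_cost(
        design_q=900.0, years=50, mu0=500.0, trend=2.0, beta=80.0,
        build_cost=lambda q: 0.05 * q,   # hypothetical cost curve
        damage_cost=100.0)
    print(f"expected total cost: {cost:.1f}")
    ```

    Sweeping design_q over a range of candidates traces out the expected-total-cost curve for a given design life; repeating this for several design lives yields the kind of estimation intervals the abstract describes.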

  6. Variance to mean ratio, R(t), for Poisson processes on phylogenetic trees.

    PubMed

    Goldman, N

    1994-09-01

    The ratio of expected variance to mean, R(t), of numbers of DNA base substitutions for contemporary sequences related by a "star" phylogeny is widely seen as a measure of the adherence of the sequences' evolution to a Poisson process with a molecular clock, as predicted by the "neutral theory" of molecular evolution under certain conditions. A number of estimators of R(t) have been proposed, all predicted to have mean 1 and distributions based on the chi-squared distribution. Various genes have previously been analyzed and found to have values of R(t) far in excess of 1, calling into question important aspects of the neutral theory. In this paper, I use Monte Carlo simulation to show that the previously suggested means and distributions of estimators of R(t) are highly inaccurate. The analysis is applied to star phylogenies and to general phylogenetic trees, and well-known gene sequences are reanalyzed. For star phylogenies the results show that Kimura's estimators ("The Neutral Theory of Molecular Evolution," Cambridge Univ. Press, Cambridge, 1983) are unsatisfactory for statistical testing of R(t), but confirm the accuracy of Bulmer's correction factor (Genetics 123: 615-619, 1989). For all three nonstar phylogenies studied, attained values of all three estimators of R(t), although larger than 1, are within their true confidence limits under simple Poisson process models. This shows that lineage effects can be responsible for high estimates of R(t), restoring some limited confidence in the molecular clock and showing that the distinction between lineage and molecular clock effects is vital.
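
    A toy version of such a simulation, assuming a star phylogeny with Poisson substitution counts and the naive variance-over-mean estimator (the published estimators and correction factors are not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_r_t(n_seq=10, mean_subs=20.0, n_rep=10_000):
        """Monte Carlo distribution of a variance-to-mean ratio R(t)
        estimator for substitution counts on a star phylogeny under a
        Poisson molecular clock (illustrative sketch)."""
        counts = rng.poisson(mean_subs, size=(n_rep, n_seq))
        # Naive estimator: sample variance over sample mean per replicate.
        return counts.var(axis=1, ddof=1) / counts.mean(axis=1)

    r = simulate_r_t()
    print(f"mean R(t): {r.mean():.3f}")            # close to 1 under the clock
    print(f"95% range: {np.quantile(r, [0.025, 0.975])}")
    ```

    Comparing the simulated quantiles with the nominal chi-squared-based limits shows directly how inaccurate the assumed distribution can be for a given number of sequences.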

  7. Journal: A Review of Some Tracer-Test Design Equations for ...

    EPA Pesticide Factsheets

    Determination of necessary tracer mass, initial sample-collection time, and subsequent sample-collection frequency are the three most difficult aspects to estimate for a proposed tracer test prior to conducting the tracer test. To facilitate tracer-mass estimation, 33 mass-estimation equations are reviewed here, 32 of which were evaluated using previously published tracer-test design examination parameters. Comparison of the results produced a wide range of estimated tracer mass, but no means is available by which one equation may be reasonably selected over the others. Each equation produces a simple approximation for tracer mass. Most of the equations are based primarily on estimates or measurements of discharge, transport distance, and suspected transport times. Although the basic field parameters commonly employed are appropriate for estimating tracer mass, the 33 equations are problematic in that they were all probably based on the original developers' experience in a particular field area and not necessarily on measured hydraulic parameters or solute-transport theory. Suggested sampling frequencies are typically based primarily on probable transport distance, but with little regard to expected travel times. This too is problematic in that it tends to result in false negatives or data aliasing. Simulations from the recently developed efficient hydrologic tracer-test design methodology (EHTD) were compared with those obtained from 32 of the 33 published tracer-

  8. Measurement and Estimation of Riverbed Scour in a Mountain River

    NASA Astrophysics Data System (ADS)

    Song, L. A.; Chan, H. C.; Chen, B. A.

    2016-12-01

    Mountain rivers in Taiwan are steep, with rapid flows. After a structure is installed in a mountain river, scour usually occurs around the structure because of the high energy gradient. Excessive scouring has been reported as one of the main causes of failure of river structures. The scouring disaster related to a flood can be reduced if the riverbed variation can be properly evaluated from the flow conditions. This study measures riverbed scour using an improved "float-out device". Scouring and hydrodynamic data were simultaneously collected in the Mei River, Nantou County, in central Taiwan. Semi-empirical models proposed by previous researchers were used to estimate scour depths from the measured flow characteristics. The differences between the measured and estimated scour depths are discussed. Attempts were then made to improve the estimates by developing a semi-empirical model that predicts riverbed scour from the local field data. The aim is to set up a river-structure safety warning system based on flow conditions. Keywords: scour, model, float-out device

  9. Rank-based estimation in the ℓ1-regularized partly linear model for censored outcomes with application to integrated analyses of clinical predictors and gene expression data.

    PubMed

    Johnson, Brent A

    2009-10-01

    We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative model to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, while the clinical effects are assumed to be nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.

  10. Mammalian cell culture process for monoclonal antibody production: nonlinear modelling and parameter estimation.

    PubMed

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of this kind of process, an optimization-based technique for estimation of kinetic parameters in the model of the mammalian cell culture process is developed. The estimation is achieved as a result of minimizing an error function by a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work by using a particular model of mammalian cell culture, as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
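
    A compact sketch of the estimation loop, with a deliberately simple logistic-growth stand-in for the cell-culture kinetics; the paper's actual model, data, and PSO settings are not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(params, t):
        """Toy stand-in for the culture kinetics: logistic growth with
        rate mu and capacity K (not the paper's actual model)."""
        mu, K = params
        x0 = 0.1
        return K / (1 + (K / x0 - 1) * np.exp(-mu * t))

    def pso(objective, bounds, n_particles=30, n_iter=200,
            w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimizer for parameter estimation."""
        lo, hi = np.array(bounds).T
        dim = len(lo)
        x = rng.uniform(lo, hi, size=(n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            # Velocity mixes inertia, pull toward personal best, and
            # pull toward the swarm's global best.
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    # Synthetic noisy observations of the toy process
    t_obs = np.linspace(0, 10, 20)
    y_obs = model((0.8, 5.0), t_obs) + rng.normal(0, 0.05, t_obs.size)

    err = lambda p: np.sum((model(p, t_obs) - y_obs) ** 2)
    best, best_f = pso(err, bounds=[(0.1, 2.0), (1.0, 10.0)])
    print("estimated (mu, K):", best)
    ```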

  11. Mammalian Cell Culture Process for Monoclonal Antibody Production: Nonlinear Modelling and Parameter Estimation

    PubMed Central

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of this kind of process, an optimization-based technique for estimation of kinetic parameters in the model of the mammalian cell culture process is developed. The estimation is achieved as a result of minimizing an error function by a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work by using a particular model of mammalian cell culture, as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies. PMID:25685797

  12. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    PubMed

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
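
    A minimal sketch of one bootstrap replicate under the stated scheme: seeds are resampled with replacement, and each node's recruits are then resampled recursively. The recruitment structure below is hypothetical:

    ```python
    import random

    def tree_bootstrap(seeds, recruits):
        """One tree-bootstrap replicate for RDS (illustrative sketch).

        seeds    : list of seed respondent ids
        recruits : dict mapping each respondent id to the ids they recruited
        Returns the multiset of respondents in the resampled trees.
        """
        def resample_subtree(node):
            sample = [node]
            kids = recruits.get(node, [])
            if kids:
                # Resample this node's recruits with replacement, then
                # recurse into each (possibly repeated) recruit.
                for child in random.choices(kids, k=len(kids)):
                    sample.extend(resample_subtree(child))
            return sample

        # Resample seeds with replacement, then their recruitment trees.
        out = []
        for seed in random.choices(seeds, k=len(seeds)):
            out.extend(resample_subtree(seed))
        return out

    # Tiny hypothetical recruitment structure with two seeds
    recruits = {"s1": ["a", "b"], "a": ["c"], "s2": ["d"]}
    print(tree_bootstrap(["s1", "s2"], recruits))
    ```

    Repeating this many times and recomputing the RDS estimator on each replicate gives the bootstrap distribution from which confidence intervals are read off; because only the tree structure is resampled, the same replicates serve for any attribute measured on the respondents.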

  13. Time-to-impact sensors in robot vision applications based on the near-sensor image processing concept

    NASA Astrophysics Data System (ADS)

    Åström, Anders; Forchheimer, Robert

    2012-03-01

    Based on the Near-Sensor Image Processing (NSIP) concept and recent results concerning optical flow and Time-to-Impact (TTI) computation with this architecture, we show how these results can be used and extended for robot vision applications. The first case involves estimation of the tilt of an approaching planar surface. The second case concerns the use of two NSIP cameras to estimate absolute distance and speed, similar to a stereo-matching system but without the need to perform image correlations. Returning to a one-camera system, the third case deals with the problem of estimating the shape of the approaching surface. It is shown that the previously developed TTI method not only gives a very compact solution with respect to hardware complexity but also achieves surprisingly high performance.

  14. A direct estimate of evapotranspiration over the Amazon basin and implications for our understanding of carbon and water cycling

    NASA Astrophysics Data System (ADS)

    Swann, A. L. S.; Koven, C.; Lombardozzi, D.; Bonan, G. B.

    2017-12-01

    Evapotranspiration (ET) is a critical term in the surface energy budget as well as the water cycle. There are few direct measurements of ET, and thus the magnitude and variability is poorly constrained at large spatial scales. Estimates of the annual cycle of ET over the Amazon are critical because they influence predictions of the seasonal cycle of carbon fluxes, as well as atmospheric dynamics and circulation. We estimate ET for the Amazon basin using a water budget approach, by differencing rainfall, discharge, and time-varying storage from the Gravity Recovery and Climate Experiment. We find that the climatological annual cycle of ET over the Amazon basin upstream of Óbidos shows suppression of ET during the wet season, and higher ET during the dry season, consistent with flux tower based observations in seasonally dry forests. We also find a statistically significant trend in ET of −1.46 mm/yr over the period 2002-2015. Our direct estimate of the seasonal cycle of ET is largely consistent with previous indirect estimates, including energy budget based approaches, an up-scaled station based estimate, and land surface model estimates, but suggests that suppression of ET during the wet season is underestimated by existing products. We further quantify possible contributors to the phasing of the seasonal cycle and downward time trend using land surface models.
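
    The water-budget calculation itself is short. A sketch with hypothetical monthly basin means (P and Q in mm/month, GRACE-like storage anomalies in mm, and a central difference for the storage change):

    ```python
    import numpy as np

    def water_budget_et(precip, discharge, storage):
        """Basin-mean ET from the water budget, ET = P - Q - dS/dt
        (illustrative sketch; returns one value per interior month)."""
        # Central-difference storage change, mm/month
        ds_dt = (storage[2:] - storage[:-2]) / 2.0
        return precip[1:-1] - discharge[1:-1] - ds_dt

    # Hypothetical monthly basin means (mm)
    p = np.array([300, 280, 250, 180, 120, 90, 80, 100, 150, 220, 260, 290, 310])
    q = np.array([110, 115, 120, 118, 110, 100, 90, 85, 82, 85, 92, 100, 108])
    s = np.array([40, 55, 60, 50, 25, -5, -30, -45, -50, -40, -15, 10, 35])
    print(water_budget_et(p, q, s))
    ```

    The approach is "direct" in the sense that every term on the right-hand side is observed; no surface energy balance or model parameterization enters the estimate.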

  15. Improved theory of time domain reflectometry with variable coaxial cable length for electrical conductivity measurements

    USDA-ARS?s Scientific Manuscript database

    Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...

  16. Emotional Reasoning and Parent-Based Reasoning in Normal Children

    ERIC Educational Resources Information Center

    Morren, Mattijn; Muris, Peter; Kindt, Merel

    2004-01-01

    A previous study by Muris, Merckelbach, and Van Spauwen [1] demonstrated that children display emotional reasoning irrespective of their anxiety levels. That is, when estimating whether a situation is dangerous, children not only rely on objective danger information but also on their "own" anxiety-response. The present study further examined…

  17. Military Base Closures: Observations on Prior and Current BRAC Rounds

    EPA Pesticide Factsheets

    DOD indicates that recommendations from the previous BRAC rounds were implemented within the 6-year period mandated by law. As a result, DOD estimated that it reduced its domestic infrastructure by about 20 percent; about 90 percent of unneeded BRAC property is now available for reuse.

  18. Health Literacy and Happiness: A Community-Based Study

    ERIC Educational Resources Information Center

    Angner, Erik; Miller, Michael J.; Ray, Midge N.; Saag, Kenneth G.; Allison, Jeroan J.

    2010-01-01

    The relationship between health literacy and happiness was explored using a cross-sectional survey of community-dwelling older primary-care patients. Health literacy status was estimated with the following previously validated question: "How confident are you in filling out medical forms by yourself?" Happiness was measured using an adapted…

  19. Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels

    NASA Astrophysics Data System (ADS)

    Fusco, Tilde; Petrella, Angelo; Tanda, Mario

    2009-12-01

    The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed-form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed-form solutions, a cost function is derived whose peaks provide an estimate of the delays. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by taking the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for the AWGN channel and by a previously derived joint estimator for OFDM systems.

  20. Estimation of body temperature rhythm based on heart activity parameters in daily life.

    PubMed

    Sooyoung Sim; Heenam Yoon; Hosuk Ryou; Kwangsuk Park

    2014-01-01

    Body temperature contains valuable health-related information such as circadian rhythm and menstrual cycle. Previous studies have also found that body temperature rhythm in daily life is related to sleep disorders and cognitive performance. However, monitoring body temperature with existing devices during daily life is not easy because they are invasive, intrusive, or expensive. Therefore, technology that can accurately and nonintrusively monitor body temperature is required. In this study, we developed a body temperature estimation model based on heart rate and heart rate variability parameters. Although this work was inspired by previous research, we originally identified that the model can be applied to body temperature monitoring in daily life. We also found that normalized mean heart rate (nMHR) and frequency-domain parameters of heart rate variability showed better performance than other parameters. Although the model should be validated with a larger number of subjects, and additional algorithms are needed to decrease the accumulated estimation error, we verified the usefulness of this approach. Through this study, we expect to be able to monitor core body temperature and circadian rhythm from a simple heart rate monitor, and thereby obtain various health-related information derived from the daily body temperature rhythm.
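
    A minimal sketch of such a model, assuming an ordinary least-squares fit of temperature on two illustrative features (nMHR and an LF/HF ratio); the study's actual feature set, model form, and coefficients are not reproduced:

    ```python
    import numpy as np

    def fit_temperature_model(features, temps):
        """Least-squares fit of body temperature on heart-activity
        features (illustrative sketch only)."""
        X = np.column_stack([np.ones(len(features)), features])
        coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
        return coef

    def predict_temperature(coef, features):
        X = np.column_stack([np.ones(len(features)), features])
        return X @ coef

    # Hypothetical training data: columns are nMHR and an LF/HF ratio
    X_train = np.array([[0.95, 1.8], [1.02, 2.1], [1.10, 2.6],
                        [0.90, 1.5], [1.05, 2.3]])
    t_train = np.array([36.4, 36.6, 36.9, 36.3, 36.7])   # degrees C

    coef = fit_temperature_model(X_train, t_train)
    print(predict_temperature(coef, np.array([[1.00, 2.0]])))
    ```

    Applied to a continuous heart-rate stream, the per-epoch predictions trace an estimated temperature rhythm from which circadian phase can be read off.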

  1. Bayesian population analysis of a washin-washout physiologically based pharmacokinetic model for acetone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moerk, Anna-Karin, E-mail: anna-karin.mork@ki.s; Jonsson, Fredrik; Pharsight, a Certara company, St. Louis, MO

    2009-11-01

    The aim of this study was to derive improved estimates of population variability and uncertainty of physiologically based pharmacokinetic (PBPK) model parameters, especially of those related to the washin-washout behavior of polar volatile substances. This was done by optimizing a previously published washin-washout PBPK model for acetone in a Bayesian framework using Markov chain Monte Carlo simulation. The sensitivity of the model parameters was investigated by creating four different prior sets, where the uncertainty surrounding the population variability of the physiological model parameters was given values corresponding to coefficients of variation of 1%, 25%, 50%, and 100%, respectively. The PBPK model was calibrated to toxicokinetic data from 2 previous studies where 18 volunteers were exposed to 250-550 ppm of acetone at various levels of workload. The updated PBPK model provided a good description of the concentrations in arterial, venous, and exhaled air. The precision of most of the model parameter estimates was improved. New information was particularly gained on the population distribution of the parameters governing the washin-washout effect. The results presented herein provide a good starting point to estimate the target dose of acetone in the working and general populations for risk assessment purposes.

  2. Reconsidering the use of rankings in the valuation of health states: a model for estimating cardinal values from ordinal data

    PubMed Central

    Salomon, Joshua A

    2003-01-01

    Background In survey studies on health-state valuations, ordinal ranking exercises often are used as precursors to other elicitation methods such as the time trade-off (TTO) or standard gamble, but the ranking data have not been used in deriving cardinal valuations. This study reconsiders the role of ordinal ranks in valuing health and introduces a new approach to estimate interval-scaled valuations based on aggregate ranking data. Methods Analyses were undertaken on data from a previously published general population survey study in the United Kingdom that included rankings and TTO values for hypothetical states described using the EQ-5D classification system. The EQ-5D includes five domains (mobility, self-care, usual activities, pain/discomfort and anxiety/depression) with three possible levels on each. Rank data were analysed using a random utility model, operationalized through conditional logit regression. In the statistical model, probabilities of observed rankings were related to the latent utilities of different health states, modeled as a linear function of EQ-5D domain scores, as in previously reported EQ-5D valuation functions. Predicted valuations based on the conditional logit model were compared to observed TTO values for the 42 states in the study and to predictions based on a model estimated directly from the TTO values. Models were evaluated using the intraclass correlation coefficient (ICC) between predictions and mean observations, and the root mean squared error of predictions at the individual level. Results Agreement between predicted valuations from the rank model and observed TTO values was very high, with an ICC of 0.97, only marginally lower than for predictions based on the model estimated directly from TTO values (ICC = 0.99). Individual-level errors were also comparable in the two models, with root mean squared errors of 0.503 and 0.496 for the rank-based and TTO-based predictions, respectively. Conclusions Modeling health-state valuations based on ordinal ranks can provide results that are similar to those obtained from more widely analyzed valuation techniques such as the TTO. The information content in aggregate ranking data is not currently exploited to full advantage. The possibility of estimating cardinal valuations from ordinal ranks could also simplify future data collection dramatically and facilitate wider empirical study of health-state valuations in diverse settings and population groups. PMID:14687419
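
    A sketch of the likelihood at the heart of this approach: a rank-ordered ("exploded") logit, in which each observed ranking decomposes into a sequence of best-of-the-remaining conditional logit choices. The attribute matrix and rankings below are hypothetical, not the EQ-5D data:

    ```python
    import numpy as np

    def rank_ordered_logit_loglik(beta, X, rankings):
        """Log-likelihood of observed rankings under a rank-ordered
        (exploded) logit model (illustrative sketch).

        beta     : coefficients on state attributes
        X        : (n_states, n_features) attribute matrix
        rankings : list of rankings, each an array of state indices
                   ordered from best to worst
        """
        u = X @ beta                       # latent utility of each state
        ll = 0.0
        for rank in rankings:
            remaining = list(rank)
            # The ranking "explodes" into successive choices: at each
            # step the top-ranked state is chosen from those remaining.
            while len(remaining) > 1:
                top = remaining[0]
                ll += u[top] - np.log(np.sum(np.exp(u[remaining])))
                remaining = remaining[1:]
        return ll

    # Three hypothetical states described by two attributes
    X = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
    rankings = [np.array([0, 1, 2]), np.array([0, 2, 1])]
    print(rank_ordered_logit_loglik(np.array([-1.0, -0.5]), X, rankings))
    ```

    Maximizing this log-likelihood over beta (e.g., with scipy.optimize.minimize on its negative) recovers a valuation function from ranks alone, which can then be rescaled for comparison with TTO-based values.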

  3. An estimate of periodontal treatment needs in the U.S. based on epidemiologic data.

    PubMed

    Oliver, R C; Brown, L J; Löe, H

    1989-07-01

    It has generally been assumed, based on previous epidemiologic and utilization studies as well as the increasing elderly population, that there would be an increasing need for periodontal treatment. Analysis of a more recent household epidemiologic survey conducted in 1981 indicates that the need for treatment of periodontitis is less than previous estimates. These epidemiologic data have been translated into treatment needs through a series of conversion rules derived from previous studies and current patterns of treatment, and applied to the 1985 U.S. population. The total periodontal services needed for scaling, surgery, and prophylaxes would require 120 to 133 million hours and $5 to $6 billion annually if the total population were treated for periodontitis over a 4-year period. Only 11% of the total hours needed would be for scaling and surgery whereas 89% would be needed for prophylaxes. Expenditures for periodontal treatment total approximately 10% of the amount being spent on dental care in 1985. On the basis of these data, it seems unlikely that there will be a substantial increase in the need for periodontal treatment in a growing and aging U.S. population. These figures represent the upper limits of treatment need and are reduced by factoring in current utilization of periodontal treatment.

  4. Reported Energy and Cost Savings from the DOE ESPC Program: FY 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slattery, Bob S.

    2015-03-01

    The objective of this work was to determine the realization rate of energy and cost savings from the Department of Energy’s Energy Savings Performance Contract (ESPC) program based on information reported by the energy services companies (ESCOs) that are carrying out ESPC projects at federal sites. Information was extracted from 156 Measurement and Verification (M&V) reports to determine reported, estimated, and guaranteed cost savings and reported and estimated energy savings for the previous contract year. Because the quality of the reports varied, it was not possible to determine all of these parameters for each project. For all 156 projects, there was sufficient information to compare estimated, reported, and guaranteed cost savings. For this group, the total estimated cost savings for the reporting periods addressed were $210.6 million, total reported cost savings were $215.1 million, and total guaranteed cost savings were $204.5 million. This means that on average: ESPC contractors guaranteed 97% of the estimated cost savings; projects reported achieving 102% of the estimated cost savings; and projects reported achieving 105% of the guaranteed cost savings. For 155 of the projects examined, there was sufficient information to compare estimated and reported energy savings. On the basis of site energy, estimated savings for those projects for the previous year totaled 11.938 million MMBtu, and reported savings were 12.138 million MMBtu, 101.7% of the estimated energy savings. On the basis of source energy, total estimated energy savings for the 155 projects were 19.052 million MMBtu, and reported savings were 19.516 million MMBtu, 102.4% of the estimated energy savings.
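
    The reported percentages follow directly from the dollar totals; a quick check:

    ```python
    estimated = 210.6    # $ millions, estimated cost savings
    reported = 215.1     # $ millions, reported cost savings
    guaranteed = 204.5   # $ millions, guaranteed cost savings

    print(f"guaranteed / estimated: {guaranteed / estimated:.0%}")   # ~97%
    print(f"reported / estimated:   {reported / estimated:.0%}")     # ~102%
    print(f"reported / guaranteed:  {reported / guaranteed:.0%}")    # ~105%
    ```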

  5. Density-based global sensitivity analysis of sheet-flow travel time: Kinematic wave-based formulations

    NASA Astrophysics Data System (ADS)

    Hosseini, Seiyed Mossa; Ataie-Ashtiani, Behzad; Simmons, Craig T.

    2018-04-01

    Despite advancements in developing physics-based formulations to estimate the sheet-flow travel time (tSHF), the quantification of the relative impacts of influential parameters on tSHF has not previously been considered. In this study, a brief review of the physics-based formulations to estimate tSHF is provided, covering kinematic wave (K-W) theory in combination with Manning's roughness (K-M) and with the Darcy-Weisbach friction formula (K-D) over single and multiple planes. Then, the relative significance of input parameters to the developed approaches is quantified by a density-based global sensitivity analysis (GSA). The performance of K-M assuming zero-upstream and uniform flow depth (so-called K-M1 and K-M2) and of the K-D formula in estimating tSHF over a single plane surface was assessed using several sets of experimental data collected from previous studies. The suitability of the developed models for estimating tSHF over multiple planes under the temporal rainfall distributions of the Natural Resources Conservation Service, NRCS (Types I, Ia, II, and III), is scrutinized through several real-world examples. The results demonstrate that the main controlling parameters of tSHF in the K-D and K-M formulae are the length of the surface plane (mean sensitivity index T̂i = 0.72) and the flow resistance (mean T̂i = 0.52), respectively. Conversely, the flow temperature and the initial abstraction ratio of rainfall have the lowest influence on tSHF (mean T̂i of 0.11 and 0.12, respectively). The significant role of the flow regime in the estimation of tSHF over a single plane and a cascade of planes is also demonstrated. Results reveal that the K-D formulation provides more precise tSHF over the single plane surface, with an average percentage error (APE) of 9.23% (the APEs for the K-M1 and K-M2 formulae were 13.8% and 36.33%, respectively). The superiority of the K-D formulation in estimating tSHF is due to its incorporation of effects from different flow regimes as flow moves downgradient, which can be driven by one or more factors including high excess rainfall intensities, low flow resistance, high degrees of imperviousness, long surfaces, steep slopes, and rainfall distributions of NRCS Type I, II, or III.

  6. Spatiotemporal requirements of the Hainan gibbon: Does home range constrain recovery of the world's rarest ape?

    PubMed

    Bryant, Jessica V; Zeng, Xingyuan; Hong, Xiaojiang; Chatterjee, Helen J; Turvey, Samuel T

    2017-03-01

    Conservation management requires an evidence-based approach, as uninformed decisions can signify the difference between species recovery and loss. The Hainan gibbon, the world's rarest ape, reportedly exploits the largest home range of any gibbon species, with these apparently large spatial requirements potentially limiting population recovery. However, previous home range assessments rarely reported survey methods, effort, or analytical approaches, hindering critical evaluation of estimate reliability. For extremely rare species where data collection is challenging, it also is unclear what impact such limitations have on estimating home range requirements. We re-evaluated Hainan gibbon spatial ecology using 75 hr of observations from 35 contact days over 93 field-days across dry (November 2010-February 2011) and wet (June 2011-September 2011) seasons. We calculated home range area for three social groups (N = 21 individuals) across the sampling period, seasonal estimates for one group (based on 24 days of observation; 12 days per season), and between-group home range overlap using multiple approaches (Minimum Convex Polygon, Kernel Density Estimation, Local Convex Hull, Brownian Bridge Movement Model), and assessed estimate reliability and representativeness using three approaches (Incremental Area Analysis, spatial concordance, and exclusion of expected holes). We estimated a yearly home range of 1-2 km2, with 1.49 km2 closest to the median of all estimates. Although Hainan gibbon spatial requirements are relatively large for gibbons, our new estimates are smaller than previous estimates used to explain the species' limited recovery, suggesting that habitat availability may be less important in limiting population growth. We argue that other ecological, genetic, and/or anthropogenic factors are more likely to constrain Hainan gibbon recovery, and conservation attention should focus on elucidating and managing these factors. Re-evaluation reveals Hainan gibbon home range as c. 1-2 km2. Hainan gibbon home range is, therefore, similar to other Nomascus gibbons. Limited data for extremely rare species does not necessarily prevent derivation of robust home range estimates. © 2016 Wiley Periodicals, Inc.
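
    Of the estimators named, the Minimum Convex Polygon is simple enough to sketch directly (the fixes below are hypothetical; KDE, LoCoH, and Brownian bridge estimators require considerably more machinery):

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    def mcp_area_km2(fixes_m):
        """Minimum Convex Polygon home-range area from (x, y) location
        fixes in metres (illustrative sketch of one of the estimators
        named in the abstract)."""
        hull = ConvexHull(fixes_m)
        # For 2-D input, ConvexHull.volume is the enclosed area.
        return hull.volume / 1e6

    # Hypothetical gibbon fixes (metres in a local projection)
    rng = np.random.default_rng(1)
    fixes = rng.normal(0, 400, size=(50, 2))
    print(f"MCP area: {mcp_area_km2(fixes):.2f} km^2")
    ```

    Comparing several such estimators on the same fixes, as the study does, guards against any single method's known biases (MCP, for instance, is sensitive to outlying fixes).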

  7. Use of Mobile Device Data To Better Estimate Dynamic Population Size for Wastewater-Based Epidemiology.

    PubMed

    Thomas, Kevin V; Amador, Arturo; Baz-Lomba, Jose Antonio; Reid, Malcolm

    2017-10-03

    Wastewater-based epidemiology is an established approach for quantifying community drug use and has recently been applied to estimate population exposure to contaminants such as pesticides and phthalate plasticizers. A major source of uncertainty in the population weighted biomarker loads generated is related to estimating the number of people present in a sewer catchment at the time of sample collection. Here, the population quantified from mobile device-based population activity patterns was used to provide dynamic population normalized loads of illicit drugs and pharmaceuticals during a known period of high net fluctuation in the catchment population. Mobile device-based population activity patterns have for the first time quantified the high degree of intraday, week, and month variability within a specific sewer catchment. Dynamic population normalization showed that per capita pharmaceutical use remained unchanged during the period when static normalization would have indicated an average reduction of up to 31%. Per capita illicit drug use increased significantly during the monitoring period, an observation that was only possible to measure using dynamic population normalization. The study quantitatively confirms previous assessments that population estimates can account for uncertainties of up to 55% in static normalized data. Mobile device-based population activity patterns allow for dynamic normalization that yields much improved temporal and spatial trend analysis.
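
    The normalization step itself is a one-liner; what changes between static and dynamic approaches is the denominator. All numbers below are hypothetical:

    ```python
    def per_capita_load(mass_load_mg_day, population):
        """Population-normalized biomarker load, mg/day per 1000 people."""
        return 1000.0 * mass_load_mg_day / population

    # The same measured load interpreted with static (census) versus
    # dynamic (mobile-device-derived) population counts for the day
    load = 5200.0            # hypothetical biomarker load, mg/day
    static_pop = 50_000      # census-based catchment population
    dynamic_pop = 72_000     # hypothetical device-derived daytime population

    print(per_capita_load(load, static_pop))    # inflated per-capita figure
    print(per_capita_load(load, dynamic_pop))
    ```

    When the true catchment population swells (holidays, festivals, commuting), the static denominator understates the number of contributors and so overstates per-capita use, which is exactly the bias the dynamic approach removes.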

  8. Shipborne LF-VLF oceanic lightning observations and modeling

    NASA Astrophysics Data System (ADS)

    Zoghzoghy, F. G.; Cohen, M. B.; Said, R. K.; Lehtinen, N. G.; Inan, U. S.

    2015-10-01

    Approximately 90% of natural lightning occurs over land, but recent observations, using Global Lightning Dataset (GLD360) geolocation peak current estimates and satellite optical data, suggested that cloud-to-ground flashes are on average stronger over the ocean. We present initial statistics from a novel experiment using a Low Frequency (LF) magnetic field receiver system installed aboard the National Oceanic and Atmospheric Administration (NOAA) Ronald H. Brown research vessel that allowed the detection of impulsive radio emissions from deep-oceanic discharges at short distances. Thousands of LF waveforms were recorded, facilitating the comparison of oceanic waveforms to their land counterparts. A computationally efficient electromagnetic radiation model that accounts for propagation over lossy and curved ground is constructed and compared with previously published models. We include the effects of Earth curvature on LF ground wave propagation and quantify the effects of channel-base current risetime, channel-base current falltime, and return stroke speed on the radiated LF waveforms observed at a given distance. We compare simulation results to data and conclude that previously reported larger GLD360 peak current estimates over the ocean are unlikely to fully result from differences in channel-base current risetime, falltime, or return stroke speed between ocean and land flashes.

  9. Density estimation in aerial images of large crowds for automatic people counting

    NASA Astrophysics Data System (ADS)

    Herrmann, Christian; Metzler, Juergen

    2013-05-01

    Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system able to count crowds of hundreds or thousands of people in aerial images of demonstrations or similar events. That system requires major user interaction to segment the image. Our principal aim is to reduce this manual interaction. To achieve this, we propose a new and automatic system. Besides counting the people in large crowds, the system yields the positions of people, allowing a plausibility check by a human operator. To automate the people counting, we use crowd density estimation. The determination of crowd density is based on several features, such as edge intensity and spatial frequency, which indicate the density and discriminate between a crowd and other image regions like buildings, bushes, or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. The performance gain of our new system is measured on a test set of aerial images showing large crowds of up to 12,000 people.

  10. National Estimates of Recovery-Remission From Serious Mental Illness.

    PubMed

    Salzer, Mark S; Brusilovskiy, Eugene; Townley, Greg

    2018-05-01

    A broad range of estimates of recovery among previously institutionalized persons has been reported, but no current, community-based national estimate of recovery from serious mental illness exists. This study reports recovery rate results, based on a remission definition, and explores related demographic factors. A national, geographically stratified, and random cross-sectional survey conducted from September 2014 to December 2015 resulted in responses from more than 41,000 individuals. Lifetime prevalence of serious mental illness was assessed by asking about receipt of a diagnosis (major depression, bipolar disorder, manic depression, and schizophrenia or schizoaffective disorder) and hospitalization and impairment associated with the diagnosis. Recovery was determined by asking about impairments over the past 12 months. Almost 17% reported receiving one of the diagnoses in their lifetime, 6% had a lifetime serious mental illness, and nearly 4% continued to experience interference associated with serious mental illness. One-third of those with a lifetime serious mental illness reported having been in remission for at least the past 12 months. Recovery rates were low until age 32 and then progressively increased. Lifetime estimates of diagnosed illness and current prevalence of serious mental illness are consistent with previous research. Results indicate that recovery is possible and is associated with age. Further research is needed to understand factors that promote recovery, and sustained evaluation efforts using similar parsimonious approaches may be useful in conducting timely assessments of national and local mental health policies.

  11. Mathematical modeling improves EC50 estimations from classical dose-response curves.

    PubMed

    Nyman, Elin; Lindgren, Isa; Lövfors, William; Lundengård, Karin; Cervin, Ida; Sjöström, Theresia Arbring; Altimiras, Jordi; Cedersund, Gunnar

    2015-03-01

    The β-adrenergic response is impaired in failing hearts. When studying β-adrenergic function in vitro, the half-maximal effective concentration (EC50) is an important measure of ligand response. We previously measured the in vitro contraction force response of chicken heart tissue to increasing concentrations of adrenaline, and observed a decreasing response at high concentrations. The classical interpretation of such data is to assume a maximal response before the decrease, and to fit a sigmoid curve to the remaining data to determine EC50. Instead, we have applied a mathematical modeling approach to interpret the full dose-response curve in a new way. The developed model predicts a non-steady-state caused by a short resting time between increased concentrations of agonist, which affects the dose-response characterization. Therefore, an improved estimate of EC50 may be calculated using steady-state simulations of the model. The model-based estimation of EC50 is further refined using additional time-resolved data to decrease the uncertainty of the prediction. The resulting model-based EC50 (180-525 nM) is higher than the classically interpreted EC50 (46-191 nM). Mathematical modeling thus makes it possible to re-interpret previously obtained datasets, and to make accurate estimates of EC50 even when steady-state measurements are not experimentally feasible. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database, and may be accessed at http://jjj.bio.vu.nl/database/nyman. © 2015 FEBS.
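
    For contrast with the paper's model-based approach, the classical interpretation amounts to fitting a sigmoid (Hill) curve to the dose-response data; a sketch with hypothetical responses:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, bottom, top, ec50, n):
        """Four-parameter Hill (sigmoid) dose-response curve."""
        return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

    # Hypothetical contraction-force responses to adrenaline (nM)
    conc = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], dtype=float)
    resp = np.array([0.05, 0.08, 0.18, 0.42, 0.71, 0.88, 0.95, 0.97])

    popt, _ = curve_fit(hill, conc, resp, p0=[0.0, 1.0, 100.0, 1.0])
    print(f"classical sigmoid-fit EC50: {popt[2]:.0f} nM")
    ```

    The paper's point is that a non-steady-state protocol biases exactly this kind of fit; simulating the mechanistic ODE model to steady state before reading off EC50 removes that bias.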

  12. Compiling Techniques for East Antarctic Ice Velocity Mapping Based on Historical Optical Imagery

    NASA Astrophysics Data System (ADS)

    Li, X.; Li, R.; Qiao, G.; Cheng, Y.; Ye, W.; Gao, T.; Huang, Y.; Tian, Y.; Tong, X.

    2018-05-01

    Ice flow velocity over long time series in East Antarctica plays a vital role in estimating and predicting the mass balance of the Antarctic Ice Sheet and its contribution to global sea level rise. However, no large-scale Antarctic ice velocity product is available that shows the East Antarctic ice flow velocity pattern before the 1990s. We proposed three methods for estimating surface velocity in East Antarctica from ARGON KH-5 and LANDSAT imagery: parallax decomposition, grid-based NCC image matching, and feature- and grid-based image matching with constraints, showing the feasibility of using historical optical imagery to recover Antarctic ice motion. Building on these previous studies, we present in this paper a systematic method for developing an ice surface velocity product for the entire East Antarctica from the 1960s to the 1980s.
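
    Of the three methods, grid-based NCC matching is the easiest to sketch. The fragment below performs exhaustive normalized cross-correlation template matching only; orthorectification, co-registration, and outlier filtering, which dominate the real workflow, are omitted:

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized patches."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    def match_patch(template, search, step=1):
        """Grid-based NCC matching: slide the template over the search
        window and return the offset with the highest correlation."""
        th, tw = template.shape
        best, best_off = -2.0, (0, 0)
        for dy in range(0, search.shape[0] - th + 1, step):
            for dx in range(0, search.shape[1] - tw + 1, step):
                score = ncc(template, search[dy:dy + th, dx:dx + tw])
                if score > best:
                    best, best_off = score, (dy, dx)
        return best_off, best

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    template = img[20:36, 24:40]          # known true offset (20, 24)
    print(match_patch(template, img))
    ```

    Dividing the matched displacement (in ground units) by the time separation between the two image epochs yields the velocity estimate at that grid point.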

  13. Emission estimates of selected volatile organic compounds from tropical savanna burning in northern Australia

    NASA Astrophysics Data System (ADS)

    Shirai, T.; Blake, D. R.; Meinardi, S.; Rowland, F. S.; Russell-Smith, J.; Edwards, A.; Kondo, Y.; Koike, M.; Kita, K.; Machida, T.; Takegawa, N.; Nishi, N.; Kawakami, S.; Ogawa, T.

    2003-02-01

    Here we present measurements of a range of carbon-based compounds: carbon dioxide (CO2), carbon monoxide (CO), methane (CH4), nonmethane hydrocarbons (NMHCs), methyl halides, and dimethyl sulfide (DMS) emitted by Australian savanna fires studied as part of the Biomass Burning and Lightning Experiment (BIBLE) phase B aircraft campaign, which took place during the local late dry season (28 August to 13 September 1999). Significant enhancements of short-lived NMHCs were observed in the boundary layer (BL) over the region of intensive fires and indicate recent emissions for which the mean transport time was estimated to be about 9 hours. Emission ratios relative to CO were determined for 20 NMHCs, 3 methyl halides, DMS, and CH4 based on the BL enhancements in the source region. Tight correlations with CO were obtained for most of those compounds, indicating the homogeneity of the local savanna source. The emission ratios were in good agreement with some previous measurements of savanna fires for stable compounds but indicated the decay of emission ratios during transport for several reactive compounds. Based on the observed emission ratios, emission factors were derived and compared to previous studies. While emission factors (g species/kg dry matter) of CO2 varied little according to the vegetation types, those of CO and NMHCs varied significantly. Higher combustion efficiency and a lower emission factor for methane in this study, compared to forest fires, agreed well with results for savanna fires in other tropical regions. The amount of biomass burned was estimated by modeling methods using available satellite data, and showed that 1999 was an above-average year for savanna burning. The gross emissions of the trace gases from Australian savanna fires were estimated.

  14. Testing and comparison of three frequency-based magnitude estimating parameters for earthquake early warning based events in the Yunnan region, China in 2014

    NASA Astrophysics Data System (ADS)

    Zhang, Jianjing; Li, Hongjie

    2018-06-01

    To mitigate potential seismic disasters in the Yunnan region, China, building up suitable magnitude estimation scaling laws for an earthquake early warning system (EEWS) is in high demand. In this paper, the records from the main and after-shocks of the Yingjiang earthquake (MW 5.9), the Ludian earthquake (MW 6.2) and the Jinggu earthquake (MW 6.1), which occurred in Yunnan in 2014, were used to develop three estimators for earthquake magnitude: the maximum of the predominant period (τp,max), the characteristic period (τc), and the log-average period (τlog). The correlations between these three frequency-based parameters and catalog magnitudes were developed, compared and evaluated against previous studies. The amplitude and period of seismic waves might be amplified in the Ludian mountain-canyon area by multiple reflections and resonance, leading to excessive values of the calculated parameters, which are consistent with Sichuan's scaling. As a result, τlog was best correlated with magnitude and τc had the highest slope of the regression equation, while τp,max performed worst, with large scatter and less sensitivity to changes in magnitude. No evident saturation occurred in the cases of M 6.1 and M 6.2 in this study. Even though τc and τlog performed similarly and both reflect the size of the earthquake well, τlog has slightly fewer prediction errors for small earthquakes (M ≤ 4.5), which was also observed by previous research. Our work offers an insight into the feasibility of an EEWS in Yunnan, China, and shows that it is necessary to build up an appropriate scaling law suitable for the warning region.
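
    For reference, τc is commonly computed from the first seconds of P-wave displacement and velocity as τc = 2π·sqrt(∫u² dt / ∫u̇² dt); a sketch with a synthetic trace (assuming pre-filtered input, as real EEW processing requires):

    ```python
    import numpy as np

    def tau_c(displacement, velocity):
        """Characteristic period from an initial P-wave window,
        tau_c = 2*pi*sqrt(integral(u^2 dt) / integral(u_dot^2 dt));
        with a common sample interval the dt factors cancel."""
        r = np.sum(velocity**2) / np.sum(displacement**2)
        return 2.0 * np.pi / np.sqrt(r)

    dt = 0.01
    t = np.arange(0.0, 3.0, dt)
    u = np.sin(2 * np.pi * t / 1.5)          # synthetic 1.5 s period motion
    v = np.gradient(u, dt)
    print(f"tau_c = {tau_c(u, v):.2f} s")    # ~1.5 s
    ```

    Regressing such period measurements against catalog magnitudes over many events is what produces the region-specific scaling laws the abstract compares.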

  15. Global Kalman filter approaches to estimate absolute angles of lower limb segments.

    PubMed

    Nogueira, Samuel L; Lambrecht, Stefan; Inoue, Roberto S; Bortole, Magdo; Montagnoli, Arlindo N; Moreno, Juan C; Rocon, Eduardo; Terra, Marco H; Siqueira, Adriano A G; Pons, Jose L

    2017-05-16

    In this paper we propose the use of global Kalman filters (KFs) to estimate absolute angles of lower limb segments. Standard approaches adopt KFs to improve the performance of inertial sensors based on individual link configurations. In consequence, for a multi-body system like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in other link angle estimations (e.g., foot). Global KF approaches, on the other hand, correlate the collective contribution of all signals from lower limb segments observed in the state-space model through the filtering process. We present a novel global KF (matricial global KF) relying only on inertial sensor data, and validate both this KF and a previously presented global KF (Markov Jump Linear Systems, MJLS-based KF), which fuses data from inertial sensors and encoders from an exoskeleton. We furthermore compare both methods to the commonly used local KF. The results indicate that the global KFs performed significantly better than the local KF, with an average root mean square error (RMSE) of respectively 0.942° for the MJLS-based KF, 1.167° for the matricial global KF, and 1.202° for the local KFs. Including the data from the exoskeleton encoders also resulted in a significant increase in performance. The results indicate that the current practice of using KFs based on local models is suboptimal. Both the presented KF based on inertial sensor data, as well as our previously presented global approach fusing inertial sensor data with data from exoskeleton encoders, were superior to local KFs. We therefore recommend to use global KFs for gait analysis and exoskeleton control.

  16. Combining heuristic and statistical techniques in landslide hazard assessments

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Schwendtner, Barbara; Quan, Byron; Nadim, Farrokh; Diaz, Manuel; Molina, Giovanni

    2014-05-01

    As a contribution to the Global Assessment Report 2013 - GAR2013, coordinated by the United Nations International Strategy for Disaster Reduction - UNISDR, a drill-down exercise for landslide hazard assessment was carried out by entering the results of both heuristic and statistical techniques into a new but simple combination rule. The data available for this evaluation included landslide inventories, both historical and event-based. In addition to the application of a heuristic method used in the previous editions of GAR, the availability of inventories motivated the use of statistical methods. The heuristic technique is largely based on the Mora & Vahrson method, which estimates hazard as the product of susceptibility and triggering factors, where classes are weighted based on expert judgment and experience. Two statistical methods were also applied: the landslide index method, which estimates weights of the classes for the susceptibility and triggering factors based on the evidence provided by the density of landslides in each class of the factors; and the weights of evidence method, which extends the previous technique to include both positive and negative evidence of landslide occurrence in the estimation of weights for the classes. One key aspect during the hazard evaluation was the decision on the methodology to be chosen for the final assessment. Instead of opting for a single methodology, it was decided to combine the results of the three implemented techniques using a combination rule based on a normalization of the results of each method. The hazard evaluation was performed for both earthquake- and rainfall-induced landslides. The country chosen for the drill-down exercise was El Salvador. The results indicate that highest hazard levels are concentrated along the central volcanic chain and at the centre of the northern mountains.
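
    Of the statistical techniques named, the landslide index weights are straightforward to compute from an inventory; a sketch with hypothetical slope classes:

    ```python
    import numpy as np

    def landslide_index_weights(landslides_per_class, area_per_class):
        """Landslide index method: weight each factor class by the log
        of its landslide density relative to the overall map density
        (classes with zero landslides would need special handling)."""
        d_class = landslides_per_class / area_per_class
        d_map = landslides_per_class.sum() / area_per_class.sum()
        return np.log(d_class / d_map)

    # Hypothetical slope classes: landslide counts and class areas (km^2)
    n = np.array([5, 40, 120, 60])
    a = np.array([500, 400, 300, 100])
    print(landslide_index_weights(n, a))
    ```

    Positive weights flag classes where landslides are over-represented relative to the map as a whole; summing the weights of the classes present in each map unit gives its susceptibility score, which can then be normalized and combined with the heuristic result as the abstract describes.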

  17. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance.

    PubMed

    Zheng, Binqi; Fu, Pengcheng; Li, Baoqing; Yuan, Xiaobing

    2018-03-07

    The Unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when there is a mismatch between the noise distributions assumed a priori by users and the actual ones in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep, a standard UKF is implemented first to obtain the state estimates using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to estimate the current noise covariances of the process and the measurement, respectively. By utilizing a weighting factor, the filter combines the previous noise covariance matrices with these estimates to form the new noise covariance matrices. Finally, the state estimates are corrected according to the new noise covariance matrices and the previous state estimates. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves a better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, as demonstrated by the simulation results.
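
    A sketch of the adaptation step only, using the standard innovation- and residual-based covariance estimates for a linear measurement model; the paper applies this within a UKF via sigma points, and all matrices below are hypothetical:

    ```python
    import numpy as np

    def adapt_noise_covariances(innovations, residuals, H, P_post, K):
        """Innovation/residual-based estimates of Q and R, in the
        spirit of the RAUKF adaptation step (sketch for z = H x).

        innovations : recent innovations z - H x_prior, shape (m, dim_z)
        residuals   : recent residuals  z - H x_post,  shape (m, dim_z)
        K           : current Kalman gain
        """
        C_inn = innovations.T @ innovations / len(innovations)
        C_res = residuals.T @ residuals / len(residuals)
        Q_hat = K @ C_inn @ K.T               # innovation-based process noise
        R_hat = C_res + H @ P_post @ H.T      # residual-based measurement noise
        return Q_hat, R_hat

    def blend(old, new, alpha=0.3):
        """Weighting factor combining the previous covariance with the
        fresh estimate, as the abstract describes."""
        return (1.0 - alpha) * old + alpha * new

    # Toy usage: scalar state observed directly (H = [[1]])
    rng = np.random.default_rng(0)
    inn = rng.normal(0, 1.5, size=(50, 1))    # hypothetical stored innovations
    res = rng.normal(0, 0.8, size=(50, 1))    # hypothetical stored residuals
    H = np.array([[1.0]]); P_post = np.array([[0.2]]); K = np.array([[0.5]])
    Q_hat, R_hat = adapt_noise_covariances(inn, res, H, P_post, K)
    print(blend(np.array([[0.1]]), Q_hat), blend(np.array([[1.0]]), R_hat))
    ```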

  18. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance

    PubMed Central

    Zheng, Binqi; Yuan, Xiaobing

    2018-01-01

    The Unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when there is a mismatch between the noise distributions assumed a priori by users and the actual ones in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep, a standard UKF is implemented first to obtain the state estimates using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to estimate the current noise covariances of the process and the measurement, respectively. By utilizing a weighting factor, the filter combines the previous noise covariance matrices with these estimates to form the new noise covariance matrices. Finally, the state estimates are corrected according to the new noise covariance matrices and the previous state estimates. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves a better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, as demonstrated by the simulation results. PMID:29518960

  19. Phase retrieval in digital speckle pattern interferometry by application of two-dimensional active contours called snakes.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2006-03-20

    We propose a novel approach to retrieving the phase map coded by a single closed-fringe pattern in digital speckle pattern interferometry, which is based on the estimation of the local sign of the quadrature component. We obtain the estimate by calculating the local orientation of the fringes that have previously been denoised by a weighted smoothing spline method. We carry out the procedure of sign estimation by determining the local abrupt jumps of size pi in the orientation field of the fringes and by segmenting the regions defined by these jumps. The segmentation method is based on the application of two-dimensional active contours (snakes), with which one can also estimate absent jumps, i.e., those that cannot be detected from the local orientation of the fringes. The performance of the proposed phase-retrieval technique is evaluated for synthetic and experimental fringes and compared with the results obtained with the spiral-phase- and Fourier-transform methods.

  20. Estimation of limb adiposity by bioimpedance spectroscopy in lymphoedema

    NASA Astrophysics Data System (ADS)

    Ward, L. C.; Essex, T.; Gaw, R.; Czerniec, S.; Dylke, E.; Abell, B.; Kilbreath, S. L.

    2013-04-01

    Lymphoedema is a chronic debilitating condition that may occur in approximately 25% of women treated for breast cancer. As the condition progresses, accumulated lymph fluid becomes fibrotic with infiltration of adipose tissue. Bioelectrical impedance spectroscopy is the preferred method for early detection of lymphoedema based on the measurement of impedance of extracellular fluid. The present study assessed whether these impedance measurements could also be used to estimate the adipose tissue content of the arm based on a model previously used to predict whole body composition. Estimates of arm adipose tissue in a cohort of women with lymphoedema were found to be highly correlated (r > 0.82) with measurements of adipose tissue obtained using the reference method of dual energy X-ray absorptiometry. Paired t-tests confirmed that there was no significant difference between the adipose tissue volumes obtained by the two methods. These results support the view that the method shows promise for the estimation of arm adiposity in lymphoedema.

  1. On the angular error of intensity vector based direction of arrival estimation in reverberant sound fields.

    PubMed

    Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon

    2010-10-01

    An acoustic vector sensor provides measurements of both the pressure and particle velocity of a sound field in which it is placed. These measurements are vectorial in nature and can be used for the purpose of source localization. A straightforward approach towards determining the direction of arrival (DOA) utilizes the acoustic intensity vector, which is the product of pressure and particle velocity. The accuracy of an intensity vector based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field, and induce an estimation bias which is dependent upon the room impulse responses (RIRs). The limited knowledge available pertaining to the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
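
    The estimator whose bias the paper analyses can itself be sketched in a few lines: time-average the product of pressure and particle velocity and take the angle of the resulting intensity vector. The synthetic plane-wave signals and noise level below are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    theta_true = np.deg2rad(40.0)                 # true source azimuth
    t = np.arange(4800) / 48000.0                 # 0.1 s at 48 kHz
    s = np.sin(2 * np.pi * 500 * t)               # plane-wave waveform

    p = s + 0.1 * rng.normal(size=t.size)                         # pressure
    vx = np.cos(theta_true) * s + 0.1 * rng.normal(size=t.size)   # velocity, x
    vy = np.sin(theta_true) * s + 0.1 * rng.normal(size=t.size)   # velocity, y

    Ix, Iy = np.mean(p * vx), np.mean(p * vy)     # time-averaged intensity
    print(f"estimated DOA: {np.rad2deg(np.arctan2(Iy, Ix)):.1f} deg")
    ```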

  2. A Comprehensive Estimation of the Economic Effects of Meteorological Services Based on the Input-Output Method

    PubMed Central

    Wu, Xianhua; Yang, Lingjuan; Guo, Ji; Lu, Huaguo; Chen, Yunfeng; Sun, Jian

    2014-01-01

    Concentrating on the consuming coefficient, partition coefficient, and Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services, including the associated (indirect, complete) economic effects. Quantitative estimates are then obtained for the meteorological services in Jiangxi province by using the input-output method. It is found that the economic effects are realized largely through preventive strategies developed from both meteorological information and the internal relevance (interdependency) of the industrial economic system. Another finding is that the ratio of input to the complete economic effect of meteorological services is in the range of about 1:108.27 to 1:183.06, remarkably different from a previous estimate based on the Delphi method (1:30 to 1:51). In particular, the economic effects of meteorological services are higher for nontraditional users (manufacturing, wholesale and retail trade, services, tourism, and culture and art) and lower for traditional users (agriculture, forestry, livestock, fishery, and construction). PMID:24578666

  3. A comprehensive estimation of the economic effects of meteorological services based on the input-output method.

    PubMed

    Wu, Xianhua; Wei, Guo; Yang, Lingjuan; Guo, Ji; Lu, Huaguo; Chen, Yunfeng; Sun, Jian

    2014-01-01

    Concentrating on the consuming coefficient, partition coefficient, and Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services, including the associated (indirect, complete) economic effects. Quantitative estimates are then obtained for the meteorological services in Jiangxi province by using the input-output method. It is found that the economic effects are realized largely through preventive strategies developed from both meteorological information and the internal relevance (interdependency) of the industrial economic system. Another finding is that the ratio of input to the complete economic effect of meteorological services is in the range of about 1:108.27 to 1:183.06, remarkably different from a previous estimate based on the Delphi method (1:30 to 1:51). In particular, the economic effects of meteorological services are higher for nontraditional users (manufacturing, wholesale and retail trade, services, tourism, and culture and art) and lower for traditional users (agriculture, forestry, livestock, fishery, and construction).
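
    The Leontief machinery both abstracts rely on fits in a few lines. This toy three-sector sketch uses a made-up consuming-coefficient matrix and direct-effect vector, so only the mechanics, not the numbers, reflect the study: the complete (direct plus indirect) effect is the Leontief inverse applied to the direct effect.

    ```python
    import numpy as np

    A = np.array([[0.2, 0.1, 0.0],      # consuming (technical) coefficients:
                  [0.1, 0.3, 0.2],      # inputs per unit of output
                  [0.0, 0.1, 0.1]])
    direct = np.array([10.0, 5.0, 2.0]) # hypothetical direct effect by sector

    L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse (I - A)^-1
    complete = L @ direct               # complete = direct + indirect effects
    print(complete, complete.sum() / direct.sum())
    ```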

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chhiber, R; Usmanov, AV; Matthaeus, WH

    Simple estimates of the number of Coulomb collisions experienced by the interplanetary plasma up to the point of observation, i.e., the “collisional age”, can be usefully employed in the study of non-thermal features of the solar wind. Usually these estimates are based on local plasma properties at the point of observation. Here we improve the method of estimation of the collisional age by employing solutions obtained from global three-dimensional magnetohydrodynamics simulations. This enables evaluation of the complete analytical expression for the collisional age without using approximations. The improved estimate of the collisional timescale is compared with turbulence and expansion timescales to assess the relative importance of collisions. The collisional age computed using the approximate formula employed in previous work is compared with the improved simulation-based calculations to examine the validity of the simplified formula. We also develop an analytical expression for the evaluation of the collisional age and find good agreement between the numerical and analytical results. Finally, we briefly discuss the implications for an improved estimation of collisionality along spacecraft trajectories, including Solar Probe Plus.
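
    The contrast between the local and the path-integrated estimates can be sketched as follows, assuming the NRL-formulary form of the proton self-collision frequency and simple illustrative radial profiles rather than the simulation profiles used in the paper.

    ```python
    import numpy as np

    def nu_pp(n_cm3, T_eV, lnL=20.0):
        """Proton self-collision frequency [1/s] (NRL formulary form)."""
        return 4.80e-8 * n_cm3 * lnL * T_eV**-1.5

    AU = 1.5e11                                   # m
    r = np.linspace(0.1 * AU, 1.0 * AU, 2000)     # heliocentric distances
    n = 5.0 * (AU / r) ** 2                       # density ~ r^-2 [cm^-3]
    T = 10.0 * (AU / r) ** 0.7                    # temperature profile [eV]
    V = 4.0e5                                     # constant wind speed [m/s]

    local_age = nu_pp(n[-1], T[-1]) * r[-1] / V   # local estimate at 1 AU
    f = nu_pp(n, T) / V                           # path-integrated estimate:
    integrated_age = np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(r))
    print(f"local: {local_age:.3f}, integrated: {integrated_age:.3f}")
    ```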

  5. Multiunit Activity-Based Real-Time Limb-State Estimation from Dorsal Root Ganglion Recordings

    PubMed Central

    Han, Sungmin; Chu, Jun-Uk; Kim, Hyungmin; Park, Jong Woong; Youn, Inchan

    2017-01-01

    Proprioceptive afferent activities could be useful for providing sensory feedback signals for closed-loop control during functional electrical stimulation (FES). However, most previous studies have used the single-unit activity of individual neurons to extract sensory information from proprioceptive afferents. This study proposes a new decoding method to estimate ankle and knee joint angles using multiunit activity data. Proprioceptive afferent signals were recorded from a dorsal root ganglion with a single-shank microelectrode during passive movements of the ankle and knee joints, and joint angles were measured as kinematic data. The mean absolute value (MAV) was extracted from the multiunit activity data, and a dynamically driven recurrent neural network (DDRNN) was used to estimate ankle and knee joint angles. The multiunit activity-based MAV feature was sufficiently informative to estimate limb states, and the DDRNN showed better decoding performance than conventional linear estimators. In addition, the processing time delay satisfied real-time constraints. These results demonstrate that the proposed method could be applied to provide real-time sensory feedback signals in closed-loop FES systems. PMID:28276474
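
    The MAV feature step is simple enough to sketch; the window and hop sizes below are assumptions, and the DDRNN decoder itself is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.normal(size=24000)        # stand-in multiunit activity stream
    win, hop = 240, 120                    # samples per window / hop (assumed)

    mav = np.array([
        np.mean(np.abs(signal[i:i + win])) # mean absolute value per window
        for i in range(0, len(signal) - win + 1, hop)
    ])
    print(mav.shape)
    ```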

  6. Fast focus estimation using frequency analysis in digital holography.

    PubMed

    Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung

    2014-11-17

    A novel fast frequency-based method to estimate the focus distance of a digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed rays. The smoothed rays are determined by the directions of energy flow, which are computed from the local spatial-frequency spectrum based on the windowed Fourier transform. Our method thus uses only the intrinsic frequency information of the optical field on the hologram, and therefore requires neither sequential numerical reconstructions nor the focus-detection techniques of conventional photography, both of which are essential parts of previous methods. To show the effectiveness of our method, numerical results and analysis are presented as well.

  7. Geodesic regression for image time-series.

    PubMed

    Niethammer, Marc; Huang, Yang; Vialard, François-Xavier

    2011-01-01

    Registration of image-time series has so far been accomplished (i) by concatenating registrations between image pairs, (ii) by solving a joint estimation problem resulting in piecewise geodesic paths between image pairs, (iii) by kernel based local averaging or (iv) by augmenting the joint estimation with additional temporal irregularity penalties. Here, we propose a generative model extending least squares linear regression to the space of images by using a second-order dynamic formulation for image registration. Unlike previous approaches, the formulation allows for a compact representation of an approximation to the full spatio-temporal trajectory through its initial values. The method also opens up possibilities to design image-based approximation algorithms. The resulting optimization problem is solved using an adjoint method.

  8. Observations and implications of large-amplitude longitudinal oscillations in a solar filament

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luna, M.; Knizhnik, K.; Muglach, K.

    On 2010 August 20, an energetic disturbance triggered large-amplitude longitudinal oscillations in a nearby filament. The triggering mechanism appears to be episodic jets connecting the energetic event with the filament threads. In the present work, we analyze this periodic motion in a large fraction of the filament to characterize the underlying physics of the oscillation as well as the filament properties. The results support our previous theoretical conclusions that the restoring force of large-amplitude longitudinal oscillations is solar gravity, and the damping mechanism is the ongoing accumulation of mass onto the oscillating threads. Based on our previous work, we used the fitted parameters to determine the magnitude and radius of curvature of the dipped magnetic field along the filament, as well as the mass accretion rate onto the filament threads. These derived properties are nearly uniform along the filament, indicating a remarkable degree of cohesiveness throughout the filament channel. Moreover, the estimated mass accretion rate implies that the footpoint heating responsible for the thread formation, according to the thermal nonequilibrium model, agrees with previous coronal heating estimates. We estimate the magnitude of the energy released in the nearby event by studying the dynamic response of the filament threads, and discuss the implications of our study for filament structure and heating.

  9. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins

    PubMed Central

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-01-01

    Predicting a protein pocket's ability to bind drug-like molecules with high affinity, i.e. its druggability, is of major interest in the target identification phase of drug discovery. Pocket druggability investigations therefore represent a key step in compound clinical progression projects. Currently, computational druggability prediction models are tied to a single pocket estimation method, despite pocket estimation uncertainties. In this paper, we propose ‘PockDrug-Server’ to predict pocket druggability, efficient on both (i) pockets estimated with guidance from ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) pockets estimated solely from protein structure information (based on amino atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results across different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, and is thus effective on apo pockets, which are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be run on one protein or a set of apo/holo proteins using the different pocket estimation methods proposed by our web server, or on any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. PMID:25956651

  10. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins.

    PubMed

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-07-01

    Predicting a protein pocket's ability to bind drug-like molecules with high affinity, i.e. its druggability, is of major interest in the target identification phase of drug discovery. Pocket druggability investigations therefore represent a key step in compound clinical progression projects. Currently, computational druggability prediction models are tied to a single pocket estimation method, despite pocket estimation uncertainties. In this paper, we propose 'PockDrug-Server' to predict pocket druggability, efficient on both (i) pockets estimated with guidance from ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) pockets estimated solely from protein structure information (based on amino atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results across different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, and is thus effective on apo pockets, which are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be run on one protein or a set of apo/holo proteins using the different pocket estimation methods proposed by our web server, or on any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Modelling HIV/AIDS epidemics in sub-Saharan Africa using seroprevalence data from antenatal clinics.

    PubMed Central

    Salomon, J. A.; Murray, C. J.

    2001-01-01

    OBJECTIVE: To improve the methodological basis for modelling the HIV/AIDS epidemics in adults in sub-Saharan Africa, with examples from Botswana, Central African Republic, Ethiopia, and Zimbabwe. Understanding the magnitude and trajectory of the HIV/AIDS epidemic is essential for planning and evaluating control strategies. METHODS: Previous mathematical models were developed to estimate epidemic trends based on sentinel surveillance data from pregnant women. In this project, we have extended these models in order to take full advantage of the available data. We developed a maximum likelihood approach for the estimation of model parameters and used numerical simulation methods to compute uncertainty intervals around the estimates. FINDINGS: In the four countries analysed, there were an estimated half a million new adult HIV infections in 1999 (range: 260 to 960 thousand), 4.7 million prevalent infections (range: 3.0 to 6.6 million), and 370 thousand adult deaths from AIDS (range: 266 to 492 thousand). CONCLUSION: While this project addresses some of the limitations of previous modelling efforts, an important research agenda remains, including the need to clarify the relationship between sentinel data from pregnant women and the epidemiology of HIV and AIDS in the general population. PMID:11477962

  12. Number line estimation and mental addition: examining the potential roles of language and education.

    PubMed

    Laski, Elida V; Yu, Qingyi

    2014-01-01

    This study investigated the relative importance of language and education to the development of numerical knowledge. Consistent with previous research suggesting that counting systems that transparently reflect the base-10 system facilitate an understanding of numerical concepts, Chinese and Chinese American kindergartners' and second graders' number line estimation (0-100 and 0-1000) was 1 to 2 years more advanced than that of American children tested in previous studies. However, Chinese children performed better than their Chinese American peers, who were fluent in Chinese but had been educated in America, at kindergarten on 0-100 number lines, at second grade on 0-1000 number lines, and at both time points on complex addition problems. Overall, the pattern of findings suggests that educational approach may have a greater influence on numerical development than the linguistic structure of the counting system. The findings also demonstrate that, despite generating accurate estimates of numerical magnitude on 0-100 number lines earlier, it still takes Chinese children approximately 2 years to demonstrate accurate estimates on 0-1000 number lines, which raises questions about how to promote the mapping of knowledge across numerical scales. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. [Chronic obstructive pulmonary disease prevalence estimated using a standard algorithm based on electronic health data in various areas of Italy].

    PubMed

    Faustini, Annunziata; Cascini, Silvia; Arcà, Massimo; Balzi, Daniela; Barchielli, Alessandro; Canova, Cristina; Galassi, Claudia; Migliore, Enrica; Minerba, Sante; Protti, Maria Angela; Romanelli, Anna; Tessari, Roberta; Vigotti, Maria Angela; Simonato, Lorenzo

    2008-01-01

    To estimate the prevalence of chronic obstructive pulmonary disease (COPD) by integrating various administrative health information systems. Prevalent COPD cases were defined as those reported in the hospital discharge registry (HDR) and the cause of mortality registry (CMR) with codes 490*, 491*, 492*, 494*, and 496* of the International Classification of Diseases, 9th revision. Annual prevalence was estimated among residents aged 35 years and older in six Italian areas of different sizes, in the period 2002-2004. We included cases observed in the previous four years who were alive at the beginning of each year. In 2003, age-standardized prevalence rates varied from 1.6% in Venice to 5% in Taranto. Prevalence was higher in males and increased with age. The highest rates were observed in central (Rome) and southern (Taranto) cities, especially in the 35-64 age group. The HDR contributed 91% of cases. The health-tax exemption registry would increase the prevalence estimate by 0.2% if used as a third data source. With respect to the National Health Status survey, COPD prevalence is underestimated by 1%-3%; this can partly be due to the selection of severe and exacerbated COPD cases by the algorithm used. However, the age, gender, and geographical characteristics of prevalent cases were comparable to national estimates. Including cases observed in previous years (longitudinal estimates) increased the yearly point estimate of prevalence by a factor of two to three in each area.
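
    The case-definition algorithm can be sketched as a record-linkage filter; the column names (`person_id`, `icd9`, `year`, `death_date`) are assumptions.

    ```python
    import pandas as pd

    COPD_RE = r"(490|491|492|494|496)"     # ICD-9 codes, any further digits

    def prevalent_cases(records: pd.DataFrame, year: int) -> set:
        """IDs of persons with a COPD-coded HDR/CMR record in the previous
        four years who are alive on 1 January of the index year."""
        mask = (
            records["icd9"].astype(str).str.match(COPD_RE)
            & records["year"].between(year - 4, year - 1)
        )
        cases = records.loc[mask, ["person_id", "death_date"]]
        alive = cases["death_date"].isna() | (
            cases["death_date"] >= pd.Timestamp(year=year, month=1, day=1)
        )
        return set(cases.loc[alive, "person_id"])
    ```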

  14. Creating Protected Areas on Public Lands: Is There Room for Additional Conservation?

    PubMed

    Arriagada, Rodrigo A; Echeverria, Cristian M; Moya, Danisa E

    2016-01-01

    Most evaluations of the effectiveness of PAs have relied on indirect estimates based on comparisons between protected and unprotected areas. Such methods can be biased when protection is not randomly assigned. We add to the growing literature on the impact of PAs by answering the following research questions: What is the impact of Chilean PAs on deforestation that occurred between 1986 and 2011? How do estimates of the impact of PAs vary when only public land is used for control units? We show that the characteristics of the areas in which protected and unprotected lands are located differ significantly. To estimate the effects of PAs satisfactorily, we use matching methods to define adequate control groups, but not as in previous research: we construct control groups separately from non-protected private areas and from non-protected public lands. We find that PAs avoid deforestation when unprotected private lands are used as controls; however, results show no impact when the control group is based only on unprotected public land. Different land-management regimes and higher levels of enforcement inside public lands may reduce the opportunity to add conservation benefits when national PA systems are based on the protection of previously unprotected public lands. Given that not all PAs are established to avoid deforestation, the results also admit the potential for future studies to include other outcomes, such as forest degradation (not just deforestation), biodiversity, wildlife, and primary forests (not forests in general).

  15. Creating Protected Areas on Public Lands: Is There Room for Additional Conservation?

    PubMed Central

    Arriagada, Rodrigo A.; Echeverria, Cristian M.; Moya, Danisa E.

    2016-01-01

    Most evaluations of the effectiveness of PAs have relied on indirect estimates based on comparisons between protected and unprotected areas. Such methods can be biased when protection is not randomly assigned. We add to the growing literature on the impact of PAs by answering the following research questions: What is the impact of Chilean PAs on deforestation that occurred between 1986 and 2011? How do estimates of the impact of PAs vary when only public land is used for control units? We show that the characteristics of the areas in which protected and unprotected lands are located differ significantly. To estimate the effects of PAs satisfactorily, we use matching methods to define adequate control groups, but not as in previous research: we construct control groups separately from non-protected private areas and from non-protected public lands. We find that PAs avoid deforestation when unprotected private lands are used as controls; however, results show no impact when the control group is based only on unprotected public land. Different land-management regimes and higher levels of enforcement inside public lands may reduce the opportunity to add conservation benefits when national PA systems are based on the protection of previously unprotected public lands. Given that not all PAs are established to avoid deforestation, the results also admit the potential for future studies to include other outcomes, such as forest degradation (not just deforestation), biodiversity, wildlife, and primary forests (not forests in general). PMID:26848856
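
    A covariate-matching sketch in the spirit of this design, on synthetic data: each protected parcel is matched to its nearest unprotected parcel in covariate space, and the effect is the mean outcome difference. The study's covariates, matching estimator, and data differ.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    X_treat = rng.normal(0.5, 1.0, size=(200, 3))   # covariates, protected
    X_ctrl = rng.normal(0.0, 1.0, size=(1000, 3))   # covariates, unprotected
    y_treat = rng.binomial(1, 0.05, 200)            # deforested? (protected)
    y_ctrl = rng.binomial(1, 0.15, 1000)            # deforested? (unprotected)

    nn = NearestNeighbors(n_neighbors=1).fit(X_ctrl)
    _, idx = nn.kneighbors(X_treat)                 # nearest control match
    att = y_treat.mean() - y_ctrl[idx.ravel()].mean()
    print(f"matched difference in deforestation rate: {att:.3f}")
    ```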

  16. Evidence-based pathology in its second decade: toward probabilistic cognitive computing.

    PubMed

    Marchevsky, Alberto M; Walts, Ann E; Wick, Mark R

    2017-03-01

    Evidence-based pathology advocates using a combination of best available data ("evidence") from the literature and personal experience for the diagnosis, estimation of prognosis, and assessment of other variables that impact individual patient care. Evidence-based pathology relies on systematic reviews of the literature, evaluation of the quality of evidence as categorized by evidence levels and statistical tools such as meta-analyses, estimates of probabilities and odds, and others. However, it is well known that previously "statistically significant" information usually does not accurately forecast the future for individual patients. There is great interest in "cognitive computing" in which "data mining" is combined with "predictive analytics" designed to forecast future events and estimate the strength of those predictions. This study demonstrates the use of IBM Watson Analytics software to evaluate and predict the prognosis of 101 patients with typical and atypical pulmonary carcinoid tumors in which Ki-67 indices have been determined. The results obtained with this system are compared with those previously reported using "routine" statistical software and the help of a professional statistician. IBM Watson Analytics interactively provides statistical results that are comparable to those obtained with routine statistical tools but much more rapidly, with considerably less effort and with interactive graphics that are intuitively easy to apply. It also enables analysis of natural language variables and yields detailed survival predictions for patient subgroups selected by the user. Potential applications of this tool and basic concepts of cognitive computing are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Method paper--distance and travel time to casualty clinics in Norway based on crowdsourced postcode coordinates: a comparison with other methods.

    PubMed

    Raknes, Guttorm; Hunskaar, Steinar

    2014-01-01

    We describe a method that uses crowdsourced postcode coordinates and Google Maps to estimate the average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance, and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances, our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode-based distances in a manner consistent with previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.
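
    The distance half of the method reduces to averaging great-circle distances from postcode centroids to the clinic, weighted by population (travel times additionally need a routing service). The coordinates and populations below are invented.

    ```python
    import numpy as np

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (lat, lon) points in km."""
        lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
        a = (np.sin((lat2 - lat1) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * np.arcsin(np.sqrt(a))

    clinic = (60.39, 5.32)                                 # hypothetical clinic
    postcodes = [(60.42, 5.30, 1200), (60.35, 5.40, 800),  # (lat, lon, pop)
                 (60.50, 5.25, 450)]

    d = np.array([haversine_km(lat, lon, *clinic) for lat, lon, _ in postcodes])
    w = np.array([pop for _, _, pop in postcodes], dtype=float)
    print(f"population-weighted mean distance: {np.average(d, weights=w):.1f} km")
    ```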

  18. High-Frequency Switching Transients and Power Loss Estimation in Electric Drive Systems that Utilize Wide-Bandgap Semiconductors

    NASA Astrophysics Data System (ADS)

    Fulani, Olatunji T.

    Development of electric drive systems for transportation and industrial applications is rapidly seeing the use of wide-bandgap (WBG) based power semiconductor devices. These devices, such as SiC MOSFETs, enable high switching frequencies and are becoming the preferred choice in inverters because of their lower switching losses and higher allowable operating temperatures. Due to the much shorter turn-on and turn-off times and correspondingly larger output voltage edge rates, traditional models and methods previously used to estimate inverter and motor power losses, based upon a triangular power loss waveform, are no longer justifiable from a physical perspective. In this thesis, more appropriate models and a power loss calculation approach are described with the goal of more accurately estimating the power losses in WBG-based electric drive systems. Sine-triangle modulation with third harmonic injection is used to control the switching of the inverter. The motor and inverter models are implemented using Simulink and computer studies are shown illustrating the application of the new approach.

  19. Estimating and modeling the cure fraction in population-based cancer survival analysis.

    PubMed

    Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W

    2007-07-01

    In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are 2 main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the 2 types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
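
    The two model forms can be written down in their simplest parametric versions (Weibull kernel, no covariates); incorporating background mortality, as the paper does, would multiply each survival function by the expected survival of the general population.

    ```python
    import numpy as np

    def mixture_survival(t, pi, shape, scale):
        """S(t) = pi + (1 - pi) * S_u(t), with Weibull survival S_u."""
        return pi + (1 - pi) * np.exp(-(t / scale) ** shape)

    def nonmixture_survival(t, pi, shape, scale):
        """S(t) = pi ** F(t), with Weibull distribution function F."""
        F = 1 - np.exp(-(t / scale) ** shape)
        return pi ** F

    t = np.linspace(0, 20, 5)
    print(mixture_survival(t, 0.4, 1.2, 3.0))     # both tend to pi = 0.4
    print(nonmixture_survival(t, 0.4, 1.2, 3.0))
    ```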

  20. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

    Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal-distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference for the threshold estimates is based on approximate analytical standard errors and on bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence-interval precision and sample-size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided to illustrate the procedure.
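
    In the normal-distribution setting the procedure amounts to minimising an expected-cost function over the threshold. The prevalence, costs, and distribution parameters below are illustrative assumptions, and the sampling-uncertainty term of the paper's cost function is omitted.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    prev = 0.3                       # disease prevalence
    c_fn, c_fp = 5.0, 1.0            # costs: false negative / false positive
    nd = norm(0.0, 1.0)              # marker distribution, non-diseased
    dis = norm(1.5, 1.2)             # marker distribution, diseased

    def expected_cost(thr):
        fn = dis.cdf(thr)            # diseased classified below threshold
        fp = nd.sf(thr)              # non-diseased classified above threshold
        return prev * c_fn * fn + (1 - prev) * c_fp * fp

    res = minimize_scalar(expected_cost, bounds=(-3.0, 5.0), method="bounded")
    print(f"optimal threshold: {res.x:.3f}, expected cost: {res.fun:.3f}")
    ```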

  1. Output-feedback control of combined sewer networks through receding horizon control with moving horizon estimation

    NASA Astrophysics Data System (ADS)

    Joseph-Duran, Bernat; Ocampo-Martinez, Carlos; Cembrano, Gabriela

    2015-10-01

    An output-feedback control strategy for pollution mitigation in combined sewer networks is presented. The proposed strategy provides the means to apply model-based predictive control to large-scale sewer networks, in spite of the lack of measurements at most of the network's sewers. In previous works, the authors presented a hybrid linear control-oriented model for sewer networks, together with the formulation of Optimal Control Problems (OCP) and State Estimation Problems (SEP). By iteratively solving these problems, preliminary Receding Horizon Control with Moving Horizon Estimation (RHC/MHE) results, based on flow measurements, were also obtained. In this work, the RHC/MHE algorithm has been extended to take into account both flow and water-level measurements, and the resulting control loop has been extensively simulated to assess the system performance under different measurement-availability scenarios and rain events. All simulations were carried out using a detailed, physically based model of a real case-study network as a virtual reality.

  2. Neither slim nor fat: estimating the mass of the dodo (Raphus cucullatus, Aves, Columbiformes) based on the largest sample of dodo bones to date

    PubMed Central

    van Dierendonk, Roland C.H.; van Egmond, Maria A.N.E.; ten Hagen, Sjang L.; Kreuning, Jippe

    2017-01-01

    The dodo (Raphus cucullatus) might be the most enigmatic bird of all time. It is therefore highly remarkable that no consensus has yet been reached on its body mass; previous scientific estimates of its mass vary by more than 100%. Until now, the vast collection of bones stored at the Natural History Museum in Mauritius had not been studied morphometrically or in relation to body mass. Here, a new estimate of the dodo’s mass is presented based on the largest sample of dodo femora ever measured (n = 174). To do this, we used the regression method and chose our variables based on biological, mathematical, and physical arguments. The results indicate that the mean mass of the dodo was circa 12 kg, approximately five times as heavy as the largest living Columbidae (pigeons and doves), the clade to which the dodo belongs. PMID:29230358
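
    The regression method named above reduces to a log-log allometric fit on extant birds followed by prediction at the dodo's femur size; every number below is fabricated for illustration.

    ```python
    import numpy as np

    femur_mm = np.array([40.0, 55.0, 70.0, 95.0, 120.0])  # extant sample
    mass_kg = np.array([0.3, 0.9, 1.8, 4.5, 9.0])         # (made-up values)

    # fit log(mass) = intercept + slope * log(femur)
    slope, intercept = np.polyfit(np.log(femur_mm), np.log(mass_kg), 1)

    dodo_femur = 140.0                                    # hypothetical femur
    mass = np.exp(intercept + slope * np.log(dodo_femur))
    print(f"predicted mass: {mass:.1f} kg")
    ```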

  3. Real-time estimation of ionospheric delay using GPS measurements

    NASA Astrophysics Data System (ADS)

    Lin, Lao-Sheng

    1997-12-01

    When radio waves such as the GPS signals propagate through the ionosphere, they experience an extra time delay. The ionospheric delay can be eliminated (to the first order) through a linear combination of L1 and L2 observations from dual-frequency GPS receivers. Taking advantage of this dispersive principle, one or more dual-frequency GPS receivers can be used to determine a model of the ionospheric delay across a region of interest and, if implemented in real-time, can support single-frequency GPS positioning and navigation applications. The research objectives of this thesis were: (1) to develop algorithms to obtain accurate absolute Total Electron Content (TEC) estimates from dual-frequency GPS observables, and (2) to develop an algorithm to improve the accuracy of real-time ionosphere modelling. In order to fulfil these objectives, four algorithms have been proposed in this thesis. A 'multi-day multipath template technique' is proposed to mitigate the pseudo-range multipath effects at static GPS reference stations. This technique is based on the assumption that the multipath disturbance at a static station will be constant if the physical environment remains unchanged from day to day. The multipath template, either single-day or multi-day, can be generated from the previous days' GPS data. A 'real-time failure detection and repair algorithm' is proposed to detect and repair the GPS carrier phase 'failures', such as the occurrence of cycle slips. The proposed algorithm uses two procedures: (1) application of a statistical test on the state difference estimated from robust and conventional Kalman filters in order to detect and identify the carrier phase failure, and (2) application of a Kalman filter algorithm to repair the 'identified carrier phase failure'. An 'L1/L2 differential delay estimation algorithm' is proposed to estimate GPS satellite transmitter and receiver L1/L2 differential delays. This algorithm, based on the single-site modelling technique, is able to estimate the sum of the satellite and receiver L1/L2 differential delay for each tracked GPS satellite. A 'UNSW grid-based algorithm' is proposed to improve the accuracy of real-time ionosphere modelling. The proposed algorithm is similar to the conventional grid-based algorithm. However, two modifications were made to the algorithm: (1) an 'exponential function' is adopted as the weighting function, and (2) the 'grid-based ionosphere model' estimated from the previous day is used to predict the ionospheric delay ratios between the grid point and reference points. (Abstract shortened by UMI.)
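
    The first-order dispersive relation on which the thesis rests is compact: group delay is 40.3·TEC/f² metres, so differencing dual-frequency pseudoranges yields TEC. The values below are made up, and the L1/L2 differential hardware delays that the thesis estimates separately are ignored.

    ```python
    f1, f2 = 1575.42e6, 1227.60e6          # GPS L1/L2 frequencies [Hz]
    P1, P2 = 20_000_000.0, 20_000_004.1    # pseudoranges [m] (invented)

    # P2 - P1 = 40.3 * TEC * (f1^2 - f2^2) / (f1^2 * f2^2), solved for TEC
    tec = (P2 - P1) * f1**2 * f2**2 / (40.3 * (f1**2 - f2**2))
    print(f"slant TEC ~ {tec / 1e16:.1f} TECU")   # 1 TECU = 1e16 el/m^2
    ```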

  4. Population growth rates of reef sharks with and without fishing on the great barrier reef: robust estimation with multiple models.

    PubMed

    Hisano, Mizue; Connolly, Sean R; Robbins, William D

    2011-01-01

    Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing.

  5. Population Growth Rates of Reef Sharks with and without Fishing on the Great Barrier Reef: Robust Estimation with Multiple Models

    PubMed Central

    Hisano, Mizue; Connolly, Sean R.; Robbins, William D.

    2011-01-01

    Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing. PMID:21966402
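
    The bootstrap step can be sketched generically: resample the data underlying a demographic rate, recompute the derived population growth rate, and read the uncertainty off the resampled distribution. Everything below is synthetic and schematic, not the paper's demographic model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # synthetic per-individual annual survival outcomes (1 = survived)
    outcomes = rng.binomial(1, 0.82, size=200)

    def growth_rate(survival, recruitment=0.25):
        # toy scalar model: lambda = survival + recruitment
        return survival + recruitment

    boot = np.array([
        growth_rate(rng.choice(outcomes, outcomes.size, replace=True).mean())
        for _ in range(2000)
    ])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"lambda = {growth_rate(outcomes.mean()):.3f}, "
          f"95% CI [{lo:.3f}, {hi:.3f}]")
    ```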

  6. Influence of updating global emission inventory of black carbon on evaluation of the climate and health impact

    NASA Astrophysics Data System (ADS)

    Wang, Rong; Tao, Shu; Balkanski, Yves; Ciais, Philippe

    2013-04-01

    Black carbon (BC) is an air pollutant of particular concern in terms of air quality and climate change. Black carbon emissions are often estimated from fuel data and emission factors. However, large variations in the emission factors reported in the literature have led to high uncertainty in previous inventories. Here, we develop a new global 0.1°×0.1° BC emission inventory for 2007, with a full uncertainty analysis, based on updated source and emission-factor databases. Two versions of the LMDz-OR-INCA model, named INCA and INCA-zA, are run to evaluate the new emission inventory. INCA is built on a regular grid with a resolution of 1.27° in latitude and 2.50° in longitude, while INCA-zA is specially zoomed to 0.51°×0.66° (latitude×longitude) over Asia. By checking against field observations, we compare our inventory with ACCMIP, which is used by the IPCC in the 5th assessment report, and also evaluate the influence of model resolution. With the newly calculated BC air concentrations and the nested model, we estimate the direct radiative forcing of BC and the premature deaths and mortality rates induced by BC exposure, with an emphasis on Asia. The global BC direct radiative forcing at the top of the atmosphere is estimated to be 0.41 W/m2 (0.2-0.8 as the inter-quartile range), which is 17% higher than that derived from the inventory adopted by IPCC-AR5 (0.34 W/m2). The estimated premature deaths induced by inhalation exposure to anthropogenic BC (0.36 million in 2007) and the percentage of the population at high risk are higher than previously estimated. Ninety percent of the global total anthropogenic premature deaths occur in Asia, with 0.18 and 0.08 million deaths in China and India, respectively.

  7. Genes with minimal phylogenetic information are problematic for coalescent analyses when gene tree estimation is biased.

    PubMed

    Xi, Zhenxiang; Liu, Liang; Davis, Charles C

    2015-11-01

    The development and application of coalescent methods are undergoing rapid changes. One little explored area that bears on the application of gene-tree-based coalescent methods to species tree estimation is gene informativeness. Here, we investigate the accuracy of these coalescent methods when genes have minimal phylogenetic information, including the implementation of the multilocus bootstrap approach. Using simulated DNA sequences, we demonstrate that genes with minimal phylogenetic information can produce unreliable gene trees (i.e., high error in gene tree estimation), which may in turn reduce the accuracy of species tree estimation using gene-tree-based coalescent methods. We demonstrate that this problem can be alleviated by sampling more genes, as is commonly done in large-scale phylogenomic analyses. This applies even when these genes are minimally informative. If gene tree estimation is biased, however, gene-tree-based coalescent analyses will produce inconsistent results, which cannot be remedied by increasing the number of genes. In this case, it is not the gene-tree-based coalescent methods that are flawed, but rather the input data (i.e., estimated gene trees). Along these lines, the commonly used program PhyML has a tendency to infer one particular bifurcating topology even though it is best represented as a polytomy. We additionally corroborate these findings by analyzing the 183-locus mammal data set assembled by McCormack et al. (2012) using ultra-conserved elements (UCEs) and flanking DNA. Lastly, we demonstrate that when employing the multilocus bootstrap approach on this 183-locus data set, there is no strong conflict between species trees estimated from concatenation and gene-tree-based coalescent analyses, as has been previously suggested by Gatesy and Springer (2014). Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Estimating SPT-N Value Based on Soil Resistivity using Hybrid ANN-PSO Algorithm

    NASA Astrophysics Data System (ADS)

    Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Standard Penetration Resistance (N value) is used in many empirical geotechnical engineering formulas. Meanwhile, soil resistivity is a measure of a soil’s resistance to electrical flow. For a particular site, usually only limited N value data are available, whereas resistivity data can be obtained extensively. Moreover, previous studies showed evidence of a correlation between N value and resistivity. Yet no existing method is able to interpret resistivity data for the estimation of N value. Thus, the aim is to develop a method for estimating N value from resistivity data. This study proposes a hybrid Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) method to estimate N value using resistivity data. Five different ANN-PSO models based on five boreholes were developed and analyzed. The performance metrics used were the coefficient of determination (R2) and the mean absolute error (MAE). Analysis of the results found that this method can estimate N value (best R2 = 0.85, best MAE = 0.54), provided that the constraint Δl̄_ref is satisfied. The results suggest that the ANN-PSO method can be used to estimate N value with good accuracy.
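
    A minimal numpy sketch of the hybrid scheme: a global-best particle swarm searches the weights of a one-hidden-layer network mapping resistivity to N value. The data are synthetic, and the network size and PSO settings are assumptions, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    res = rng.uniform(10.0, 400.0, 80)                     # resistivity [ohm-m]
    n_val = 2 + 8 * np.log10(res) + rng.normal(0, 1, 80)   # synthetic N values
    X = (res - res.mean()) / res.std()

    H = 6                          # hidden units (assumed)
    DIM = 3 * H + 1                # weights: w1, b1, w2, plus output bias

    def predict(theta, x):
        w1, b1, w2 = theta[:H], theta[H:2*H], theta[2*H:3*H]
        h = np.tanh(np.outer(x, w1) + b1)
        return h @ w2 + theta[-1]

    def mse(theta):
        return np.mean((predict(theta, X) - n_val) ** 2)

    # vanilla global-best PSO over the network weights
    P = 40
    pos = rng.normal(0, 1, (P, DIM))
    vel = np.zeros((P, DIM))
    pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(300):
        r1, r2 = rng.random((P, DIM)), rng.random((P, DIM))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        f = np.array([mse(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    print(f"training MSE: {mse(gbest):.2f}")
    ```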

  9. On the accuracy of palaeopole estimations from magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Vervelidou, F.; Lesur, V.; Morschhauser, A.; Grott, M.; Thomas, P.

    2017-12-01

    Various techniques have been proposed for palaeopole position estimation based on magnetic field measurements. Such estimates can offer insights into the rotational dynamics and the dynamo history of moons and terrestrial planets carrying a crustal magnetic field. Motivated by discrepancies in the estimated palaeopole positions among various studies regarding the Moon and Mars, we examine the limitations of magnetic field measurements as source of information for palaeopole position studies. It is already known that magnetic field measurements cannot constrain the null space of the magnetization nor its full spectral content. However, the extent to which these limitations affect palaeopole estimates has not been previously investigated in a systematic way. In this study, by means of the vector Spherical Harmonics formalism, we show that inferring palaeopole positions from magnetic field measurements necessarily introduces, explicitly or implicitly, assumptions about both the null space and the full spectral content of the magnetization. Moreover, we demonstrate through synthetic tests that if these assumptions are inaccurate, then the resulting palaeopole position estimates are wrong. Based on this finding, we make suggestions that can allow future palaeopole studies to be conducted in a more constructive way.

  10. A reference estimator based on composite sensor pattern noise for source device identification

    NASA Astrophysics Data System (ADS)

    Li, Ruizhe; Li, Chang-Tsun; Guan, Yu

    2014-02-01

    It has been proved that Sensor Pattern Noise (SPN) can serve as an imaging device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within the framework of this application. Most previous works built reference SPN by averaging the SPNs extracted from 50 images of blue sky. However, this method can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of the imaging cameras and reference SPNs, which means only natural images with scene details are available for reference SPN estimation rather than blue sky images. It is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images sometimes is too few for existing methods to estimate a reliable reference SPN. In fact, existing methods lack consideration of the number of available reference images as they were designed for the datasets with abundant images to estimate the reference SPN. In order to deal with the aforementioned problem, in this work, a novel reference estimator is proposed. Experimental results show that our proposed method achieves better performance than the methods based on the averaged reference SPN, especially when few reference images used.
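
    The averaging baseline that the paper improves upon can be sketched as follows; a Gaussian blur stands in for the wavelet denoiser usually used, and the images and sensor pattern are synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    true_spn = 0.03 * rng.normal(size=(64, 64))       # fixed sensor pattern

    def residual(img):
        # high-pass "noise residual"; real pipelines use a wavelet denoiser
        return img - gaussian_filter(img, sigma=1.5)

    imgs = [np.clip(rng.random((64, 64)) + true_spn, 0, 1) for _ in range(50)]
    ref_spn = np.mean([residual(im) for im in imgs], axis=0)

    corr = np.corrcoef(ref_spn.ravel(), true_spn.ravel())[0, 1]
    print(f"correlation with true SPN: {corr:.2f}")   # sanity check
    ```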

  11. Development and evaluation of a crowdsourcing methodology for knowledge base construction: identifying relationships between clinical problems and medications

    PubMed Central

    Wright, Adam; Laxmisan, Archana; Ottosen, Madelene J; McCoy, Jacob A; Butten, David; Sittig, Dean F

    2012-01-01

    Objective We describe a novel, crowdsourcing method for generating a knowledge base of problem–medication pairs that takes advantage of manually asserted links between medications and problems. Methods Through iterative review, we developed metrics to estimate the appropriateness of manually entered problem–medication links for inclusion in a knowledge base that can be used to infer previously unasserted links between problems and medications. Results Clinicians manually linked 231 223 medications (55.30% of prescribed medications) to problems within the electronic health record, generating 41 203 distinct problem–medication pairs, although not all were accurate. We developed methods to evaluate the accuracy of the pairs, and after limiting the pairs to those meeting an estimated 95% appropriateness threshold, 11 166 pairs remained. The pairs in the knowledge base accounted for 183 127 total links asserted (76.47% of all links). Retrospective application of the knowledge base linked 68 316 medications not previously linked by a clinician to an indicated problem (36.53% of unlinked medications). Expert review of the combined knowledge base, including inferred and manually linked problem–medication pairs, found a sensitivity of 65.8% and a specificity of 97.9%. Conclusion Crowdsourcing is an effective, inexpensive method for generating a knowledge base of problem–medication pairs that is automatically mapped to local terminologies, up-to-date, and reflective of local prescribing practices and trends. PMID:22582202
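
    The kernel of the approach, reduced to a sketch: aggregate the manual links into problem-medication pair counts, score each pair, and keep the pairs above an appropriateness threshold. The column names and the simple frequency-based score are assumptions; the paper's appropriateness metrics are richer.

    ```python
    import pandas as pd

    def build_knowledge_base(links: pd.DataFrame, threshold=0.95) -> pd.DataFrame:
        """links: one row per manually asserted problem-medication link,
        with columns 'problem' and 'medication' (assumed names)."""
        pair_counts = links.groupby(["medication", "problem"]).size()
        med_counts = links.groupby("medication").size()
        score = pair_counts / med_counts   # share of a drug's links per problem
        return score[score >= threshold].reset_index(name="score")
    ```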

  12. Assessing the prediction accuracy of cure in the Cox proportional hazards cure model: an application to breast cancer data.

    PubMed

    Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma

    2014-01-01

    A cure rate model is a survival model incorporating the cure rate with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. This contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for the breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, although ordinary AUC could not be estimated. Additionally, we introduced the bias-correction method of imputation-based AUCs and found that the bias-corrected estimate successfully compensated the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using breast cancer data. Copyright © 2014 John Wiley & Sons, Ltd.

  13. HIV Diversity as a Biomarker for HIV Incidence Estimation: Including a High-Resolution Melting Diversity Assay in a Multiassay Algorithm

    PubMed Central

    Cousins, Matthew M.; Konikoff, Jacob; Laeyendecker, Oliver; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Koblin, Beryl A.; Wheeler, Darrell; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Brookmeyer, Ron

    2014-01-01

    Multiassay algorithms (MAAs) can be used to estimate cross-sectional HIV incidence. We previously identified a robust MAA that includes the BED capture enzyme immunoassay (BED-CEIA), the Bio-Rad Avidity assay, viral load, and CD4 cell count. In this report, we evaluated MAAs that include a high-resolution melting (HRM) diversity assay that does not require sequencing. HRM scores were determined for eight regions of the HIV genome (2 in gag, 1 in pol, and 5 in env). The MAAs that were evaluated included the BED-CEIA, the Bio-Rad Avidity assay, viral load, and the HRM diversity assay, using HRM scores from different regions and a range of region-specific HRM diversity assay cutoffs. The performance characteristics based on the proportion of samples that were classified as MAA positive by duration of infection were determined for each MAA, including the mean window period. The cross-sectional incidence estimates obtained using optimized MAAs were compared to longitudinal incidence estimates for three cohorts in the United States. The performance of the HRM-based MAA was nearly identical to that of the MAA that included CD4 cell count. The HRM-based MAA had a mean window period of 154 days and provided cross-sectional incidence estimates that were similar to those based on cohort follow-up. HIV diversity is a useful biomarker for estimating HIV incidence. MAAs that include the HRM diversity assay can provide accurate HIV incidence estimates using stored blood plasma or serum samples without a requirement for CD4 cell count data. PMID:24153134
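
    The cross-sectional estimator behind any MAA is compact: count the samples the algorithm classifies as recently infected, divide by the number of HIV-negative individuals, and scale by the mean window period. A schematic version with invented survey numbers:

    ```python
    n_hiv_neg = 8000          # HIV-negative individuals in the survey (invented)
    n_maa_pos = 24            # samples classified as recent by the MAA (invented)
    window_days = 154         # mean window period reported above

    incidence = n_maa_pos / (n_hiv_neg * (window_days / 365.25))
    print(f"estimated incidence: {100 * incidence:.2f} per 100 person-years")
    ```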

  14. Estimated flow-duration curves for selected ungaged sites in Kansas

    USGS Publications Warehouse

    Studley, S.E.

    2001-01-01

    Flow-duration curves for 1968-98 were estimated for 32 ungaged sites in the Missouri, Smoky Hill-Saline, Solomon, Marais des Cygnes, Walnut, Verdigris, and Neosho River Basins in Kansas. Also included from a previous report are estimated flow-duration curves for 16 ungaged sites in the Cimarron and lower Arkansas River Basins in Kansas. The method of estimation used six unique factors of flow duration: (1) mean streamflow and percentage duration of mean streamflow, (2) ratio of 1-percent-duration streamflow to mean streamflow, (3) ratio of 0.1-percent-duration streamflow to 1-percent-duration streamflow, (4) ratio of 50-percent-duration streamflow to mean streamflow, (5) percentage duration of appreciable streamflow (0.10 cubic foot per second), and (6) average slope of the flow-duration curve. These factors were previously developed from a regionalized study of flow-duration curves using streamflow data for 1921-76 from streamflow-gaging stations with drainage areas of 100 to 3,000 square miles. The method was tested on a currently (2001) measured, continuous-record streamflow-gaging station on Salt Creek near Lyndon, Kansas, with a drainage area of 111 square miles and was found to adequately estimate the computed flow-duration curve for the station. The method also was tested on a currently (2001) measured, continuous-record, streamflow-gaging station on Soldier Creek near Circleville, Kansas, with a drainage area of 49.3 square miles. The results of the test on Soldier Creek near Circleville indicated that the method could adequately estimate flow-duration curves for sites with drainage areas of less than 100 square miles. The low-flow parts of the estimated flow-duration curves were verified or revised using 137 base-flow discharge measurements made during 1999-2000 at the 32 ungaged sites that were correlated with base-flow measurements and flow-duration analyses performed at nearby, long-term, continuous-record, streamflow-gaging stations (index stations). The method did not adequately estimate the flow-duration curves for two sites in the western one-third of the State because of substantial changes in farming practices (terracing and intensive ground-water withdrawal) that were not accounted for in the two previous studies (Furness, 1959; Jordan, 1983). For these two sites, there was enough historic, continuous-streamflow record available to perform record-extension techniques correlated to their respective index stations for the development of the estimated flow-duration curves. The estimated flow-duration curves at the ungaged sites can be used for projecting future flow frequencies for assessment of total maximum daily loads (TMDLs) or other water-quality constituents, water-availability studies, and for basin-characteristic studies.
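
    A flow-duration curve itself is simply ranked flows plotted against exceedance probability; this sketch uses synthetic daily flows and the Weibull plotting position m/(n+1) to read off the durations referenced above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    q = np.exp(rng.normal(2.0, 1.0, 365 * 31))       # fake daily flows [cfs]

    q_sorted = np.sort(q)[::-1]                      # descending
    exceed = np.arange(1, q.size + 1) / (q.size + 1) # fraction of time exceeded

    for p in (0.001, 0.01, 0.5):                     # 0.1%, 1%, 50% durations
        print(f"{p:.1%}-duration flow: {np.interp(p, exceed, q_sorted):.1f} cfs")
    ```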

  15. Collaborative localization in wireless sensor networks via pattern recognition in radio irregularity using omnidirectional antennas.

    PubMed

    Jiang, Joe-Air; Chuang, Cheng-Long; Lin, Tzu-Shiang; Chen, Chia-Pang; Hung, Chih-Hung; Wang, Jiing-Yi; Liu, Chang-Wang; Lai, Tzu-Yun

    2010-01-01

    In recent years, various received signal strength (RSS)-based localization estimation approaches for wireless sensor networks (WSNs) have been proposed. RSS-based localization is regarded as a low-cost solution for many location-aware applications in WSNs. In previous studies, the radiation patterns of all sensor nodes are assumed to be spherical, which is an oversimplification of the radio propagation model in practical applications. In this study, we present an RSS-based cooperative localization method that estimates unknown coordinates of sensor nodes in a network. An arrangement of two external low-cost omnidirectional dipole antennas is developed using the distance-power gradient model. A modified robust regression is also proposed to determine the relative azimuth and distance between a sensor node and a fixed reference node. In addition, a cooperative localization scheme that incorporates estimations from multiple fixed reference nodes is presented to improve the accuracy of the localization. The proposed method is tested via computer-based analysis and field tests. Experimental results demonstrate that the proposed low-cost method is a useful solution for localizing sensor nodes in unknown or changing environments.
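
    The distance-power gradient model underlying the range estimation is the log-distance path-loss law, which can be inverted for distance. A sketch with hypothetical calibration parameters:

    ```python
    def distance_from_rss(rss_dbm, rss_d0_dbm=-40.0, d0=1.0, n=2.7):
        """Invert the log-distance path-loss model to estimate range.

        RSS(d) = RSS(d0) - 10*n*log10(d/d0), where n is the
        distance-power gradient (path-loss exponent).
        """
        return d0 * 10 ** ((rss_d0_dbm - rss_dbm) / (10.0 * n))

    # With the assumed parameters, -67 dBm corresponds to a ~10 m range
    print(distance_from_rss(-67.0))
    ```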

  16. A biomechanical model for fibril recruitment: Evaluation in tendons and arteries.

    PubMed

    Bevan, Tim; Merabet, Nadege; Hornsby, Jack; Watton, Paul N; Thompson, Mark S

    2018-06-06

    Simulations of soft tissue mechanobiological behaviour are increasingly important for clinical prediction of aneurysm, tendinopathy and other disorders. Mechanical behaviour at low stretches is governed by fibril straightening, transitioning into load-bearing at recruitment stretch, resulting in a tissue stiffening effect. Previous investigations have suggested theoretical relationships between stress-stretch measurements and the recruitment probability density function (PDF) but have neither derived these rigorously nor evaluated them experimentally. Other work has proposed image-based methods for measurement of recruitment but made use of arbitrary fibril critical straightness parameters. The aim of this work was to provide a sound theoretical basis for estimating the recruitment PDF from stress-stretch measurements and to evaluate this relationship using image-based methods, clearly motivating the choice of fibril critical straightness parameter in rat tail tendon and porcine artery. Rigorous derivation showed that the recruitment PDF may be estimated from the second stretch derivative of the first Piola-Kirchhoff tissue stress. Image-based fibril recruitment identified the fibril straightness parameter that maximised Pearson correlation coefficients (PCC) with estimated PDFs. Using these critical straightness parameters, the new method for estimating the recruitment PDF showed a PCC with image-based measures of 0.915 and 0.933 for tendons and arteries, respectively. This method may be used for accurate estimation of fibril recruitment PDF in mechanobiological simulation where fibril-level mechanical parameters are important for predicting cell behaviour. Copyright © 2018 Elsevier Ltd. All rights reserved.
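
    Numerically, the derived result means a measured stress-stretch curve can be differentiated twice with respect to stretch and normalized to give the recruitment PDF. A round-trip sketch on smooth synthetic data (the assumed Gaussian recruitment PDF is illustrative; real measurements would need smoothing before double differentiation):

    ```python
    import numpy as np
    from scipy.stats import norm

    stretch = np.linspace(1.0, 1.1, 500)
    true_pdf = norm(loc=1.05, scale=0.01).pdf(stretch)      # assumed recruitment PDF

    # Build a synthetic stress curve whose 2nd stretch derivative is the PDF
    dx = stretch[1] - stretch[0]
    stiffness = np.cumsum(true_pdf) * dx                    # ~CDF (first integral)
    stress = np.cumsum(stiffness) * dx                      # first Piola-Kirchhoff, a.u.

    # Recover the PDF as the second stretch derivative of stress, then normalize
    d2 = np.gradient(np.gradient(stress, stretch), stretch)
    pdf = np.clip(d2, 0.0, None)                            # a PDF cannot be negative
    pdf /= pdf.sum() * dx

    print(stretch[np.argmax(pdf)])                          # ~1.05, modal recruitment stretch
    ```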

  17. Estimating Vehicle Fuel Consumption and Emissions Using GPS Big Data

    PubMed Central

    Kan, Zihan; Zhang, Xia

    2018-01-01

    The energy consumption and emissions from vehicles adversely affect human health and urban sustainability. Analysis of GPS big data collected from vehicles can provide useful insights about the quantity and distribution of such energy consumption and emissions. Previous studies, which estimated fuel consumption/emissions from traffic based on GPS sampled data, have not sufficiently considered vehicle activities and may have led to erroneous estimations. By adopting the analytical construct of the space-time path in time geography, this study proposes methods that more accurately estimate and visualize vehicle energy consumption/emissions based on analysis of vehicles’ mobile activities (MA) and stationary activities (SA). First, we build space-time paths of individual vehicles, extract moving parameters, and identify MA and SA from each space-time path segment (STPS). Then we present an N-Dimensional framework for estimating and visualizing fuel consumption/emissions. For each STPS, fuel consumption, hot emissions, and cold start emissions are estimated based on activity type, i.e., MA, SA with engine-on and SA with engine-off. In the case study, fuel consumption and emissions of a single vehicle and a road network are estimated and visualized with GPS data. The estimation accuracy of the proposed approach is 88.6%. We also analyze the types of activities that produced fuel consumption on each road segment to explore the patterns and mechanisms of fuel consumption in the study area. The results not only show the effectiveness of the proposed approaches in estimating fuel consumption/emissions but also indicate their advantages for uncovering the relationships between fuel consumption and vehicles’ activities in road networks. PMID:29561813

  18. Estimating Vehicle Fuel Consumption and Emissions Using GPS Big Data.

    PubMed

    Kan, Zihan; Tang, Luliang; Kwan, Mei-Po; Zhang, Xia

    2018-03-21

    The energy consumption and emissions from vehicles adversely affect human health and urban sustainability. Analysis of GPS big data collected from vehicles can provide useful insights about the quantity and distribution of such energy consumption and emissions. Previous studies, which estimated fuel consumption/emissions from traffic based on GPS sampled data, have not sufficiently considered vehicle activities and may have led to erroneous estimations. By adopting the analytical construct of the space-time path in time geography, this study proposes methods that more accurately estimate and visualize vehicle energy consumption/emissions based on analysis of vehicles' mobile activities (MA) and stationary activities (SA). First, we build space-time paths of individual vehicles, extract moving parameters, and identify MA and SA from each space-time path segment (STPS). Then we present an N-Dimensional framework for estimating and visualizing fuel consumption/emissions. For each STPS, fuel consumption, hot emissions, and cold start emissions are estimated based on activity type, i.e., MA, SA with engine-on and SA with engine-off. In the case study, fuel consumption and emissions of a single vehicle and a road network are estimated and visualized with GPS data. The estimation accuracy of the proposed approach is 88.6%. We also analyze the types of activities that produced fuel consumption on each road segment to explore the patterns and mechanisms of fuel consumption in the study area. The results not only show the effectiveness of the proposed approaches in estimating fuel consumption/emissions but also indicate their advantages for uncovering the relationships between fuel consumption and vehicles' activities in road networks.
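
    The step that distinguishes this framework from distance-only methods is labeling each space-time path segment as a mobile or stationary activity before applying an activity-specific fuel/emission model. A minimal sketch of that classification, with a hypothetical speed threshold:

    ```python
    from dataclasses import dataclass

    @dataclass
    class STPS:
        """One space-time path segment from a GPS trajectory."""
        distance_m: float
        duration_s: float
        engine_on: bool

    def classify(seg, speed_threshold_ms=0.5):
        """Label a segment as MA, SA with engine on, or SA with engine off."""
        speed = seg.distance_m / seg.duration_s if seg.duration_s > 0 else 0.0
        if speed >= speed_threshold_ms:
            return "MA"
        return "SA_engine_on" if seg.engine_on else "SA_engine_off"

    segments = [STPS(300.0, 60.0, True), STPS(2.0, 120.0, True), STPS(0.0, 600.0, False)]
    print([classify(s) for s in segments])  # ['MA', 'SA_engine_on', 'SA_engine_off']
    ```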

  19. Maximum likelihood estimation of correction for dilution bias in simple linear regression using replicates from subjects with extreme first measurements.

    PubMed

    Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn

    2008-09-30

    The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and estimator choice. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope, when the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs improve by maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects for estimation of the predictor's variance and by the use of the second measurement for estimation of the covariance between response and predictor. The gain of MLE enhances with stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, and simulations, where the behavior of profile-likelihood-based confidence intervals is examined. MLE was shown to be a robust estimator for non-normal distributions and efficient for small sample situations. Copyright (c) 2008 John Wiley & Sons, Ltd.
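
    The classical correction that this MLE generalizes divides the naive slope by a reliability ratio estimated from the replicate measurements. A simulated sketch of that baseline correction (regression calibration), not the authors' maximum likelihood estimator:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    x_true = rng.normal(0.0, 1.0, n)            # true predictor
    sigma_u = 0.6                               # measurement-error SD
    x1 = x_true + rng.normal(0.0, sigma_u, n)   # first error-prone measurement
    x2 = x_true + rng.normal(0.0, sigma_u, n)   # replicate (reliability study)
    y = 2.0 * x_true + rng.normal(0.0, 1.0, n)  # response; true slope is 2

    naive = np.cov(x1, y)[0, 1] / np.var(x1, ddof=1)
    # Reliability ratio lambda = var(X) / var(X + U), estimated from replicates
    lam = np.cov(x1, x2)[0, 1] / np.var(x1, ddof=1)
    print(naive, naive / lam)                   # biased ~1.47 vs corrected ~2.0
    ```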

  20. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

    Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. With the reliance of such systems on visual image characteristics, a need to overcome image degradations, such as random image-capture noise, motion, and quantization effects, is clearly necessary. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements on the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields allows a means of incorporating the temporal information into the restoration process. A summary on the conditions that indicate which type of filtering should be applied to a field is provided.

  1. The Economic Burden of Vision Loss and Eye Disorders among the United States Population Younger than 40 Years

    PubMed Central

    Wittenborn, John S.; Zhang, Xinzhi; Feagan, Charles W.; Crouse, Wesley L.; Shrestha, Sundar; Kemper, Alex R.; Hoerger, Thomas J.; Saaddine, Jinan B.

    2017-01-01

    Objective To estimate the economic burden of vision loss and eye disorders in the United States population younger than 40 years in 2012. Design Econometric and statistical analysis of survey, commercial claims, and census data. Participants The United States population younger than 40 years in 2012. Methods We categorized costs based on consensus guidelines. We estimated medical costs attributable to diagnosed eye-related disorders, undiagnosed vision loss, and medical vision aids using Medical Expenditure Panel Survey and MarketScan data. The prevalence of vision impairment and blindness were estimated using National Health and Nutrition Examination Survey data. We estimated costs from lost productivity using Survey of Income and Program Participation. We estimated costs of informal care, low vision aids, special education, school screening, government spending, and transfer payments based on published estimates and federal budgets. We estimated quality-adjusted life years (QALYs) lost based on published utility values. Main Outcome Measures Costs and QALYs lost in 2012. Results The economic burden of vision loss and eye disorders among the United States population younger than 40 years was $27.5 billion in 2012 (95% confidence interval, $21.5–$37.2 billion), including $5.9 billion for children and $21.6 billion for adults 18 to 39 years of age. Direct costs were $14.5 billion, including $7.3 billion in medical costs for diagnosed disorders, $4.9 billion in refraction correction, $0.5 billion in medical costs for undiagnosed vision loss, and $1.8 billion in other direct costs. Indirect costs were $13 billion, primarily because of $12.2 billion in productivity losses. In addition, vision loss cost society 215 000 QALYs. Conclusions We found a substantial burden resulting from vision loss and eye disorders in the United States population younger than 40 years, a population excluded from previous studies. Monetizing quality-of-life losses at $50 000 per QALY would add $10.8 billion in additional costs, indicating a total economic burden of $38.2 billion. Relative to previously reported estimates for the population 40 years of age and older, more than one third of the total cost of vision loss and eye disorders may be incurred by persons younger than 40 years. PMID:23631946

  2. PHYSIOLOGICALLY-BASED PHARMACOKINETIC (PBPK) MODEL FOR METHYL TERTIARY BUTYL ETHER (MTBE): A REVIEW OF EXISTING MODELS

    EPA Science Inventory

    MTBE is a volatile organic compound used as an oxygenate additive to gasoline, added to comply with the 1990 Clean Air Act. Previous PBPK models for MTBE were reviewed and incorporated into the Exposure Related Dose Estimating Model (ERDEM) software. This model also included an e...

  3. MAP-Motivated Carrier Synchronization of GMSK Based on the Laurent AMP Representation

    NASA Technical Reports Server (NTRS)

    Simon, M. K.

    1998-01-01

    Using the MAP estimation approach to carrier synchronization of digital modulations containing ISI together with a two pulse stream AMP representation of GMSK, it is possible to obtain an optimum closed loop configuration in the same manner as has been previously proposed for other conventional modulations with ISI.

  4. 76 FR 56322 - Fisheries of the Northeastern United States; Northeast (NE) Multispecies Fishery; Framework...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-13

    ... common pool vessels for FY 2011 due to overages of FY 2010 catch levels. This measure will help prevent.... SUPPLEMENTARY INFORMATION: FY 2011 Differential DAS Counting for Common Pool Vessels Based on preliminary FY 2010 common pool catch information available in February 2011, NMFS previously estimated that common...

  5. Estimating Critical Values for Strength of Alignment among Curriculum, Assessments, and Instruction

    ERIC Educational Resources Information Center

    Fulmer, Gavin W.

    2010-01-01

    School accountability decisions based on standardized tests hinge on the degree of alignment of the test with a state's standards. Yet no established criteria were available for judging strength of alignment. Previous studies of alignment among tests, standards, and teachers' instruction have yielded mixed results that are difficult to interpret…

  6. Estimating Critical Values for Strength of Alignment among Curriculum, Assessments, and Instruction

    ERIC Educational Resources Information Center

    Fulmer, Gavin W.

    2011-01-01

    School accountability decisions based on standardized tests hinge on the degree of alignment of the test with the state's standards documents. Yet, there exist no established criteria for judging strength of alignment. Previous measures of alignment among tests, standards, and teachers' instruction have yielded mixed results that are difficult to…

  7. An estimate of the shadow price of water in the southern Ogallala Aquifer

    USDA-ARS?s Scientific Manuscript database

    In this paper, we attempt to quantify the shadow price of an additional inch of groundwater resource left in situ for the Southern Ogallala Aquifer. Previous authors have shown the degree to which the optimal resource extraction path may diverge from the competitive extraction path based upon varyin...

  8. Calibration and validation of the COSMOS rover for surface soil moisture

    USDA-ARS?s Scientific Manuscript database

    The mobile COsmic-ray Soil Moisture Observing System (COSMOS) rover may be useful for validating satellite-based estimates of near surface soil moisture, but the accuracy with which the rover can measure 0-5 cm soil moisture has not been previously determined. Our objectives were to calibrate and va...

  9. A Chandra Study of Supernova Remnants in the Large and Small Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Schenck, Andrew Corey

    2017-08-01

    In the first part of this thesis we measure the interstellar abundances for the elements O, Ne, Mg, Si, and Fe in the Large Magellanic Cloud (LMC), based on the observational data of sixteen supernova remnants (SNRs) in the LMC as available in the public archive of the Chandra X-ray Observatory (Chandra). We find lower abundances than previous measurements based on a similar method using data obtained with the Advanced Satellite for Astrophysics and Cosmology (ASCA). We discuss the origins of the discrepancy between our Chandra and the previous ASCA measurements. We conclude that our measurements are generally more reliable than the ASCA results thanks to the high-resolution imaging spectroscopy with our Chandra data, although there remain some systematic uncertainties due to the use of different spectral modelings between the previous work and ours. We also discuss our results in comparison with the LMC abundance measurements based on optical observations of stars. The second part of this thesis is a detailed study of a core-collapse SNR B0049-73.6 in the Small Magellanic Cloud (SMC). Based on our deep Chandra observation, we detect metal-rich ejecta features extending out to the outermost boundary of B0049-73.6, which were not seen in the previous data. We find that the central nebula is dominated by emission from reverse-shocked ejecta material enriched in O, Ne, Mg, and Si. O-rich ejecta distribution is relatively smooth throughout the central nebula. In contrast the Si-rich material is highly structured. These results suggest that B0049-73.6 was produced by an asymmetric core-collapse explosion of a massive star. The estimated abundance ratios among these ejecta elements are in plausible agreement with the nucleosynthesis products from the explosion of a 13-15 solar-mass progenitor. We reveal that the central ring-like (in projection) ejecta nebula extends to ~9 pc from the SNR center. This suggests that the contact discontinuity (CD) may be located at a further distance from the SNR center than the previous estimate (~6 pc). Based on our estimated larger size of the CD, we suggest that the significant effect from the presence of a Fe-Ni bubble at the SNR center (as proposed by the previous work) may not be required to describe the overall dynamics of this SNR. Applying the Sedov-Taylor similarity solutions, we estimate the dynamical age of ~17,000 yr and an explosion energy of E0 ~ 1.7 x 10^51 erg for B0049-73.6. We place a stringent upper limit of LX ~ 6.0 x 10^32 erg s^-1 on the 0.3-7.0 keV band luminosity for the embedded compact stellar remnant at the center of B0049-73.6. Our tight estimate for the X-ray luminosity upper limit suggests that the compact stellar remnant of this SNR may be a similar object to those in a peculiar class of low-luminosity neutron stars (e.g., the so-called Dim Isolated neutron stars) or may possibly be a black hole. Finally, we demonstrate our adaptive mesh grid method for the analysis of the rich SNR data. We developed our own computer software to implement this technique which is useful for an efficient spatially-resolved spectroscopic study of high-quality datasets of SNRs. As part of this software we also implement automated spectral model fits for all individual spectra extracted from our adaptively defined small subregions. We illustrate the utility of this technique with an example study of SNR N63A in the LMC.

  10. Estimation of sum-to-one constrained parameters with non-Gaussian extensions of ensemble-based Kalman filters: application to a 1D ocean biogeochemical model

    NASA Astrophysics Data System (ADS)

    Simon, E.; Bertino, L.; Samuelsen, A.

    2011-12-01

    Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the non-Gaussian distribution of the variables in which they result. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous work [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters are relevant tools for combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4, 1, 36-54, 1995.
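
    The spherical-coordinates change of variables can be made concrete: k-1 unconstrained angles map to a point on the unit sphere, and the squared coordinates give k positive preferences summing to one. A minimal sketch in my own notation, not the paper's:

    ```python
    import numpy as np

    def angles_to_simplex(phi):
        """Map k-1 angles to k positive weights that sum to 1.

        Uses squared spherical coordinates: p_i = x_i**2 with x on the
        unit sphere, so sum(p) = |x|^2 = 1 by construction.
        """
        x = np.ones(len(phi) + 1)
        for i, a in enumerate(phi):
            x[i] *= np.cos(a)
            x[i + 1:] *= np.sin(a)
        return x ** 2

    p = angles_to_simplex(np.array([0.7, 1.1]))
    print(p, p.sum())   # three grazing preferences, summing to 1.0
    ```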

  11. Evidence for a recent origin of penguins

    PubMed Central

    Subramanian, Sankar; Beans-Picón, Gabrielle; Swaminathan, Siva K.; Millar, Craig D.; Lambert, David M.

    2013-01-01

    Penguins are a remarkable group of birds, with the 18 extant species living in diverse climatic zones from the tropics to Antarctica. The timing of the origin of these extant penguins remains controversial. Previous studies based on DNA sequences and fossil records have suggested widely differing times for the origin of the group. This has given rise to widely differing biogeographic narratives about their evolution. To resolve this problem, we sequenced five introns from 11 species representing all genera of living penguins. Using these data and other available DNA sequences, together with the ages of multiple penguin fossils to calibrate the molecular clock, we estimated the age of the most recent common ancestor of extant penguins to be 20.4 Myr (17.0–23.8 Myr). This time is half of the previous estimates based on molecular sequence data. Our results suggest that most of the major groups of extant penguins diverged 11–16 Ma. This overlaps with the sharp decline in Antarctic temperatures that began approximately 12 Ma, suggesting a possible relationship between climate change and penguin evolution. PMID:24227045

  12. Mars Propellant Liquefaction and Storage Performance Modeling using Thermal Desktop with an Integrated Cryocooler Model

    NASA Technical Reports Server (NTRS)

    Desai, Pooja; Hauser, Dan; Sutherlin, Steven

    2017-01-01

    NASA's current Mars architectures assume the production and storage of 23 tons of liquid oxygen on the surface of Mars over a duration of 500+ days. In order to do this in a mass-efficient manner, an energy-efficient refrigeration system will be required. Based on previous analysis, NASA has decided to do all liquefaction in the propulsion vehicle storage tanks. In order to allow for transient Martian environmental effects, a propellant liquefaction and storage system for a Mars Ascent Vehicle (MAV) was modeled using Thermal Desktop. The model consisted of a propellant tank containing a broad area cooling loop heat exchanger integrated with a reverse turbo Brayton cryocooler. Cryocooler sizing and performance modeling was conducted using MAV diurnal heat loads and radiator rejection temperatures predicted from a previous thermal model of the MAV. A system was also sized and modeled using an alternative heat rejection system that relies on a forced convection heat exchanger. Cryocooler mass, input power, and heat rejection for both systems were estimated and compared against sizing based on non-transient sizing estimates.

  13. Mars Propellant Liquefaction Modeling in Thermal Desktop

    NASA Technical Reports Server (NTRS)

    Desai, Pooja; Hauser, Dan; Sutherlin, Steven

    2017-01-01

    NASA's current Mars architectures assume the production and storage of 23 tons of liquid oxygen on the surface of Mars over a duration of 500+ days. In order to do this in a mass-efficient manner, an energy-efficient refrigeration system will be required. Based on previous analysis, NASA has decided to do all liquefaction in the propulsion vehicle storage tanks. In order to allow for transient Martian environmental effects, a propellant liquefaction and storage system for a Mars Ascent Vehicle (MAV) was modeled using Thermal Desktop. The model consisted of a propellant tank containing a broad area cooling loop heat exchanger integrated with a reverse turbo Brayton cryocooler. Cryocooler sizing and performance modeling was conducted using MAV diurnal heat loads and radiator rejection temperatures predicted from a previous thermal model of the MAV. A system was also sized and modeled using an alternative heat rejection system that relies on a forced convection heat exchanger. Cryocooler mass, input power, and heat rejection for both systems were estimated and compared against sizing based on non-transient sizing estimates.

  14. Approximation of epidemic models by diffusion processes and their statistical inference.

    PubMed

    Guy, Romain; Larédo, Catherine; Vergu, Elisabeta

    2015-02-01

    Multidimensional continuous-time Markov jump processes on Z^p form a usual set-up for modeling SIR-like epidemics. However, when facing incomplete epidemic data, inference based on the jump process is not easy to achieve. Here, we start building a new framework for the estimation of key parameters of epidemic models based on statistics of diffusion processes approximating the jump process. First, previous results on the approximation of density-dependent SIR-like models by diffusion processes with small diffusion coefficient 1/sqrt(N), where N is the population size, are generalized to non-autonomous systems. Second, our previous inference results on discretely observed diffusion processes with small diffusion coefficient are extended to time-dependent diffusions. Consistent and asymptotically Gaussian estimates are obtained for a fixed number n of observations, which corresponds to the epidemic context, and for N tending to infinity. A correction term, which yields better estimates non-asymptotically, is also included. Finally, the performance and robustness of our estimators with respect to various parameters such as R0 (the basic reproduction number), N, and n are investigated on simulations. Two models, SIR and SIRS, corresponding to single and recurrent outbreaks, respectively, are used to simulate data. The findings indicate that our estimators have good asymptotic properties and behave noticeably well for realistic numbers of observations and population sizes. This study lays the foundations of a generic inference method currently under extension to incompletely observed epidemic data. Indeed, contrary to the majority of current inference techniques for partially observed processes, which necessitate computer-intensive simulations, our method, being mostly an analytical approach, requires only the classical optimization steps.

  15. Development of an atmospheric N2O isotopocule model and optimization procedure, and application to source estimation

    NASA Astrophysics Data System (ADS)

    Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.

    2015-07-01

    This paper presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.

  16. Development of an atmospheric N2O isotopocule model and optimization procedure, and application to source estimation

    NASA Astrophysics Data System (ADS)

    Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.

    2015-12-01

    This work presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.

  17. Time-series analyses of air pollution and mortality in the United States: a subsampling approach.

    PubMed

    Moolgavkar, Suresh H; McClellan, Roger O; Dewanji, Anup; Turim, Jay; Luebeck, E Georg; Edwards, Melanie

    2013-01-01

    Hierarchical Bayesian methods have been used in previous papers to estimate national mean effects of air pollutants on daily deaths in time-series analyses. We obtained maximum likelihood estimates of the common national effects of the criteria pollutants on mortality based on time-series data from up to 108 metropolitan areas in the United States. We used a subsampling bootstrap procedure to obtain the maximum likelihood estimates and confidence bounds for common national effects of the criteria pollutants, as measured by the percentage increase in daily mortality associated with a unit increase in daily 24-hr mean pollutant concentration on the previous day, while controlling for weather and temporal trends. We considered five pollutants [PM10, ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulfur dioxide (SO2)] in single- and multipollutant analyses. Flexible ambient concentration-response models for the pollutant effects were considered as well. We performed limited sensitivity analyses with different degrees of freedom for time trends. In single-pollutant models, we observed significant associations of daily deaths with all pollutants. The O3 coefficient was highly sensitive to the degree of smoothing of time trends. Among the gases, SO2 and NO2 were most strongly associated with mortality. The flexible ambient concentration-response curve for O3 showed evidence of nonlinearity and a threshold at about 30 ppb. Differences between the results of our analyses and those reported from using the Bayesian approach suggest that estimates of the quantitative impact of pollutants depend on the choice of statistical approach, although results are not directly comparable because they are based on different data. In addition, the estimate of the O3-mortality coefficient depends on the amount of smoothing of time trends.
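
    A subsampling bootstrap of this kind draws subsets of cities, recomputes the pooled estimate, and inverts the subsample distribution to get confidence bounds. A schematic sketch with synthetic per-city coefficients standing in for the full time-series likelihood machinery:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 108
    city_betas = rng.normal(0.25, 0.4, size=n)   # hypothetical per-city effects
    theta_n = city_betas.mean()                  # stand-in for the common-effect MLE

    b, reps = 60, 2000                           # subsample size and replications
    theta_b = np.array([rng.choice(city_betas, size=b, replace=False).mean()
                        for _ in range(reps)])

    # Subsampling CI: approximate the law of sqrt(n)*(theta_n - theta) by the
    # empirical law of sqrt(b)*(theta_b - theta_n), then invert for bounds.
    z = np.sqrt(b) * (theta_b - theta_n)
    lo = theta_n - np.percentile(z, 97.5) / np.sqrt(n)
    hi = theta_n - np.percentile(z, 2.5) / np.sqrt(n)
    print(theta_n, (lo, hi))
    ```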

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher

    Accurate information about urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light data (NTL) provide a potential way to map urban area and its dynamics in an economical and timely way. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps, including data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Different from previous fixed-threshold methods with over- and under-estimation issues, in our method the optimal thresholds are estimated based on cluster size and overall nightlight magnitude in the cluster, and they vary with clusters. Two large countries, the United States and China, with different urbanization patterns were selected to map urban extents using the proposed method. The result indicates that the urbanized area occupies about 2% of total land area in the US, ranging from lower than 0.5% to higher than 10% at the state level, and less than 1% in China, ranging from lower than 0.1% to about 5% at the province level with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated using high-resolution land cover data at the cluster and regional levels. It was found that our method can map urban area in both countries efficiently and accurately. Compared to previous threshold techniques, our method reduces the over- and under-estimation issues when mapping urban extent over a large area. More important, our method shows its potential to map global urban extents and temporal dynamics using the DMSP/OLS NTL data in a timely, cost-effective way.

  19. Polygenic risk score analysis of pathologically confirmed Alzheimer disease.

    PubMed

    Escott-Price, Valentina; Myers, Amanda J; Huentelman, Matt; Hardy, John

    2017-08-01

    Previous estimates of the utility of polygenic risk score analysis for the prediction of Alzheimer disease have given area under the curve (AUC) estimates of <80%. However, these have been based on the genetic analysis of clinical case-control series. Here, we apply the same analytic approaches to a pathological case-control series and show a predictive AUC of 84%. We suggest that this analysis has clinical utility and that there is limited room for further improvement using genetic data. Ann Neurol 2017;82:311-314. © 2017 American Neurological Association.

  20. Automated assessment of noninvasive filling pressure using color Doppler M-mode echocardiography

    NASA Technical Reports Server (NTRS)

    Greenberg, N. L.; Firstenberg, M. S.; Cardon, L. A.; Zuckerman, J.; Levine, B. D.; Garcia, M. J.; Thomas, J. D.

    2001-01-01

    Assessment of left ventricular filling pressure usually requires invasive hemodynamic monitoring to follow the progression of disease or the response to therapy. Previous investigations have shown accurate estimation of wedge pressure using noninvasive Doppler information obtained from the ratio of the wave propagation slope from color M-mode (CMM) images and the peak early diastolic filling velocity from transmitral Doppler images. This study reports an automated algorithm that derives an estimate of wedge pressure based on the spatiotemporal velocity distribution available from digital CMM Doppler images of LV filling.

  1. Equation of State for RX-08-EL and RX-08-EP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, E.L.; Walton, J.

    1985-05-07

    JWL Equations of State (EOS's) have been estimated for RX-08-EL and RX-08-EP. The estimated JWL EOS parameters are listed. Previously, we derived a JWL EOS for RX-08-EN based on DYNA2D hydrodynamic code cylinder computations; comparisons with experimental cylinder test results are shown. The experimental cylinder shot results for RX-08-EL, shot K-473, were compared to the experimental cylinder shot results for RX-08-EN, shot K-463, as a reference. 10 figs., 6 tabs.
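
    For reference, the JWL equation of state referred to throughout is conventionally written as below, where V is the relative volume, E is the energy per unit volume, and A, B, R1, R2, and omega are the fitted parameters (this is the standard textbook form, not a parameter set from the report):

    ```latex
    % Standard JWL pressure-volume-energy form
    P = A\left(1 - \frac{\omega}{R_1 V}\right) e^{-R_1 V}
      + B\left(1 - \frac{\omega}{R_2 V}\right) e^{-R_2 V}
      + \frac{\omega E}{V}
    ```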

  2. PMICALC: an R code-based software for estimating post-mortem interval (PMI) compatible with Windows, Mac and Linux operating systems.

    PubMed

    Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel

    2010-01-30

    In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with Linear Regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humor using two different regression models: Additive Models (AM) and Support Vector Machine (SVM), which offer more flexibility than the previously used Linear Regression. The results from both models are better than those published to date and can give numerical expression of PMI with confidence intervals and graphic support within 20 min. The program also takes into account the cause of death. 2009 Elsevier Ireland Ltd. All rights reserved.
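
    For illustration, a regression of PMI on the three vitreous-humour analytes can be sketched with a support vector machine, one of the two model families mentioned. The data below are simulated and the trends only loosely mimic the known post-mortem rise of [K+] and [Hx]; this is not the PMICALC code (which is written in R):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)
    pmi = rng.uniform(2, 40, 300)                    # hours since death (simulated)
    K = 5.0 + 0.17 * pmi + rng.normal(0, 0.8, 300)   # [K+] rises roughly linearly
    Hx = 20 + 6.0 * pmi + rng.normal(0, 25, 300)     # [Hx] rises with PMI
    U = rng.normal(30, 8, 300)                       # [U] mainly a covariate
    X = np.column_stack([K, Hx, U])

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    model.fit(X, pmi)
    print(model.predict([[8.4, 140.0, 31.0]]))       # estimated PMI in hours
    ```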

  3. Radar Imaging of Non-Uniformly Rotating Targets via a Novel Approach for Multi-Component AM-FM Signal Parameter Estimation

    PubMed Central

    Wang, Yong

    2015-01-01

    A novel radar imaging approach for non-uniformly rotating targets is proposed in this study. It is assumed that the maneuverability of the non-cooperative target is severe, and the received signal in a range cell can be modeled as multi-component amplitude-modulated and frequency-modulated (AM-FM) signals after motion compensation. Then, the modified version of Chirplet decomposition (MCD) based on the integrated high order ambiguity function (IHAF) is presented for the parameter estimation of AM-FM signals, and the corresponding high quality instantaneous ISAR images can be obtained from the estimated parameters. Compared with the MCD algorithm based on the generalized cubic phase function (GCPF) in the authors’ previous paper, the novel algorithm presented in this paper is more accurate and efficient, and the results with simulated and real data demonstrate the superiority of the proposed method. PMID:25806870

  4. Combining Video, Audio and Lexical Indicators of Affect in Spontaneous Conversation via Particle Filtering

    PubMed Central

    Savran, Arman; Cao, Houwei; Shah, Miraj; Nenkova, Ani; Verma, Ragini

    2013-01-01

    We present experiments on fusing facial video, audio and lexical indicators for affect estimation during dyadic conversations. We use temporal statistics of texture descriptors extracted from facial video, a combination of various acoustic features, and lexical features to create regression based affect estimators for each modality. The single modality regressors are then combined using particle filtering, by treating these independent regression outputs as measurements of the affect states in a Bayesian filtering framework, where previous observations provide prediction about the current state by means of learned affect dynamics. Tested on the Audio-visual Emotion Recognition Challenge dataset, our single modality estimators achieve substantially higher scores than the official baseline method for every dimension of affect. Our filtering-based multi-modality fusion achieves correlation performance of 0.344 (baseline: 0.136) and 0.280 (baseline: 0.096) for the fully continuous and word level sub challenges, respectively. PMID:25300451

  5. Combining Video, Audio and Lexical Indicators of Affect in Spontaneous Conversation via Particle Filtering.

    PubMed

    Savran, Arman; Cao, Houwei; Shah, Miraj; Nenkova, Ani; Verma, Ragini

    2012-01-01

    We present experiments on fusing facial video, audio and lexical indicators for affect estimation during dyadic conversations. We use temporal statistics of texture descriptors extracted from facial video, a combination of various acoustic features, and lexical features to create regression based affect estimators for each modality. The single modality regressors are then combined using particle filtering, by treating these independent regression outputs as measurements of the affect states in a Bayesian filtering framework, where previous observations provide prediction about the current state by means of learned affect dynamics. Tested on the Audio-visual Emotion Recognition Challenge dataset, our single modality estimators achieve substantially higher scores than the official baseline method for every dimension of affect. Our filtering-based multi-modality fusion achieves correlation performance of 0.344 (baseline: 0.136) and 0.280 (baseline: 0.096) for the fully continuous and word level sub challenges, respectively.
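
    The fusion stage described here treats the per-modality regression outputs as noisy measurements of a latent affect state tracked by a particle filter. A minimal one-dimensional sketch, with all dynamics and noise parameters hypothetical (the paper learns the affect dynamics from data):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_particles = 500
    particles = rng.normal(0.0, 1.0, n_particles)      # latent affect state samples
    weights = np.full(n_particles, 1.0 / n_particles)

    def pf_step(particles, weights, measurements, meas_sigmas, a=0.9, q=0.2):
        # Predict: affect dynamics approximated by an AR(1) model
        particles = a * particles + rng.normal(0.0, q, particles.size)
        # Update: multiply Gaussian likelihoods of each modality's output
        for z, s in zip(measurements, meas_sigmas):
            weights = weights * np.exp(-0.5 * ((z - particles) / s) ** 2)
        weights /= weights.sum()
        # Resample (multinomial) to avoid weight degeneracy
        idx = rng.choice(particles.size, particles.size, p=weights)
        return particles[idx], np.full(particles.size, 1.0 / particles.size)

    # One frame: video, audio, and lexical regressors each report an estimate
    particles, weights = pf_step(particles, weights,
                                 measurements=[0.6, 0.4, 0.8],
                                 meas_sigmas=[0.3, 0.5, 0.6])
    print(particles.mean())                            # fused affect estimate
    ```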

  6. A reassessment of the suspended sediment load in the Madeira River basin from the Andes of Peru and Bolivia to the Amazon River in Brazil, based on 10 years of data from the HYBAM monitoring programme

    NASA Astrophysics Data System (ADS)

    Vauchel, Philippe; Santini, William; Guyot, Jean Loup; Moquet, Jean Sébastien; Martinez, Jean Michel; Espinoza, Jhan Carlo; Baby, Patrice; Fuertes, Oscar; Noriega, Luis; Puita, Oscar; Sondag, Francis; Fraizy, Pascal; Armijos, Elisa; Cochonneau, Gérard; Timouk, Franck; de Oliveira, Eurides; Filizola, Naziano; Molina, Jorge; Ronchail, Josyane

    2017-10-01

    The Madeira River is the second largest tributary of the Amazon River. It contributes approximately 13% of the Amazon River flow and it may contribute up to 50% of its sediment discharge to the Atlantic Ocean. Until now, the suspended sediment load of the Madeira River was not well known and was estimated in a broad range from 240 to 715 Mt yr-1. Since 2002, the HYBAM international network developed a new monitoring programme specially designed to provide more reliable data than previous attempts. It is based on the continuous monitoring of a set of 11 gauging stations in the Madeira River watershed from the Andes piedmont to the confluence with the Amazon River, and discrete sampling of the suspended sediment concentration every 7 or 10 days. This paper presents the results of the suspended sediment data obtained in the Madeira drainage basin during 2002-2011. The Madeira River suspended sediment load is estimated at 430 Mt yr-1 near its confluence with the Amazon River. The average production of the Madeira River Andean catchment is estimated at 640 Mt yr-1 (±30%), the corresponding sediment yield for the Andes is estimated at 3000 t km-2 yr-1 (±30%), and the average denudation rate is estimated at 1.20 mm yr-1 (±30%). Contrary to previous results that had mentioned high sedimentation rates in the Beni River floodplain, we detected no measurable sedimentation process in this part of the basin. On the Mamoré River basin, we observed heavy sediment deposition of approximately 210 Mt yr-1 that seems to confirm previous studies. But while these studies mentioned heavy sedimentation in the floodplain, we showed that sediment deposition occurred mainly in the Andean piedmont and immediate foreland in rivers (Parapeti, Grande, Pirai, Yapacani, Chimoré, Chaparé, Secure, Maniqui) with discharges that are not sufficiently large to transport their sediment load downstream in the lowlands.

  7. Generation of a new cystatin C-based estimating equation for glomerular filtration rate by use of 7 assays standardized to the international calibrator.

    PubMed

    Grubb, Anders; Horio, Masaru; Hansson, Lars-Olof; Björk, Jonas; Nyman, Ulf; Flodin, Mats; Larsson, Anders; Bökenkamp, Arend; Yasuda, Yoshinari; Blufpand, Hester; Lindström, Veronica; Zegers, Ingrid; Althaus, Harald; Blirup-Jensen, Søren; Itoh, Yoshi; Sjöström, Per; Nordin, Gunnar; Christensson, Anders; Klima, Horst; Sunde, Kathrin; Hjort-Christensen, Per; Armbruster, David; Ferrero, Carlo

    2014-07-01

    Many different cystatin C-based equations exist for estimating glomerular filtration rate. Major reasons for this are the previous lack of an international cystatin C calibrator and the nonequivalence of results from different cystatin C assays. Use of the recently introduced certified reference material, ERM-DA471/IFCC, and further work to achieve high agreement and equivalence of 7 commercially available cystatin C assays allowed a substantial decrease of the CV of the assays, as defined by their performance in an external quality assessment for clinical laboratory investigations. By use of 2 of these assays and a population of 4690 subjects, with large subpopulations of children and Asian and Caucasian adults, with their GFR determined by either renal or plasma inulin clearance or plasma iohexol clearance, we attempted to produce a virtually assay-independent simple cystatin C-based equation for estimation of GFR. We developed a simple cystatin C-based equation for estimation of GFR comprising only 2 variables, cystatin C concentration and age. No terms for race and sex are required for optimal diagnostic performance. The equation, eGFR = 130 x cystatin C^-1.069 x age^-0.117 - 7 (mL/min/1.73 m2), is also biologically oriented, with 1 term for the theoretical renal clearance of small molecules and 1 constant for extrarenal clearance of cystatin C. A virtually assay-independent simple cystatin C-based and biologically oriented equation for estimation of GFR, without terms for sex and race, was produced. © 2014 The American Association for Clinical Chemistry.
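
    A direct implementation of that two-variable equation is trivial; the sketch below assumes the CAPA coefficients as restored above (cystatin C in mg/L, age in years, eGFR in mL/min/1.73 m2):

    ```python
    def egfr_capa(cystatin_c_mg_l, age_years):
        """CAPA equation: eGFR = 130 * cysC^-1.069 * age^-0.117 - 7."""
        return 130.0 * cystatin_c_mg_l ** -1.069 * age_years ** -0.117 - 7.0

    # Example: cystatin C of 1.0 mg/L at age 60 gives ~73.5 mL/min/1.73 m2
    print(round(egfr_capa(1.0, 60), 1))
    ```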

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hess, Peter

    An improved microscopic cleavage model, based on a Morse-type and Lennard-Jones-type interaction instead of the previously employed half-sine function, is used to determine the maximum cleavage strength for the brittle materials diamond, tungsten, molybdenum, silicon, GaAs, silica, and graphite. The results of both interaction potentials are in much better agreement with the theoretical strength values obtained by ab initio calculations for diamond, tungsten, molybdenum, and silicon than the previous model. Reasonable estimates of the intrinsic strength are presented for GaAs, silica, and graphite, where first principles values are not available.

  9. Free vibration of arches flexible in shear.

    NASA Technical Reports Server (NTRS)

    Austin, W. J.; Veletsos, A. S.

    1973-01-01

    An analysis reported by Veletsos et al. (1972) concerning the free vibrational characteristics of circular arches vibrating in their own planes is considered. The analysis was based on a theory which neglects the effects of rotatory inertia and shearing deformation. A supplementary investigation is conducted to assess the effects of the previously neglected factors and to identify the conditions under which these effects are of practical significance or may be neglected. A simple approximate procedure is developed for estimating the natural frequencies of arches, giving due consideration to the effects of the previously neglected factors.

  10. Full velocity difference model for a car-following theory.

    PubMed

    Jiang, R; Wu, Q; Zhu, Z

    2001-07-01

    In this paper, we present a full velocity difference model for a car-following theory based on the previous models in the literature. To our knowledge, the model is a theoretical improvement over the previous ones because it considers more aspects of the car-following process than they do. This point is verified by numerical simulation. We then investigate the properties of the model using both analytic and numerical methods, and find that the model can describe the phase transition of traffic flow and estimate the evolution of traffic congestion.
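
    The acceleration law usually quoted for the full velocity difference (FVD) model is shown below, where V(Δx_n) is the optimal velocity function of the headway, Δv_n is the velocity difference to the preceding car, and κ and λ are sensitivity coefficients:

    ```latex
    % FVD car-following model: acceleration of car n
    \frac{\mathrm{d}v_n(t)}{\mathrm{d}t}
      = \kappa \left[ V(\Delta x_n(t)) - v_n(t) \right] + \lambda \, \Delta v_n(t)
    ```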

  11. FY 17 Q1 Commercial integrated heat pump with thermal storage milestone report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Heiba, Ahmad; Baxter, Van D.; Shen, Bo

    2017-01-01

    The commercial integrated heat pump with thermal storage (AS-IHP) offers significant energy saving over a baseline heat pump with electric water heater. The saving potential is maximized when the AS-IHP serves coincident high water heating and high space cooling demands. A previous energy performance analysis showed that the AS-IHP provides the highest benefit in the hot-humid and hot-dry/mixed dry climate regions. Analysis of technical potential energy savings for these climate zones based on the BTO Market calculator indicated that the following commercial building market segments had the highest water heating loads relative to space cooling and heating loads: education, food service, health care, lodging, and mercantile/service. In this study, we focused on these building types to conservatively estimate the market potential of the AS-IHP. Our analysis estimates maximum annual shipments of ~522,000 units assuming 100% of the total market is captured. An early replacement market based on replacement of systems in target buildings between 15 and 35 years old was estimated at ~136,000 units. Technical potential energy savings are estimated at ~0.27 quad based on the maximum market estimate, equivalent to ~13.9 MM Ton CO2 emissions reduction.

  12. A comparison of moment-based methods of estimation for the log Pearson type 3 distribution

    NASA Astrophysics Data System (ADS)

    Koutrouvelis, I. A.; Canavos, G. C.

    2000-06-01

    The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
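
    As a baseline for the comparisons above, the indirect method of moments fits a Pearson type 3 distribution to the logarithms of the flows and reads quantiles from the fit. A sketch on synthetic annual peaks (this is the textbook indirect method, not the adaptive mixed-moments estimator proposed here):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    floods = rng.lognormal(mean=6.0, sigma=0.5, size=80)   # synthetic annual peaks

    y = np.log(floods)
    skew = stats.skew(y, bias=False)
    # Indirect method of moments: match mean, SD, and skew of the log flows
    dist = stats.pearson3(skew, loc=y.mean(), scale=y.std(ddof=1))

    T = 100                                                # return period in years
    q100 = np.exp(dist.ppf(1.0 - 1.0 / T))                 # 100-year flood estimate
    print(q100)
    ```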

  13. Address-based versus random-digit-dial surveys: comparison of key health and risk indicators.

    PubMed

    Link, Michael W; Battaglia, Michael P; Frankel, Martin R; Osborn, Larry; Mokdad, Ali H

    2006-11-15

    Use of random-digit dialing (RDD) for conducting health surveys is increasingly problematic because of declining participation rates and eroding frame coverage. Alternative survey modes and sampling frames may improve response rates and increase the validity of survey estimates. In a 2005 pilot study conducted in six states as part of the Behavioral Risk Factor Surveillance System, the authors administered a mail survey to selected household members sampled from addresses in a US Postal Service database. The authors compared estimates based on data from the completed mail surveys (n = 3,010) with those from the Behavioral Risk Factor Surveillance System telephone surveys (n = 18,780). The mail survey data appeared reasonably complete, and estimates based on data from the two survey modes were largely equivalent. Differences found, such as differences in the estimated prevalences of binge drinking (mail = 20.3%, telephone = 13.1%) or behaviors linked to human immunodeficiency virus transmission (mail = 7.1%, telephone = 4.2%), were consistent with previous research showing that, for questions about sensitive behaviors, self-administered surveys generally produce higher estimates than interviewer-administered surveys. The mail survey also provided access to cell-phone-only households and households without telephones, which cannot be reached by means of standard RDD surveys.

  14. Model and parametric uncertainty in source-based kinematic models of earthquake ground motion

    USGS Publications Warehouse

    Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur

    2011-01-01

    Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.

  15. Estimating global nitrous oxide emissions by lichens and bryophytes with a process-based productivity model

    NASA Astrophysics Data System (ADS)

    Porada, Philipp; Pöschl, Ulrich; Kleidon, Axel; Beer, Christian; Weber, Bettina

    2017-03-01

    Nitrous oxide is a strong greenhouse gas and atmospheric ozone-depleting agent which is largely emitted by soils. Recently, lichens and bryophytes have also been shown to release significant amounts of nitrous oxide. This finding relies on ecosystem-scale estimates of net primary productivity of lichens and bryophytes, which are converted to nitrous oxide emissions by empirical relationships between productivity and respiration, as well as between respiration and nitrous oxide release. Here we obtain an alternative estimate of nitrous oxide emissions which is based on a global process-based non-vascular vegetation model of lichens and bryophytes. The model quantifies photosynthesis and respiration of lichens and bryophytes directly as a function of environmental conditions, such as light and temperature. Nitrous oxide emissions are then derived from simulated respiration assuming a fixed relationship between the two fluxes. This approach yields a global estimate of 0.27 (0.19-0.35) Tg N2O yr⁻¹ released by lichens and bryophytes. This is lower than previous estimates but corresponds to about 50% of the atmospheric deposition of nitrous oxide into the oceans or 25% of the atmospheric deposition on land. Uncertainty in our simulated estimate results from large variation in emission rates due to both physiological differences between species and spatial heterogeneity of climatic conditions. To constrain our predictions, combined online gas exchange measurements of respiration and nitrous oxide emissions may be helpful.

  16. Human papillomavirus (HPV) vaccination coverage in young Australian women is higher than previously estimated: independent estimates from a nationally representative mobile phone survey.

    PubMed

    Brotherton, Julia M L; Liu, Bette; Donovan, Basil; Kaldor, John M; Saville, Marion

    2014-01-23

    Accurate estimates of coverage are essential for estimating the population effectiveness of human papillomavirus (HPV) vaccination. Australia has a purpose built National HPV Vaccination Program Register for monitoring coverage, however notification of doses administered to young women in the community during the national catch-up program (2007-2009) was not compulsory. In 2011, we undertook a population-based mobile phone survey of young women to independently estimate HPV vaccination coverage. Randomly generated mobile phone numbers were dialed to recruit women aged 22-30 (age eligible for HPV vaccination) to complete a computer assisted telephone interview. Consent was sought to validate self reported HPV vaccination status against the national register. Coverage rates were calculated based on self report and weighted to the age and state of residence structure of the Australian female population. These were compared with coverage estimates from the register using Australian Bureau of Statistics estimated resident populations as the denominator. Among the 1379 participants, the national estimate for self reported HPV vaccination coverage for doses 1/2/3, respectively, weighted for age and state of residence, was 64/59/53%. This compares with coverage of 55/45/32% and 49/40/28% based on register records, using 2007 and 2011 population data as the denominators respectively. Some significant differences in coverage between the states were identified. 20% (223) of women returned a consent form allowing validation of doses against the register and provider records: among these women 85.6% (538) of self reported doses were confirmed. We confirmed that coverage rates for young women vaccinated in the community (at age 18-26 years) are underestimated by the national register and that under-notification is greater for second and third doses. Using 2011 population estimates, rather than estimates contemporaneous with the program rollout, reduces register-based coverage estimates further because of large population increases due to immigration since the program. Copyright © 2013 Elsevier Ltd. All rights reserved.
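
    A minimal sketch of the post-stratification weighting described above, in which self-reported coverage is weighted to the age and state-of-residence structure of the female population; the strata, coverage values, and population counts below are hypothetical.

    ```python
    # Hypothetical strata: (age group, state) with self-reported dose-1
    # coverage p and female population N from census estimates.
    survey = {
        ("22-25", "NSW"): {"p": 0.66, "N": 410_000},
        ("22-25", "VIC"): {"p": 0.63, "N": 320_000},
        ("26-30", "NSW"): {"p": 0.61, "N": 500_000},
        ("26-30", "VIC"): {"p": 0.58, "N": 390_000},
    }

    n_total = sum(s["N"] for s in survey.values())
    # Weighted national coverage: stratum coverage weighted by each
    # stratum's share of the population.
    coverage = sum(s["p"] * s["N"] / n_total for s in survey.values())
    print(f"weighted dose-1 coverage: {coverage:.1%}")
    ```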

  17. High-resolution estimates of Nubia-Somalia plate motion since 20 Ma from reconstructions of the Southwest Indian Ridge, Red Sea and Gulf of Aden

    NASA Astrophysics Data System (ADS)

    DeMets, C.; Merkouriev, S.

    2016-10-01

    Large gaps and inconsistencies remain in published estimates of Nubia-Somalia plate motion based on reconstructions of seafloor spreading data around Africa. Herein, we use newly available reconstructions of the Southwest Indian Ridge at ˜1-Myr intervals since 20 Ma to estimate Nubia-Somalia plate motion farther back in time than previously achieved and with an unprecedented degree of temporal resolution. At the northern end of the East African rift, our new estimates of Nubia-Somalia motion for six times from 0.78 Ma to 5.2 Ma differ by only 2 per cent from the rift-normal component of motion that is extrapolated from a recently estimated GPS angular velocity. The rate of rift-normal extension thus appears to have remained steady since at least 5.2 Ma. Our new rotations indicate that the two plates have moved relative to each other since at least 16 Ma and possibly longer. Motion has either been steady since at least 16 Ma or accelerated modestly between 6 and 5.2 Ma. Our Nubia-Somalia rotations predict 42.5 ± 3.8 km of rift-normal extension since 10.6 Ma across the well-studied, northern segment of the Main Ethiopian Rift, consistent with 40-50 km estimates for extension since 10.6 Ma based on seismological surveys of this narrow part of the plate boundary. Nubia-Somalia rotations are also derived by combining newly estimated Somalia-Arabia rotations that reconstruct the post-20-Ma opening of the Gulf of Aden with Nubia-Arabia rotations estimated via a probabilistic analysis of plausible opening scenarios for the Red Sea. These rotations predict Nubia-Somalia motion since 5.2 Ma that is consistent with that determined from Southwest Indian Ridge data and also predict 40 ± 3 km of rift-normal extension since 10.6 Ma across the Main Ethiopian Rift, consistent with our 42.5 ± 3.8 km Southwest Indian Ridge estimate. Our new rotations exclude at a high confidence level previous estimates of 12 ± 13 and 123 ± 14 km for rift-normal extension across the Main Ethiopian Rift since 10.6 Ma based on reconstructions of Chron 5n.2 along the Southwest Indian Ridge. Sparse coverage of magnetic reversals older than 16 Ma along the western third of the Southwest Indian Ridge precludes reliable determinations of Nubia-Somalia plate motion before 16 Ma, leaving unanswered the key question of when the motion between the two plates began.

  18. Estimation of the Basic Reproductive Ratio for Dengue Fever at the Take-Off Period of Dengue Infection.

    PubMed

    Jafaruddin; Indratno, Sapto W; Nuraini, Nuning; Supriatna, Asep K; Soewono, Edy

    2015-01-01

    Estimating the basic reproductive ratio ℛ0 of dengue fever has continued to be an ever-increasing challenge among epidemiologists. In this paper we propose two different constructions to estimate ℛ0, both derived from a dynamical system of a host-vector dengue transmission model. The constructions are based on the original assumption that in the early stages of an epidemic the infected human compartment increases exponentially at the same rate as the infected mosquito compartment (previous work). In the first proposed construction, we modify previous work by assuming that the rates of infection for the mosquito and human compartments might differ. In the second construction, we add an improvement by including more realistic conditions in which the dynamics of the infected human compartment are influenced by the dynamics of the infected mosquito compartment, and vice versa. We apply our construction to real dengue epidemic data from SB Hospital, Bandung, Indonesia, during the outbreak period Nov. 25, 2008-Dec. 2012. We also propose two scenarios for determining the take-off rate of infection at the beginning of a dengue epidemic for constructing the estimates of ℛ0: scenario I uses the equation of new dengue cases with respect to time (daily), and scenario II uses the equation of new dengue cases with respect to the cumulative number of new cases. The results show that our first construction of ℛ0 accommodates the take-off rate differences between mosquitoes and humans. Our second construction takes into account the presence of infective mosquitoes in the early growth rate of infective humans, and vice versa. We conclude that the second approach is more realistic than both our first approach and the previous work.
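
    A minimal sketch of scenario I under simplifying assumptions: the take-off rate is read off as the slope of log new cases versus time, then converted to ℛ0 through one common host-vector growth-rate relation that assumes equal take-off rates in the two compartments (as in the previous work the paper modifies). The removal rates and case counts are illustrative.

    ```python
    import numpy as np

    def takeoff_rate(new_cases_daily):
        """Scenario I: estimate the take-off (exponential growth) rate as the
        slope of log(new cases) versus time in the early epidemic phase."""
        t = np.arange(len(new_cases_daily))
        slope, _ = np.polyfit(t, np.log(new_cases_daily), 1)
        return slope

    cases = np.array([2, 3, 4, 6, 9, 13, 19, 28])   # hypothetical daily counts
    lam = takeoff_rate(cases)

    # One common host-vector relation between growth rate and R0, with
    # illustrative removal rates (per day) for humans and mosquitoes.
    gamma_h, gamma_v = 1 / 7.0, 1 / 10.0
    r0 = (1 + lam / gamma_h) * (1 + lam / gamma_v)
    print(f"take-off rate = {lam:.3f}/day, R0 estimate = {r0:.2f}")
    ```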

  19. Methods for the quantitative comparison of molecular estimates of clade age and the fossil record.

    PubMed

    Clarke, Julia A; Boyd, Clint A

    2015-01-01

    Approaches quantifying the relative congruence, or incongruence, of molecular divergence estimates and the fossil record have been limited. Previously proposed methods are largely node specific, assessing incongruence at particular nodes for which both fossil data and molecular divergence estimates are available. These existing metrics, and other methods that quantify incongruence across topologies including entirely extinct clades, have so far not taken into account uncertainty surrounding both the divergence estimates and the ages of fossils. They have also treated molecular divergence estimates younger than previously assessed fossil minimum estimates of clade age as if they were the same as cases in which they were older. However, these cases are not the same. Recovered divergence dates younger than compared oldest known occurrences require prior hypotheses regarding the phylogenetic position of the compared fossil record and standard assumptions about the relative timing of morphological and molecular change to be incorrect. Older molecular dates, by contrast, are consistent with an incomplete fossil record and do not require prior assessments of the fossil record to be unreliable in some way. Here, we compare previous approaches and introduce two new descriptive metrics. Both metrics explicitly incorporate information on uncertainty by utilizing the 95% confidence intervals on estimated divergence dates and data on stratigraphic uncertainty concerning the age of the compared fossils. Metric scores are maximized when these ranges are overlapping. MDI (minimum divergence incongruence) discriminates between situations where molecular estimates are younger or older than known fossils reporting both absolute fit values and a number score for incompatible nodes. DIG range (divergence implied gap range) allows quantification of the minimum increase in implied missing fossil record induced by enforcing a given set of molecular-based estimates. These metrics are used together to describe the relationship between time trees and a set of fossil data, which we recommend be phylogenetically vetted and referred on the basis of apomorphy. Differences from previously proposed metrics and the utility of MDI and DIG range are illustrated in three empirical case studies from angiosperms, ostracods, and birds. These case studies also illustrate the ways in which MDI and DIG range may be used to assess time trees resultant from analyses varying in calibration regime, divergence dating approach or molecular sequence data analyzed. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Assessing assay agreement estimation for multiple left-censored data: a multiple imputation approach.

    PubMed

    Lapidus, Nathanael; Chevret, Sylvie; Resche-Rigon, Matthieu

    2014-12-30

    Agreement between two assays is usually based on the concordance correlation coefficient (CCC), estimated from the means, standard deviations, and correlation coefficient of these assays. However, such data will often suffer from left-censoring because of the lower limits of detection of these assays. To handle such data, we propose to extend a multiple imputation approach by chained equations (MICE) developed in the closely related setting of a single left-censored assay. The performance of this two-step approach is compared with that of a previously published maximum likelihood estimation through a simulation study. Results show close estimates of the CCC by both methods, although the coverage is improved by our MICE proposal. An application to cytomegalovirus quantification data is provided. Copyright © 2014 John Wiley & Sons, Ltd.
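
    For reference, a minimal sketch of the CCC itself on synthetic paired data, with left-censored values handled by a crude single substitution; the paper's MICE approach would instead draw several imputations for the censored values and pool the resulting CCC estimates.

    ```python
    import numpy as np

    def ccc(x, y):
        """Lin's concordance correlation coefficient:
        2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1)
                          + (x.mean() - y.mean()) ** 2)

    # Hypothetical paired assay values; readings below the limit of
    # detection (LOD) are crudely replaced by LOD/2 here.
    lod = 1.0
    assay_a = np.array([0.4, 1.2, 2.3, 3.1, 4.0, 5.2])
    assay_b = np.array([0.6, 1.0, 2.6, 2.9, 4.4, 5.0])
    assay_a = np.where(assay_a < lod, lod / 2, assay_a)
    assay_b = np.where(assay_b < lod, lod / 2, assay_b)
    print(f"CCC = {ccc(assay_a, assay_b):.3f}")
    ```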

  1. Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA

    Purpose: To propose an automatic atlas-based segmentation framework of the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in only 30% of cases with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contours. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.

  2. A history estimate and evolutionary analysis of rabies virus variants in China.

    PubMed

    Ming, Pinggang; Yan, Jiaxin; Rayner, Simon; Meng, Shengli; Xu, Gelin; Tang, Qing; Wu, Jie; Luo, Jing; Yang, Xiaoming

    2010-03-01

    To investigate the evolutionary dynamics of rabies virus (RABV) in China, we collected and sequenced 55 isolates sampled from 14 Chinese provinces over the last 40 years and performed a coalescent-based analysis of the G gene. This revealed that the RABV currently circulating in China is composed of three main groups. Bayesian coalescent analysis estimated the date of the most recent common ancestor for the current RABV Chinese strains to be 1412 (with a 95% confidence interval of 1006-1736). The estimated mean substitution rate for the G gene sequences (3.961×10⁻⁴ substitutions per site per year) was in accordance with previous reports for RABV.

  3. Instrumental variables estimates of peer effects in social networks.

    PubMed

    An, Weihua

    2015-03-01

    Estimating peer effects with observational data is very difficult because of contextual confounding, peer selection, simultaneity bias, and measurement error, among other problems. In this paper, I show that instrumental variables (IVs) can help to address these problems in order to provide causal estimates of peer effects. Based on data collected from over 4000 students in six middle schools in China, I use IV methods to estimate peer effects on smoking. My design-based IV approach differs from previous ones in that it helps to construct potentially strong IVs and to directly test possible violations of exogeneity of the IVs. I show that measurement error in smoking can lead to both underestimated and imprecise estimates of peer effects. Based on a refined measure of smoking, I find consistent evidence for peer effects on smoking. If a student's best friend smoked within the past 30 days, the student was about one fifth (as indicated by the OLS estimate) or 40 percentage points (as indicated by the IV estimate) more likely to smoke in the same time period. The findings are robust to a variety of robustness checks. I also show that sharing cigarettes may be a mechanism for peer effects on smoking. A 10% increase in the number of cigarettes smoked by a student's best friend is associated with about a 4% increase in the number of cigarettes smoked by the student in the same time period. Copyright © 2014 Elsevier Inc. All rights reserved.
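
    A minimal sketch of the two-stage least squares logic behind an IV estimate of a peer effect, on synthetic data; the instrument, control, and effect sizes are illustrative and do not reproduce the study's design-based instruments.

    ```python
    import numpy as np

    def two_stage_least_squares(y, x_endog, z, w):
        """Minimal 2SLS: stage 1 regresses the endogenous regressor on the
        instrument(s) and exogenous controls; stage 2 regresses the outcome
        on the stage-1 fitted values and the same controls."""
        n = len(y)
        ones = np.ones((n, 1))
        first_stage = np.hstack([ones, z, w])
        x_hat = first_stage @ np.linalg.lstsq(first_stage, x_endog, rcond=None)[0]
        second_stage = np.hstack([ones, x_hat.reshape(-1, 1), w])
        beta = np.linalg.lstsq(second_stage, y, rcond=None)[0]
        return beta[1]   # coefficient on the (instrumented) peer variable

    # Synthetic illustration; all variables and effect sizes are hypothetical
    rng = np.random.default_rng(0)
    n = 4000
    u = rng.normal(size=n)                  # unobserved confounder
    z = rng.normal(size=(n, 1))             # design-based instrument
    w = rng.normal(size=(n, 1))             # exogenous control
    x = (0.8 * z[:, 0] + 0.5 * u + rng.normal(size=n) > 0).astype(float)
    y = 0.4 * x + 0.5 * u + 0.3 * w[:, 0] + rng.normal(size=n)
    print(f"IV estimate of the peer effect: {two_stage_least_squares(y, x, z, w):.3f}")
    ```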

  4. A Practical Strategy for sEMG-Based Knee Joint Moment Estimation During Gait and Its Validation in Individuals With Cerebral Palsy

    PubMed Central

    Kwon, Suncheol; Stanley, Christopher J.; Kim, Jung; Kim, Jonghyun; Damiano, Diane L.

    2013-01-01

    Individuals with cerebral palsy have neurological deficits that may interfere with motor function and lead to abnormal walking patterns. It is important to know the joint moment generated by the patient’s muscles during walking in order to assist the suboptimal gait patterns. In this paper, we describe a practical strategy for estimating the internal moment of a knee joint from surface electromyography (sEMG) and knee joint angle measurements. This strategy requires only isokinetic knee flexion and extension tests to obtain a relationship between the sEMG and the knee internal moment, and it does not necessitate comprehensive laboratory calibration, which typically requires a 3-D motion capture system and ground reaction force plates. Four estimation models were considered based on different assumptions about the functions of the relevant muscles during the isokinetic tests and the stance phase of walking. The performance of the four models was evaluated by comparing the estimated moments with the gold standard internal moment calculated from inverse dynamics. The results indicate that an optimal estimation model can be chosen based on the degree of cocontraction. The estimation error of the chosen model is acceptable (normalized root-mean-squared error: 0.15–0.29, R: 0.71–0.93) compared to previous studies (Doorenbosch and Harlaar, 2003; Doorenbosch and Harlaar, 2004; Doorenbosch, Joosten, and Harlaar, 2005), and this strategy provides a simple and effective solution for estimating knee joint moment from sEMG. PMID:22410952

  5. Estimation of geographic variation in human papillomavirus vaccine uptake in men and women: an online survey using facebook recruitment.

    PubMed

    Nelson, Erik J; Hughes, John; Oakes, J Michael; Pankow, James S; Kulasingam, Shalini L

    2014-09-01

    Federally funded surveys of human papillomavirus (HPV) vaccine uptake are important for pinpointing geographically based health disparities. Although national and state level data are available, local (ie, county and postal code level) data are not due to small sample sizes, confidentiality concerns, and cost. Local level HPV vaccine uptake data may be feasible to obtain by targeting specific geographic areas through social media advertising and recruitment strategies, in combination with online surveys. Our goal was to use Facebook-based recruitment and online surveys to estimate local variation in HPV vaccine uptake among young men and women in Minnesota. From November 2012 to January 2013, men and women were recruited via a targeted Facebook advertisement campaign to complete an online survey about HPV vaccination practices. The Facebook advertisements were targeted to recruit men and women by location (25 mile radius of Minneapolis, Minnesota, United States), age (18-30 years), and language (English). Of the 2079 men and women who responded to the Facebook advertisements and visited the study website, 1003 (48.2%) enrolled in the study and completed the survey. The average advertising cost per completed survey was US $1.36. Among those who reported their postal code, 90.6% (881/972) of the participants lived within the previously defined geographic study area. Receipt of 1 dose or more of HPV vaccine was reported by 65.6% (351/535) of women and 13.0% (45/347) of men. These results differ from previously reported Minnesota state level estimates (53.8% for young women and 20.8% for young men) and from national estimates (34.5% for women and 2.3% for men). This study shows that recruiting a representative sample of young men and women based on county and postal code location to complete a survey on HPV vaccination uptake via the Internet is a cost-effective and feasible strategy. This study also highlights the need for local estimates to assess the variation in HPV vaccine uptake, as these estimates differ considerably from those obtained using survey data that are aggregated to the state or federal level.

  6. Estimation of Geographic Variation in Human Papillomavirus Vaccine Uptake in Men and Women: An Online Survey Using Facebook Recruitment

    PubMed Central

    Hughes, John; Oakes, J Michael; Pankow, James S; Kulasingam, Shalini L

    2014-01-01

    Background Federally funded surveys of human papillomavirus (HPV) vaccine uptake are important for pinpointing geographically based health disparities. Although national and state level data are available, local (ie, county and postal code level) data are not due to small sample sizes, confidentiality concerns, and cost. Local level HPV vaccine uptake data may be feasible to obtain by targeting specific geographic areas through social media advertising and recruitment strategies, in combination with online surveys. Objective Our goal was to use Facebook-based recruitment and online surveys to estimate local variation in HPV vaccine uptake among young men and women in Minnesota. Methods From November 2012 to January 2013, men and women were recruited via a targeted Facebook advertisement campaign to complete an online survey about HPV vaccination practices. The Facebook advertisements were targeted to recruit men and women by location (25 mile radius of Minneapolis, Minnesota, United States), age (18-30 years), and language (English). Results Of the 2079 men and women who responded to the Facebook advertisements and visited the study website, 1003 (48.2%) enrolled in the study and completed the survey. The average advertising cost per completed survey was US $1.36. Among those who reported their postal code, 90.6% (881/972) of the participants lived within the previously defined geographic study area. Receipt of 1 dose or more of HPV vaccine was reported by 65.6% (351/535) of women and 13.0% (45/347) of men. These results differ from previously reported Minnesota state level estimates (53.8% for young women and 20.8% for young men) and from national estimates (34.5% for women and 2.3% for men). Conclusions This study shows that recruiting a representative sample of young men and women based on county and postal code location to complete a survey on HPV vaccination uptake via the Internet is a cost-effective and feasible strategy. This study also highlights the need for local estimates to assess the variation in HPV vaccine uptake, as these estimates differ considerably from those obtained using survey data that are aggregated to the state or federal level. PMID:25231937

  7. Interim 2001-based national population projections for the United Kingdom and constituent countries.

    PubMed

    Shaw, Chris

    2003-01-01

    This article describes new 2001-based national population projections which were carried out following the publication in September 2002 of the first results of the 2001 Census. These "interim" projections, carried out by the Government Actuary in consultation with the Registrars General, take preliminary account of the results of the Census which showed that the base population used in previous projections was overestimated. The interim projections also incorporate a reduced assumption of net international migration to the United Kingdom, informed by the first results of the 2001 Census and taking account of more recent migration information. The population of the United Kingdom is now projected to increase from an estimated 58.8 million in 2001 to reach 63.2 million by 2026. The projected population at 2026 is about 1.8 million (2.8 per cent) lower than in the previous (2000-based) projections.

  8. From reading numbers to seeing ratios: a benefit of icons for risk comprehension.

    PubMed

    Tubau, Elisabet; Rodríguez-Ferreiro, Javier; Barberia, Itxaso; Colomé, Àngels

    2018-06-21

    Promoting a better understanding of statistical data is becoming increasingly important for improving risk comprehension and decision-making. In this regard, previous studies on Bayesian problem solving have shown that iconic representations help infer frequencies in sets and subsets. Nevertheless, the mechanisms by which icons enhance performance remain unclear. Here, we tested the hypothesis that the benefit offered by icon arrays lies in a better alignment between presented and requested relationships, which should facilitate the comprehension of the requested ratio beyond the represented quantities. To this end, we analyzed individual risk estimates based on data presented either in standard verbal presentations (percentages and natural frequency formats) or as icon arrays. Compared to the other formats, icons led to estimates that were more accurate, and importantly, promoted the use of equivalent expressions for the requested probability. Furthermore, whereas the accuracy of the estimates based on verbal formats depended on their alignment with the text, all the estimates based on icons were equally accurate. Therefore, these results support the proposal that icons enhance the comprehension of the ratio and its mapping onto the requested probability and point to relational misalignment as potential interference for text-based Bayesian reasoning. The present findings also argue against an intrinsic difficulty with understanding single-event probabilities.
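
    For illustration, the set-subset ratio that an icon array makes visible corresponds to a natural-frequency computation like the following; the prevalence and test characteristics are hypothetical.

    ```python
    # Out of 1000 icons, how many of the "positive test" icons are actually
    # sick? (all numbers hypothetical)
    population = 1000
    prevalence = 0.01            # 10 of 1000 have the condition
    sensitivity = 0.90           # true-positive rate
    false_positive_rate = 0.09

    sick = population * prevalence
    true_positives = sick * sensitivity                          # 9 icons
    false_positives = (population - sick) * false_positive_rate  # ~89 icons
    p_sick_given_positive = true_positives / (true_positives + false_positives)
    print(f"P(condition | positive test) = {p_sick_given_positive:.2f}")
    ```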

  9. Fitting power-laws in empirical data with estimators that work for all exponents

    PubMed Central

    Hanel, Rudolf; Corominas-Murtra, Bernat; Liu, Bo; Thurner, Stefan

    2017-01-01

    Most standard methods based on maximum likelihood (ML) estimates of power-law exponents can only be reliably used to identify exponents smaller than minus one. The argument that power laws are otherwise not normalizable depends on the underlying sample space the data are drawn from, and is true only for sample spaces that are unbounded from above. Power laws obtained from bounded sample spaces (as is the case for practically all data-related problems) are always free of such limitations, and maximum likelihood estimates can be obtained for arbitrary powers without restriction. Here we first derive the appropriate ML estimator for arbitrary exponents of power-law distributions on bounded discrete sample spaces. We then show that an almost identical estimator also works perfectly for continuous data. We implemented this ML estimator and compare its performance with previous attempts. We present a general recipe for how to use these estimators and present the associated computer codes. PMID:28245249
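
    A minimal sketch of the bounded-sample-space ML estimator described above: because the support {1, ..., W} is finite, the normalization constant exists for any real exponent, and the likelihood can be maximized numerically. Function and variable names are ours, not the paper's.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def fit_power_law_bounded(data, w_max):
        """ML estimate of gamma for p(x) proportional to x**(-gamma) on the
        bounded discrete sample space {1, ..., w_max}; the normalization
        exists for any real gamma because the support is finite."""
        data = np.asarray(data, dtype=float)
        support = np.arange(1, w_max + 1, dtype=float)
        sum_log = np.log(data).sum()
        n = len(data)

        def neg_log_likelihood(gamma):
            log_z = np.log(np.sum(support ** (-gamma)))
            return gamma * sum_log + n * log_z

        res = minimize_scalar(neg_log_likelihood, bounds=(-5, 10), method="bounded")
        return res.x

    # Sample from a bounded power law x**(-0.5): an exponent larger than
    # minus one, which standard unbounded ML fitters cannot handle
    rng = np.random.default_rng(2)
    w = 1000
    probs = np.arange(1, w + 1, dtype=float) ** -0.5
    probs /= probs.sum()
    sample = rng.choice(np.arange(1, w + 1), size=5000, p=probs)
    print(f"estimated gamma: {fit_power_law_bounded(sample, w):.3f}")
    ```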

  10. Elimination of Emergency Department Medication Errors Due To Estimated Weights.

    PubMed

    Greenwalt, Mary; Griffen, David; Wilkerson, Jim

    2017-01-01

    From 7/2014 through 6/2015, 10 emergency department (ED) medication dosing errors were reported through the electronic incident reporting system of an urban academic medical center. Analysis of these medication errors identified inaccurate estimated weight on patients as the root cause. The goal of this project was to reduce weight-based dosing medication errors due to inaccurate estimated weights on patients presenting to the ED. Chart review revealed that 13.8% of estimated weights documented on admitted ED patients varied more than 10% from subsequent actual admission weights recorded. A random sample of 100 charts containing estimated weights revealed 2 previously unreported significant medication dosage errors (a significant-error rate of 0.02). Key improvements included removing barriers to weighing ED patients, storytelling to engage staff and change culture, and removal of the estimated weight documentation field from the ED electronic health record (EHR) forms. With these improvements, estimated weights on ED patients, and the resulting medication errors, were eliminated.

  11. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface—which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history–based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.

  12. Evaluating the phylogenetic signal limit from mitogenomes, slow evolving nuclear genes, and the concatenation approach. New insights into the Lacertini radiation using fast evolving nuclear genes and species trees.

    PubMed

    Mendes, Joana; Harris, D James; Carranza, Salvador; Salvi, Daniele

    2016-07-01

    Estimating the phylogeny of lacertid lizards, and particularly the tribe Lacertini, has been challenging, possibly due to the fast radiation of this group resulting in a hard polytomy. However, this is still an open question, as concatenated data primarily from mitochondrial markers have been used so far, whereas in a recent phylogeny based on a compilation of these data within a squamate supermatrix the basal polytomy seems to be resolved. In this study, we estimate phylogenetic relationships between all Lacertini genera using for the first time DNA sequences from five fast evolving nuclear genes (acm4, mc1r, pdc, βfib and reln) and two mitochondrial genes (nd4 and 12S). We generated a total of 529 sequences from 88 species and used Maximum Likelihood and Bayesian Inference methods based on a concatenated multilocus dataset as well as a coalescent-based species tree approach with the aim of (i) shedding light on the basal relationships of Lacertini, (ii) assessing the monophyly of genera that were previously questioned, and (iii) discussing differences between estimates from this and previous studies based on different markers and phylogenetic methods. Results uncovered (i) a new phylogenetic clade formed by the monotypic genera Archaeolacerta, Zootoca, Teira and Scelarcis; and (ii) support for the monophyly of the Algyroides clade, with two sister species pairs represented by western (A. marchi and A. fitzingeri) and eastern (A. nigropunctatus and A. moreoticus) lineages. In both cases the members of these groups show peculiar morphology and very different geographical distributions, suggesting that they are relictual groups that were once diverse and widespread. They probably originated about 11-13 million years ago during early events of speciation in the tribe, and the split between their members is estimated to be only slightly older. This scenario may explain why mitochondrial markers (possibly saturated at higher divergence levels) or slower nuclear markers used in previous studies (likely lacking enough phylogenetic signal) failed to recover these relationships. Finally, the phylogenetic position of most remaining genera was unresolved, corroborating the hypothesis of a hard polytomy in the Lacertini phylogeny due to a fast radiation. This is in agreement with all previous studies but in sharp contrast with a recent squamate megaphylogeny. We show that the supermatrix approach may provide high support for incorrect nodes that are not supported either by the original sequence data or by the new data from this study. This finding suggests caution when using megaphylogenies to integrate inter-generic relationships in comparative ecological and evolutionary studies. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Method and System for Temporal Filtering in Video Compression Systems

    NASA Technical Reports Server (NTRS)

    Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim

    2011-01-01

    Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining a first motion vector from the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from the first pixel position in the first image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining the position of a fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
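
    A minimal sketch of one possible non-linear motion model consistent with the description above: a quadratic (second-order) extrapolation of a pixel trajectory from three frames, in which the difference of the two motion vectors acts as an acceleration term. This is an illustration, not the patented method itself.

    ```python
    import numpy as np

    def extrapolate_fourth_position(p1, p2, p3):
        """Quadratic extrapolation of a pixel trajectory: the motion vectors
        v1 = p2 - p1 and v2 = p3 - p2 give velocity, their difference gives
        acceleration, and both are carried forward one frame
        (equivalently 3*p3 - 3*p2 + p1)."""
        p1, p2, p3 = map(np.asarray, (p1, p2, p3))
        v1, v2 = p2 - p1, p3 - p2
        accel = v2 - v1
        return p3 + v2 + accel

    # A pixel accelerating to the right: positions in frames 1-3
    print(extrapolate_fourth_position((10, 5), (12, 5), (16, 5)))  # -> [22  5]
    ```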

  14. VizieR Online Data Catalog: 5 Galactic GC proper motions from Gaia DR1 (Watkins+, 2017)

    NASA Astrophysics Data System (ADS)

    Watkins, L. L.; van der Marel, R. P.

    2017-11-01

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories. (4 data files).

  15. Regional characterization of the Tokyo metropolitan area using a highly dense seismic network (MeSO-net)

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Nakagawa, S.; Sakai, S.; Panayotopoulos, Y.; Ishikawa, M.; Ishibe, T.; Kimura, H.; Honda, R.

    2014-12-01

    We have developed a dense seismic network, the MeSO-net (Metropolitan Seismic Observation network), since 2007 in the greater Tokyo urban region under the Special Project for Earthquake Disaster Mitigation in Tokyo Metropolitan Area (FY2007-FY2011) and the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters (FY2012-FY2016) (Hirata et al., 2009). So far we have acquired more than 120 TB of continuous seismic data from the MeSO-net, which consists of about 300 seismic stations. Using MeSO-net data, we obtain clear P- and S-wave velocity tomograms (Nakagawa et al., 2010) and Qp and Qs tomograms (Panayotopoulos et al., 2014) that show a clear image of the Philippine Sea Plate (PSP) and the Pacific Plate (PAP). The depth to the top of the PSP, 20 to 30 km beneath the northern part of Tokyo Bay, is about 10 km shallower than previous estimates based on the distribution of seismicity (Ishida, 1992). This shallower plate geometry changes estimates of strong ground motion for seismic hazard analysis within the Tokyo region. Based on the elastic wave velocities of rocks and minerals, we interpreted the tomographic images as petrologic images. The tomographic images revealed the presence of two stepwise velocity increases in the top layer of the subducting PSP slab. Because the strength of serpentinized peridotite is not large enough for brittle fracture, if that area is smaller than previously estimated, the possible area of a large thrust fault on the upper surface of the PSP can be larger than previously thought. The change in seismicity rate after the 2011 Tohoku-oki earthquake suggests a change in stressing rate in greater Tokyo. Quantitative analysis of MeSO-net data shows a significant increase in the rate of earthquakes that have a fault orientation favorable to increasing Coulomb stress after the Tohoku-oki event.

  16. Tycho- Gaia Astrometric Solution Parallaxes and Proper Motions for Five Galactic Globular Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Laura L.; Van der Marel, Roeland P., E-mail: lwatkins@stsci.edu

    2017-04-20

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories.

  17. Equations based on anthropometry to predict body fat measured by absorptiometry in schoolchildren and adolescents.

    PubMed

    Ortiz-Hernández, Luis; Vega López, A Valeria; Ramos-Ibáñez, Norma; Cázares Lara, L Joana; Medina Gómez, R Joab; Pérez-Salgado, Diana

    To develop and validate equations to estimate the percentage of body fat of children and adolescents from Mexico using anthropometric measurements. A cross-sectional study was carried out with 601 children and adolescents from Mexico aged 5-19 years. The participants were randomly divided into the following two groups: the development sample (n=398) and the validation sample (n=203). The validity of previously published equations (e.g., Slaughter) was also assessed. The percentage of body fat was estimated by dual-energy X-ray absorptiometry. The anthropometric measurements included height, sitting height, weight, waist and arm circumferences, skinfolds (triceps, biceps, subscapular, supra-iliac, and calf), and elbow and bitrochanteric breadth. Linear regression models were estimated with the percentage of body fat as the dependent variable and the anthropometric measurements as the independent variables. Equations were created based on combinations of six to nine anthropometric variables and had coefficients of determination (r²) equal to or higher than 92.4% for boys and 85.8% for girls. In the validation sample, the developed equations had high r² values (≥85.6% in boys and ≥78.1% in girls) in all age groups, low standard errors (SE≤3.05% in boys and ≤3.52% in girls), and the intercepts were not different from the origin (p>0.050). Using the previously published equations, the coefficients of determination were lower, and/or the intercepts were different from the origin. The equations developed in this study can be used to assess the percentage of body fat of Mexican schoolchildren and adolescents, as they demonstrate greater validity and lower error compared with previously published equations. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
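
    A minimal sketch of the develop-then-validate workflow described above, using ordinary least squares on synthetic anthropometric data; the predictor set, coefficients, and split sizes are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 400
    # Synthetic anthropometry (units illustrative): weight, triceps and
    # subscapular skinfolds, waist circumference
    X = rng.normal([45, 12, 10, 65], [10, 4, 3, 8], size=(n, 4))
    pct_fat = (2 + 0.2 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2]
               + 0.05 * X[:, 3] + rng.normal(0, 2.5, n))   # "DXA" target

    dev, val = slice(0, 265), slice(265, n)   # development / validation split
    A = np.hstack([np.ones((n, 1)), X])
    coef, *_ = np.linalg.lstsq(A[dev], pct_fat[dev], rcond=None)

    # Validate on held-out subjects: r^2 and standard error of the residuals
    resid = pct_fat[val] - A[val] @ coef
    ss_res = (resid ** 2).sum()
    ss_tot = ((pct_fat[val] - pct_fat[val].mean()) ** 2).sum()
    print(f"validation r^2 = {1 - ss_res / ss_tot:.3f}, "
          f"SE = {resid.std(ddof=A.shape[1]):.2f}%")
    ```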

  18. Estimating release of carbon from 1990 and 1991 forest fires in Alaska

    NASA Technical Reports Server (NTRS)

    Kaisischke, Eric S.; French, Nancy H. F.; Bourgeau-Chavez, Laura L.; Christensen, N. L., Jr.

    1995-01-01

    An improved method to estimate the amounts of carbon released during fires in the boreal forest zone of Alaska in 1990 and 1991 is described. This method divides the state into 64 distinct physiographic regions and estimates the areal extent of five different land covers: two forest types, peat land, tundra, and nonvegetated. The areal extent of each cover type was estimated from a review of topographic maps of each region and observations on the distribution of forest types within the state. Using previous observations and theoretical models for the two forest types found in interior Alaska, models of biomass accumulation as a function of stand age were developed. Stand age distributions for each region were determined using a statistical distribution based on fire frequency, which was derived from available long-term historical records. Estimates of the degree of biomass combusted were based on recent field observations as well as research reported in the literature. The location and areal extent of fires in this region for 1990 and 1991 were based on both field observations and analysis of satellite (advanced very high resolution radiometer (AVHRR)) data sets. Estimates of average carbon release for the two study years ranged between 2.54 and 3.00 kg/sq m, which are 2.2 to 2.6 times greater than estimates used in other studies of carbon release through biomass burning in boreal forests. Total average annual carbon release for the two years ranged between 0.012 and 0.018 Pg C/yr, with the lower value resulting from the AVHRR estimates of fire location and area.
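
    The bookkeeping underlying such estimates reduces to a product of burned area, biomass density, fraction combusted, and carbon fraction, summed over cover types. A minimal sketch with entirely hypothetical numbers (not the paper's values):

    ```python
    # Illustrative fire carbon-release bookkeeping (all numbers hypothetical):
    # release = burned area * biomass density * fraction combusted * carbon fraction
    covers = {
        #                area m^2, biomass kg/m^2, fraction combusted
        "black spruce": (1.2e9,    8.0,            0.35),
        "white spruce": (0.4e9,    12.0,           0.20),
        "peatland":     (0.3e9,    15.0,           0.15),
    }
    CARBON_FRACTION = 0.5   # carbon content of dry biomass

    total_c_kg = sum(area * biomass * burned * CARBON_FRACTION
                     for area, biomass, burned in covers.values())
    total_area = sum(v[0] for v in covers.values())
    print(f"release: {total_c_kg / 1e9:.2f} Tg C, "
          f"mean: {total_c_kg / total_area:.2f} kg C/m^2")
    ```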

  19. Automated nodule location and size estimation using a multi-scale Laplacian of Gaussian filtering approach.

    PubMed

    Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I

    2009-01-01

    Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods respectively.
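
    A minimal sketch of the multi-scale LoG idea on a synthetic 2D image: the scale-normalized filter response at the seed point is extremal when the filter scale matches the blob, giving a size estimate directly from the best-responding scale. The disk image and scale range are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def estimate_blob_radius(image, seed, sigmas):
        """Multi-scale LoG size estimate at a seed point: the scale-normalized
        response (sigma**2 * LoG) at the seed is most negative for a bright
        blob when the scale matches it; for a disk of radius r the extremum
        occurs near sigma = r / sqrt(2)."""
        responses = [s ** 2 * gaussian_laplace(image, s)[seed] for s in sigmas]
        best_sigma = sigmas[int(np.argmin(responses))]
        return best_sigma * np.sqrt(2.0)

    # Synthetic bright disk of radius 8 pixels centered at (32, 32)
    yy, xx = np.mgrid[0:64, 0:64]
    image = (((xx - 32) ** 2 + (yy - 32) ** 2) <= 8 ** 2).astype(float)
    sigmas = np.arange(2.0, 12.0, 0.5)
    print(f"estimated radius: {estimate_blob_radius(image, (32, 32), sigmas):.1f} px")
    ```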

  20. Theoretical foundations for quantitative paleogenetics. III - The molecular divergence of nucleic acids and proteins for the case of genetic events of unequal probability

    NASA Technical Reports Server (NTRS)

    Holmquist, R.; Pearl, D.

    1980-01-01

    Theoretical equations are derived for molecular divergence with respect to gene and protein structure in the presence of genetic events with unequal probabilities: amino acid and base compositions, the frequencies of nucleotide replacements, the usage of degenerate codons, the distribution of fixed base replacements within codons, and the distribution of fixed base replacements among codons. Results are presented in the form of tables relating the probabilities of given numbers of codon base changes with respect to the original codon for the alpha hemoglobin, beta hemoglobin, myoglobin, cytochrome c and parvalbumin group gene families. Application of the calculations to the rabbit alpha and beta hemoglobin mRNAs and proteins indicates that the genes are separated by about 425 fixed base replacements distributed over 114 codon sites, which is a factor of two greater than previous estimates. The theoretical results also suggest that many more base replacements are required to effect a given gene or protein structural change than previously believed.

  1. A Population-based survey of the prevalence of HIV, syphilis, hepatitis B and hepatitis C infections and associated risk factors among young women in Vitória, Brazil

    PubMed Central

    Miranda, Angelica Espinosa; Figueiredo, Nínive Camilo; Schmidt, Renylena; Page-Shafer, Kimberly

    2017-01-01

    Objective To estimate the prevalence of HIV, hepatitis B (HBV) and C (HCV) and syphilis infections and associated risk exposures in a population-based sample of young women in Vitória, Brazil. Methods From March to December 2006, a cross-sectional sample of women aged 18 to 29 years was recruited into a single-stage, population-based study. Serological markers of HIV, HBV, HCV, and syphilis infections and associated risk exposures were assessed. Results Of 1,200 eligible women, 1,029 (85.8%) enrolled. Median age was 23 (interquartile range [IQR] 20, 26) years; 32.2% had ≤ 8 years of education. The survey-weighted prevalence estimates were: HIV, 0.6% (95% CI, 0.1%, 1.1%); anti-HBc, 4.2% (3.0%, 5.4%); HBsAg, 0.9% (0.4%, 1.6%); anti-HCV, 0.6% (0.1%, 1.1%); and syphilis, 1.2% (0.5%, 1.9%). Overall, 6.1% had at least one positive serological marker for any of the tested infections. A majority (87.9%) was sexually active, of whom 12.1% reported a previously diagnosed sexually transmitted infection (STI) and 1.4% a history of commercial sex work. Variables independently associated with any positive serological test included: older age (≥25 vs. <25 years), low monthly income (≤ 4× vs. >4× minimum wage), previously diagnosed STI, ≥ 1 sexual partner, and any illicit drug use. Conclusions These are the first population-based estimates of the prevalence of exposure to these infectious diseases and related risks in young women, a population for whom there is a scarcity of data in Brazil. PMID:18401700

  2. Vision based control of unmanned aerial vehicles with applications to an autonomous four-rotor helicopter, quadrotor

    NASA Astrophysics Data System (ADS)

    Altug, Erdinc

    Our work proposes a vision-based stabilization and output-tracking control method for a model helicopter. This is part of our effort to produce a rotorcraft-based autonomous Unmanned Aerial Vehicle (UAV). Due to the desired maneuvering ability, a four-rotor helicopter has been chosen as the testbed. In previous research on flying vehicles, vision has usually been used as a secondary sensor. Unlike previous research, our goal is to use visual feedback as the main sensor, responsible not only for detecting where the ground objects are but also for helicopter localization. A novel two-camera method has been introduced for estimating the full six-degrees-of-freedom (DOF) pose of the helicopter. This two-camera system consists of a pan-tilt ground camera and an onboard camera. The pose estimation algorithm is compared through simulation to other methods, such as the four-point and stereo methods, and is shown to be less sensitive to feature detection errors. Helicopters are highly unstable flying vehicles; although this is good for agility, it makes control harder. To build an autonomous helicopter, two methods of control are studied: one using a series of mode-based, feedback-linearizing controllers and the other using a back-stepping control law. Various simulations with 2D and 3D models demonstrate the implementation of these controllers. We also show global convergence of the 3D quadrotor controller even with large calibration errors or the presence of large errors on the image plane. Finally, we present initial flight experiments in which the proposed pose estimation algorithm and nonlinear control techniques were implemented on a remote-controlled helicopter. The helicopter was restricted by a tether to vertical and yaw motions and limited x and y translations.

  3. Missing observations in multiyear rotation sampling designs

    NASA Technical Reports Server (NTRS)

    Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)

    1982-01-01

    Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single-year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case in which a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered both when missing segments are not replaced and when missing segments are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two-year design and then, based on the observed two-year design after segment losses have been taken into account, choose the best possible three-year design having the observed two-year parent design.

  4. Correlation of S-Band Weather Radar Reflectivity and ACTS Propagation Data in Florida

    NASA Technical Reports Server (NTRS)

    Wolfe, Eric E.; Flikkema, Paul G.; Henning, Rudolf E.

    1997-01-01

    Previous work has shown that Ka-band attenuation due to rainfall and the corresponding S-band reflectivity are highly correlated. This paper reports on work whose goal is to determine the feasibility of estimating and, by extension, predicting one parameter from the other using the Florida ACTS propagation terminal (APT) and the nearby WSR-88D S-band Doppler weather radar facility operated by the National Weather Service. This work is distinguished from previous efforts in this area by (1) the use of a single-polarized radar, preventing estimation of the drop size distribution (e.g., with dual polarization), and (2) the fact that the radar and APT sites are not co-located. Our approach consists of locating the radar volume elements along the satellite slant path and then, from measured reflectivity, estimating the specific attenuation for each associated path segment. The sum of these contributions yields an estimate of the millimeter-wave attenuation on the space-ground link. Seven days of data from both systems are analyzed using this procedure. The results indicate that definite correlation of S-band reflectivity and Ka-band attenuation exists even under the restrictions of this experiment. Based on these results, it appears possible to estimate Ka-band attenuation using widely available operational weather radar data. Conversely, it may be possible to augment current radar reflectivity data and coverage with low-cost attenuation or sky-temperature data to improve the estimation of rain rates.
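
    A minimal sketch of the slant-path procedure described above, under the usual power-law assumptions: reflectivity is converted to rain rate with a Marshall-Palmer Z-R relation, rain rate to specific attenuation with a k-R power law (the coefficients here are illustrative values near 20 GHz, not fitted ones), and the per-segment contributions are summed.

    ```python
    import numpy as np

    def slant_path_attenuation(dbz_bins, seg_lengths_km,
                               a=200.0, b=1.6,          # Marshall-Palmer Z = a*R**b
                               k_coef=0.075, k_exp=1.10):  # illustrative k = k_coef*R**k_exp
        """Estimate path attenuation (dB) from S-band reflectivity along the
        slant path: each volume element's reflectivity is converted to rain
        rate, rain rate to specific attenuation, and the per-segment
        contributions are summed."""
        z_lin = 10.0 ** (np.asarray(dbz_bins) / 10.0)   # dBZ -> mm^6 m^-3
        rain_rate = (z_lin / a) ** (1.0 / b)            # mm/h per element
        specific_atten = k_coef * rain_rate ** k_exp    # dB/km per element
        return float(np.sum(specific_atten * np.asarray(seg_lengths_km)))

    # Hypothetical reflectivities for volume elements along the slant path
    dbz = [38, 42, 45, 40, 30]
    lengths = [1.1, 1.0, 1.2, 1.0, 0.9]   # km within each radar bin
    print(f"estimated Ka-band attenuation: {slant_path_attenuation(dbz, lengths):.1f} dB")
    ```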

  5. Localization of transient gravitational wave sources: beyond triangulation

    NASA Astrophysics Data System (ADS)

    Fairhurst, Stephen

    2018-05-01

    Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic followup. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing based localization approximation to incorporate consistency of observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed to be circularly polarized or, equivalently, indistinguishable from face-on.

  6. A more powerful exact test of noninferiority from binary matched-pairs data.

    PubMed

    Lloyd, Chris J; Moldovan, Max V

    2008-08-15

    Assessing the therapeutic noninferiority of one medical treatment compared with another is often based on the difference in response rates from a matched binary pairs design. This paper develops a new exact unconditional test for noninferiority that is more powerful than available alternatives. There are two new elements presented in this paper. First, we introduce the likelihood ratio statistic as an alternative to the previously proposed score statistic of Nam (Biometrics 1997; 53:1422-1430). Second, we eliminate the nuisance parameter by estimation followed by maximization as an alternative to the partial maximization of Berger and Boos (J. Am. Stat. Assoc. 1994; 89:1012-1016) or traditional full maximization. Based on an extensive numerical study, we recommend tests based on the score statistic, with the nuisance parameter controlled by estimation followed by maximization.
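    The exact tests developed in the paper require a search over the nuisance parameter. As a much simpler point of comparison only, here is a Wald-type asymptotic statistic for matched-pairs noninferiority; it is a hedged stand-in, not the likelihood ratio or score procedure of the paper, and the counts and margin below are hypothetical.

    ```python
    import math

    def wald_noninferiority_z(n10, n01, n, delta):
        """Wald-type z statistic for H0: p_new - p_std <= -delta from
        matched pairs; a large z rejects H0 in favour of noninferiority.

        n10   : pairs where only the new treatment responds
        n01   : pairs where only the standard treatment responds
        n     : total number of pairs
        delta : noninferiority margin (> 0)
        """
        d_hat = (n10 - n01) / n                        # difference in response rates
        var = ((n10 + n01) - (n10 - n01) ** 2 / n) / n ** 2
        return (d_hat + delta) / math.sqrt(var)

    print(wald_noninferiority_z(n10=18, n01=10, n=100, delta=0.05))
    ```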

  7. Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (ORION)

    NASA Technical Reports Server (NTRS)

    Mott, Diana L.; Bigler, Mark A.

    2017-01-01

    NASA uses two HRA assessment methodologies. The first is a simplified method based on how much time is available to complete the action, with consideration given to environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value, or placeholder, as a preliminary estimate. The preliminary estimate is used to determine which placeholders need a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment of the performance of critical human actions. This assessment considers more than the time available, including factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists, and internal human stresses. The more detailed assessment is still expected to be more realistic than one based primarily on time available. When performing an HRA on a system or process that has an operational history, information specific to the task is available from that history and experience. In the case of a PRA model that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more problematic. To determine what to expect of future operational parameters, individuals with relevant experience who were familiar with the systems and processes previously implemented by NASA were used to provide the best available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators, and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules, and operational requirements are developed and finalized.

  8. Peak flow regression equations For small, ungaged streams in Maine: Comparing map-based to field-based variables

    USGS Publications Warehouse

    Lombard, Pamela J.; Hodgkins, Glenn A.

    2015-01-01

    Regression equations to estimate peak streamflows with 1- to 500-year recurrence intervals (annual exceedance probabilities from 99 to 0.2 percent, respectively) were developed for small, ungaged streams in Maine. Equations presented here are the best available equations for estimating peak flows at ungaged basins in Maine with drainage areas from 0.3 to 12 square miles (mi2). Previously developed equations continue to be the best available equations for estimating peak flows for basin areas greater than 12 mi2. New equations presented here are based on streamflow records at 40 U.S. Geological Survey streamgages with a minimum of 10 years of recorded peak flows between 1963 and 2012. Ordinary least-squares regression techniques were used to determine the best explanatory variables for the regression equations. Traditional map-based explanatory variables were compared to variables requiring field measurements. Two field-based variables—culvert rust lines and bankfull channel widths—either were not commonly found or did not explain enough of the variability in the peak flows to warrant inclusion in the equations. The best explanatory variables were drainage area and percent basin wetlands; values for these variables were determined with a geographic information system. Generalized least-squares regression was used with these two variables to determine the equation coefficients and estimates of accuracy for the final equations.
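    As an illustration of the form such regional equations take, the sketch below fits log10(peak flow) to log10(drainage area) and percent basin wetlands by ordinary least squares on synthetic data. The study itself used generalized least squares on real streamgage records; every number here is made up.

    ```python
    import numpy as np

    # Synthetic basins spanning the study's 0.3-12 mi^2 size range
    rng = np.random.default_rng(0)
    area = rng.uniform(0.3, 12.0, 40)          # drainage area, mi^2
    wetl = rng.uniform(0.0, 20.0, 40)          # percent basin wetlands
    logq = 2.0 + 0.8 * np.log10(area) - 0.02 * wetl + rng.normal(0, 0.1, 40)

    X = np.column_stack([np.ones_like(area), np.log10(area), wetl])
    beta, *_ = np.linalg.lstsq(X, logq, rcond=None)   # ordinary least squares
    print(beta)   # intercept, area exponent, wetlands coefficient

    def peak_flow(area_mi2, pct_wetland):
        """Back-transform the log-space regression to a flow estimate."""
        return 10 ** (beta[0] + beta[1] * np.log10(area_mi2) + beta[2] * pct_wetland)

    print(peak_flow(5.0, 8.0))   # estimated peak flow for a hypothetical basin
    ```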

  9. Determining the Best Method for Estimating the Observed Level of Maximum Detrainment Based on Radar Reflectivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carletta, Nicholas D.; Mullendore, Gretchen L.; Starzec, Mariusz

    Convective mass transport is the transport of mass from near the surface up to the upper troposphere and lower stratosphere (UTLS) by a deep convective updraft. This transport can alter the chemical makeup and water vapor balance of the UTLS, which affects cloud formation and the radiative properties of the atmosphere. It is therefore important to understand the exact altitudes at which mass is detrained from convection. The purpose of this study was to improve upon previously published methodologies for estimating the level of maximum detrainment (LMD) within convection using data from a single ground-based radar. Four methods were used to identify the LMD and were validated against dual-Doppler-derived vertical mass divergence fields for six cases with a variety of storm types. The best method for locating the LMD was the one that used a reflectivity texture technique to determine convective cores and a multi-layer echo identification to determine anvil locations. Although an improvement over previously published methods, the new methodology still produced unreliable results in certain regimes. The methodology worked best when applied to mature updrafts, as the anvil needs time to grow to a detectable size. Thus, radar reflectivity is valuable in estimating the LMD, but storm maturity must also be considered for best results.

  10. Ozone Production and Control Strategies for Southern Taiwan

    NASA Astrophysics Data System (ADS)

    Shiu, C.; Liu, S.; Chang, C.; Chen, J.; Chou, C. C.; Lin, C.

    2006-12-01

    An observation-based modeling (OBM) approach is used to estimate the ozone production efficiency and production rate of O3 (P(O3)) in southern Taiwan. The approach can also provide an indirect estimate of the concentration of OH. Measured concentrations of two aromatic hydrocarbons, i.e., ethylbenzene and m,p-xylene, are used to estimate the degree of photochemical processing and the amounts of photochemically consumed NOx and NMHCs. In addition, a one-dimensional (1d) photochemical model is used for comparison with the OBM results. The average ozone production efficiency during the field campaign in the Kaohsiung-Pingtung area in fall 2003 is found to be about 5, comparable to previous work. The relationship of P(O3) with NOx is examined in detail and compared to previous studies. The OH concentrations derived from this approach are in fair agreement with values calculated from the 1d photochemical model. The relationship of total oxidants (e.g., O3+NO2) versus initial NOx and NMHCs suggests that reducing NMHCs is more effective in controlling total oxidants than reducing NOx. For O3 control, reducing NMHCs is even more effective than reducing NOx due to the NO titration effect. This observation-based approach provides a good alternative for understanding the production of ozone and formulating ozone control strategies in urban and suburban environments without measurements of peroxy radicals.
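    The hydrocarbon-ratio idea can be written down compactly: because m,p-xylene reacts with OH faster than ethylbenzene, the growth of their ratio measures the OH exposure of an air mass. A minimal sketch follows; the rate constants are literature-style values and the emission ratio is assumed, so this only illustrates the photochemical-clock calculation, not the paper's full OBM.

    ```python
    import numpy as np

    # OH rate constants (cm^3 molecule^-1 s^-1), literature-style values
    k_eb, k_xy = 7.0e-12, 1.9e-11    # ethylbenzene, m,p-xylene (combined)
    ratio_0 = 0.45                    # assumed ethylbenzene/xylene emission ratio

    def oh_exposure(ratio_obs):
        """OH exposure (molecule s cm^-3) from the observed EB/xylene ratio.

        ratio_t = ratio_0 * exp((k_xy - k_eb) * OH_exposure), since xylene
        is consumed faster, so the ratio grows with photochemical age.
        """
        return np.log(ratio_obs / ratio_0) / (k_xy - k_eb)

    print(oh_exposure(0.9))   # aged air: roughly 6e10 molecule s cm^-3
    ```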

  11. Cost effectiveness of a community-based crisis intervention program for people bereaved by suicide.

    PubMed

    Comans, Tracy; Visser, Victoria; Scuffham, Paul

    2013-01-01

    Postvention services aim to ameliorate distress and reduce future incidences of suicide. The StandBy Response Service is one such service operating in Australia for those bereaved through suicide. Few previous studies have reported estimates or evaluations of the economic impact and outcomes associated with the implementation of bereavement/grief interventions. The aim of this study was to estimate the cost-effectiveness of a postvention service from a societal perspective. A Markov model was constructed to estimate the health outcomes, quality-adjusted life years, and associated costs such as medical costs and time off work. Data were obtained from a prospective cross-sectional study comparing previous clients of the StandBy service with a control group of people bereaved by suicide who had not had contact with StandBy. Costs and outcomes were measured at 1 year after suicide bereavement, and an incremental cost-effectiveness ratio was calculated. In the base case, the StandBy service dominated usual care, with a cost saving of AUS $803 and an increase in quality-adjusted life years of 0.02. Probabilistic sensitivity analysis indicates there is an 81% chance the service would be cost-effective across a range of possible scenarios. Postvention services are a cost-effective strategy and may even be cost-saving if all costs to society from suicide are taken into account.
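    The decision rule behind "dominance" is just the incremental cost-effectiveness ratio. A minimal sketch, where the baseline cost and QALY levels are assumed and only the differences echo the abstract:

    ```python
    def icer(cost_new, qaly_new, cost_usual, qaly_usual):
        """Incremental cost-effectiveness ratio (cost per QALY gained)."""
        d_cost = cost_new - cost_usual
        d_qaly = qaly_new - qaly_usual
        if d_cost <= 0 and d_qaly > 0:
            return "dominant"        # cheaper and more effective
        return d_cost / d_qaly

    # The intervention saves $803 and gains 0.02 QALYs, so it dominates;
    # the absolute cost and QALY levels here are invented.
    print(icer(cost_new=9197, qaly_new=0.78, cost_usual=10000, qaly_usual=0.76))
    ```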

  12. Early Examples from the Integrated Multi-Satellite Retrievals for GPM (IMERG)

    NASA Astrophysics Data System (ADS)

    Huffman, George; Bolvin, David; Braithwaite, Daniel; Hsu, Kuolin; Joyce, Robert; Kidd, Christopher; Sorooshian, Soroosh; Xie, Pingping

    2014-05-01

    The U.S. GPM Science Team's Day-1 algorithm for computing combined precipitation estimates as part of GPM is the Integrated Multi-satellitE Retrievals for GPM (IMERG). The goal is to compute the best time series of (nearly) global precipitation from "all" precipitation-relevant satellites and global surface precipitation gauge analyses. IMERG is being developed as a unified U.S. algorithm drawing on strengths of the three contributing groups, whose previous work includes: 1) the TRMM Multi-satellite Precipitation Analysis (TMPA); 2) the CPC Morphing algorithm with Kalman Filtering (K-CMORPH); and 3) the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks - Cloud Classification System (PERSIANN-CCS). We review the IMERG design and development, plans for testing, and current status. Some of the lessons learned in running and reprocessing the previous data sets include the importance of quality-controlling input data sets, strategies for coping with transitions in the various input data sets, and practical approaches to retrospective analysis of multiple output products (namely the real- and post-real-time data streams). IMERG output will be illustrated using early test data, including the variety of supporting fields, such as the merged-microwave and infrared estimates and the precipitation type. We end by considering recent changes in input data specifications, the transition from TRMM-based to GPM-based calibration, and further "Day 2" development.

  13. Basics of Bayesian methods.

    PubMed

    Ghosh, Sujit K

    2010-01-01

    Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of the Bayesian inferential method is its logical foundation, which provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution, which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of the empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
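    The prior-times-likelihood update described here has a closed form in conjugate families. A minimal beta-binomial sketch, with the prior pseudo-counts and data assumed for illustration:

    ```python
    from scipy import stats

    # Beta prior combined with binomial data yields a Beta posterior:
    # the simplest closed-form instance of prior x likelihood -> posterior.
    a_prior, b_prior = 2.0, 2.0        # prior pseudo-counts (assumed)
    successes, trials = 14, 20         # current data (assumed)

    a_post = a_prior + successes
    b_post = b_prior + (trials - successes)
    posterior = stats.beta(a_post, b_post)

    print(posterior.mean())            # posterior point estimate
    print(posterior.interval(0.95))    # 95% credible interval
    ```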

  14. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Astrophysics Data System (ADS)

    Guo, T. H.; Musgrave, J.

    1992-11-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto-associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto-associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using simulation data.
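    As a toy illustration of the second approach (a dedicated network mapping validated sensor readings to a mixture-ratio estimate), here is a small feedforward sketch. The architecture, weights, and inputs are invented; a real estimator would be trained on simulation and test-stand firing data.

    ```python
    import numpy as np

    # Untrained stand-in for a mixture-ratio network: 4 normalized sensor
    # inputs (chamber pressure, volumetric flow, duct temperature, duct
    # pressure) -> 8 hidden units -> 1 output. Weights here are random.
    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(0, 0.1, (8, 4)), np.zeros(8)
    W2, b2 = rng.normal(0, 0.1, (1, 8)), np.zeros(1)

    def estimate_mixture_ratio(sensors):
        """Forward pass: tanh hidden layer, linear output."""
        h = np.tanh(W1 @ sensors + b1)
        return (W2 @ h + b2).item()

    print(estimate_mixture_ratio(np.array([0.9, 0.7, 0.5, 0.6])))
    ```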

  15. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Guo, T. H.; Musgrave, J.

    1992-01-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto-associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto-associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using simulation data.

  16. Spatial Topography of Individual-Specific Cortical Networks Predicts Human Cognition, Personality, and Emotion.

    PubMed

    Kong, Ru; Li, Jingwei; Orban, Csaba; Sabuncu, Mert R; Liu, Hesheng; Schaefer, Alexander; Sun, Nanbo; Zuo, Xi-Nian; Holmes, Avram J; Eickhoff, Simon B; Yeo, B T Thomas

    2018-06-06

    Resting-state functional magnetic resonance imaging (rs-fMRI) offers the opportunity to delineate individual-specific brain networks. A major question is whether individual-specific network topography (i.e., location and spatial arrangement) is behaviorally relevant. Here, we propose a multi-session hierarchical Bayesian model (MS-HBM) for estimating individual-specific cortical networks and investigate whether individual-specific network topography can predict human behavior. The multiple layers of the MS-HBM explicitly differentiate intra-subject (within-subject) from inter-subject (between-subject) network variability. By ignoring intra-subject variability, previous network mappings might mistake intra-subject variability for inter-subject differences. Compared with other approaches, MS-HBM parcellations generalized better to new rs-fMRI and task-fMRI data from the same subjects. More specifically, MS-HBM parcellations estimated from a single rs-fMRI session (10 min) showed generalizability comparable to parcellations estimated by two state-of-the-art methods using 5 sessions (50 min). We also showed that behavioral phenotypes across cognition, personality, and emotion could be predicted by individual-specific network topography with modest accuracy, comparable to previous reports predicting phenotypes based on connectivity strength. Network topography estimated by MS-HBM was more effective for behavioral prediction than network size, as well as network topography estimated by other parcellation approaches. Thus, similar to connectivity strength, individual-specific network topography might also serve as a fingerprint of human behavior.

  17. Estimation of maximal oxygen uptake by bioelectrical impedance analysis.

    PubMed

    Stahn, Alexander; Terblanche, Elmarie; Grunert, Sven; Strobel, Günther

    2006-02-01

    Previous non-exercise models for the prediction of maximal oxygen uptake (VO(2max)) have failed to accurately discriminate cardiorespiratory fitness within large cohorts. The aim of the present study was to evaluate the feasibility of a completely indirect method for predicting VO(2max), based on bioelectrical impedance analysis (BIA), in 66 young, healthy, fit men and women. Multiple stepwise regression analysis was used to determine the usefulness of BIA and additional covariates for estimating VO(2max) (ml min(-1)). BIA was highly correlated with VO(2max) (r = 0.914; P < 0.001) and entered the regression equation first. The inclusion of gender and a physical activity rating further improved the model, which accounted for 88% of the variance in VO(2max) and resulted in a relative standard error of the estimate (SEE) of 7.2%. Substantial agreement between the methods was confirmed by the fact that nearly all the differences were within +/-2 SD. Furthermore, in contrast to previously published non-exercise models, no trend of a reduction in prediction accuracy with increasing VO(2max) values was apparent. It was concluded that a non-exercise model based on BIA might be a rapid and useful technique to estimate VO(2max) when a direct test is not feasible. However, though the present results are useful for determining the viability of the method, further refinement of the BIA approach and its validation in a large, diverse population are needed before it can be applied in clinical and epidemiological settings.

  18. Plate tectonic controls on atmospheric CO2 levels since the Triassic.

    PubMed

    Van Der Meer, Douwe G; Zeebe, Richard E; van Hinsbergen, Douwe J J; Sluijs, Appy; Spakman, Wim; Torsvik, Trond H

    2014-03-25

    Climate trends on timescales of 10s to 100s of millions of years are controlled by changes in solar luminosity, continent distribution, and atmosphere composition. Plate tectonics affect geography, but also atmosphere composition through volcanic degassing of CO2 at subduction zones and midocean ridges. So far, such degassing estimates were based on reconstructions of ocean floor production for the last 150 My and indirectly, through sea level inversion before 150 My. Here we quantitatively estimate CO2 degassing by reconstructing lithosphere subduction evolution, using recent advances in combining global plate reconstructions and present-day structure of the mantle. First, we estimate that since the Triassic (250-200 My) until the present, the total paleosubduction-zone length reached up to ∼200% of the present-day value. Comparing our subduction-zone lengths with previously reconstructed ocean-crust production rates over the past 140 My suggests average global subduction rates have been constant, ∼6 cm/y: Higher ocean-crust production is associated with longer total subduction length. We compute a strontium isotope record based on subduction-zone length, which agrees well with geological records supporting the validity of our approach: The total subduction-zone length is proportional to the summed arc and ridge volcanic CO2 production and thereby to global volcanic degassing at plate boundaries. We therefore use our degassing curve as input for the GEOCARBSULF model to estimate atmospheric CO2 levels since the Triassic. Our calculated CO2 levels for the mid Mesozoic differ from previous modeling results and are more consistent with available proxy data.

  19. Plate tectonic controls on atmospheric CO2 levels since the Triassic

    PubMed Central

    Van Der Meer, Douwe G.; Zeebe, Richard E.; van Hinsbergen, Douwe J. J.; Sluijs, Appy; Spakman, Wim; Torsvik, Trond H.

    2014-01-01

    Climate trends on timescales of 10s to 100s of millions of years are controlled by changes in solar luminosity, continent distribution, and atmosphere composition. Plate tectonics affect geography, but also atmosphere composition through volcanic degassing of CO2 at subduction zones and midocean ridges. So far, such degassing estimates were based on reconstructions of ocean floor production for the last 150 My and indirectly, through sea level inversion before 150 My. Here we quantitatively estimate CO2 degassing by reconstructing lithosphere subduction evolution, using recent advances in combining global plate reconstructions and present-day structure of the mantle. First, we estimate that since the Triassic (250–200 My) until the present, the total paleosubduction-zone length reached up to ∼200% of the present-day value. Comparing our subduction-zone lengths with previously reconstructed ocean-crust production rates over the past 140 My suggests average global subduction rates have been constant, ∼6 cm/y: Higher ocean-crust production is associated with longer total subduction length. We compute a strontium isotope record based on subduction-zone length, which agrees well with geological records supporting the validity of our approach: The total subduction-zone length is proportional to the summed arc and ridge volcanic CO2 production and thereby to global volcanic degassing at plate boundaries. We therefore use our degassing curve as input for the GEOCARBSULF model to estimate atmospheric CO2 levels since the Triassic. Our calculated CO2 levels for the mid Mesozoic differ from previous modeling results and are more consistent with available proxy data. PMID:24616495

  20. Ka-Band Wide-Bandgap Solid-State Power Amplifier: Hardware Validation

    NASA Technical Reports Server (NTRS)

    Epp, L.; Khan, P.; Silva, A.

    2005-01-01

    Motivated by recent advances in wide-bandgap (WBG) gallium nitride (GaN) semiconductor technology, there is considerable interest in developing efficient solid-state power amplifiers (SSPAs) as an alternative to the traveling-wave tube amplifier (TWTA) for space applications. This article documents proof-of-concept hardware used to validate power-combining technologies that may enable a 120-W, 40 percent power-added efficiency (PAE) SSPA. Results in previous articles [1-3] indicate that architectures based on at least three power combiner designs are likely to enable the target SSPA. Previous architecture performance analyses and estimates indicate that the proposed architectures can power combine 16 to 32 individual monolithic microwave integrated circuits (MMICs) with >80 percent combining efficiency. This combining efficiency would correspond to MMIC requirements of 5- to 10-W output power and >48 percent PAE. In order to validate the performance estimates of the three proposed architectures, measurements of proof-of-concept hardware are reported here.
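    The combining arithmetic behind the target is simple: N MMICs at a given per-device output power, multiplied by the combining efficiency. Using the ranges quoted in the article:

    ```python
    def combined_output_w(n_mmic, p_mmic_w, eta_combine):
        """Output power of an N-way combiner given per-MMIC power (W)
        and combining efficiency (0-1)."""
        return n_mmic * p_mmic_w * eta_combine

    # 16-32 MMICs at 5-10 W with >80% combining efficiency reach ~120 W
    print(combined_output_w(16, 10.0, 0.80))   # 128 W
    print(combined_output_w(32, 5.0, 0.80))    # 128 W
    ```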

  1. A new maximum-likelihood change estimator for two-pass SAR coherent change detection

    DOE PAGES

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; ...

    2016-01-11

    In previous research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimator produces improved results in the coherent pair collects that we have tested.
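    For context, the conventional change metric that the CRCD estimator is compared against is the windowed sample coherence between the two image collects. Below is a sketch of that baseline on synthetic data (not the CRCD statistic itself, which the paper derives):

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sample_coherence(f, g, window=5):
        """Sample degree of coherence between two co-registered complex SAR
        images, estimated over a sliding boxcar window."""
        cross = f * np.conj(g)
        num = uniform_filter(cross.real, window) + 1j * uniform_filter(cross.imag, window)
        den = np.sqrt(uniform_filter(np.abs(f) ** 2, window) *
                      uniform_filter(np.abs(g) ** 2, window))
        return np.abs(num) / np.maximum(den, 1e-12)

    rng = np.random.default_rng(0)
    f = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))   # pass 1
    g = f + 0.1 * (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))  # pass 2
    print(sample_coherence(f, g).mean())   # close to 1 for an unchanged scene
    ```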

  2. Estimating soil matric potential in Owens Valley, California

    USGS Publications Warehouse

    Sorenson, Stephen K.; Miller, Reuben F.; Welch, Michael R.; Groeneveld, David P.; Branson, Farrel A.

    1989-01-01

    Much of the floor of Owens Valley, California, is covered with alkaline scrub and alkaline meadow plant communities, whose existence is dependent partly on precipitation and partly on water infiltrated into the rooting zone from the shallow water table. The extent to which these plant communities are capable of adapting to and surviving fluctuations in the water table depends on physiological adaptations of the plants and on the water content, matric potential characteristics of the soils. Two methods were used to estimate soil matric potential in test sites in Owens Valley. The first, the filter-paper method, uses water content of filter papers equilibrated to water content of soil samples taken with a hand auger. The previously published calibration relations used to estimate soil matric potential from the water content of the filter papers were modified on the basis of current laboratory data. The other method of estimating soil matric potential was a modeling approach based on data from this and previous investigations. These data indicate that the base-10 logarithm of soil matric potential is a linear function of gravimetric soil water content for a particular soil. The slope and intercepts of this function vary with the texture and saturation capacity of the soil. Estimates of soil water characteristic curves were made at two sites by averaging the gravimetric soil water content and soil matric potential values from multiple samples at 0.1-m depth intervals derived by using the hand auger and filter-paper method and entering these values in the soil water model. The characteristic curves then were used to estimate soil matric potential from estimates of volumetric soil water content derived from neutron-probe readings. Evaluation of the modeling technique at two study sites indicated that estimates of soil matric potential within 0.5 pF units of the soil matric potential value derived by using the filter-paper method could be obtained 90 to 95 percent of the time in soils where water content was less than field capacity. The greatest errors occurred at depths where there was a distinct transition between soils of different textures.
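    The soil-water model described here reduces to one line: the base-10 logarithm of matric potential is linear in gravimetric water content, with slope and intercept depending on soil texture. A sketch with assumed coefficients for a hypothetical soil, not fitted values from the study:

    ```python
    # Assumed coefficients for one hypothetical soil texture
    SLOPE, INTERCEPT = -12.0, 5.5

    def matric_potential_pf(theta_g):
        """log10 matric potential (pF) as a linear function of gravimetric
        water content theta_g (g water / g soil)."""
        return INTERCEPT + SLOPE * theta_g

    for theta in (0.05, 0.15, 0.25):
        print(theta, matric_potential_pf(theta))   # drier soil -> higher pF
    ```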

  3. An Estimate of Avian Mortality at Communication Towers in the United States and Canada

    PubMed Central

    Longcore, Travis; Rich, Catherine; Mineau, Pierre; MacDonald, Beau; Bert, Daniel G.; Sullivan, Lauren M.; Mutrie, Erin; Gauthreaux, Sidney A.; Avery, Michael L.; Crawford, Robert L.; Manville, Albert M.; Travis, Emilie R.; Drake, David

    2012-01-01

    Avian mortality at communication towers in the continental United States and Canada is an issue of pressing conservation concern. Previous estimates of this mortality have been based on limited data and have not included Canada. We compiled a database of communication towers in the continental United States and Canada and estimated avian mortality by tower with a regression relating avian mortality to tower height. This equation was derived from 38 tower studies for which mortality data were available and corrected for sampling effort, search efficiency, and scavenging where appropriate. Although most studies document mortality at guyed towers with steady-burning lights, we accounted for lower mortality at towers without guy wires or steady-burning lights by adjusting estimates based on published studies. The resulting estimate of mortality at towers is 6.8 million birds per year in the United States and Canada. Bootstrapped subsampling indicated that the regression was robust to the choice of studies included and a comparison of multiple regression models showed that incorporating sampling, scavenging, and search efficiency adjustments improved model fit. Estimating total avian mortality is only a first step in developing an assessment of the biological significance of mortality at communication towers for individual species or groups of species. Nevertheless, our estimate can be used to evaluate this source of mortality, develop subsequent per-species mortality estimates, and motivate policy action. PMID:22558082

  4. Estimated monthly percentile discharges at ungaged sites in the Upper Yellowstone River Basin in Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.

    1986-01-01

    Once-monthly streamflow measurements were used to estimate selected percentile discharges on flow-duration curves of monthly mean discharge for 40 ungaged stream sites in the upper Yellowstone River basin in Montana. The estimation technique was a modification of the concurrent-discharge method previously described and used by H.C. Riggs to estimate annual mean discharge. The modified technique is based on the relationship of various mean seasonal discharges to the required discharges on the flow-duration curves. The mean seasonal discharges are estimated from the monthly streamflow measurements, and the percentile discharges are calculated from regression equations. The regression equations, developed from streamflow record at nine gaging stations, indicated a significant log-linear relationship between mean seasonal discharge and various percentile discharges. The technique was tested at two discontinued streamflow-gaging stations; the differences between estimated monthly discharges and those determined from the discharge record ranged from -31 to +27 percent at one site and from -14 to +85 percent at the other. The estimates at one site were unbiased, and the estimates at the other site were consistently larger than the recorded values. Based on the test results, the probable average error of the technique was ±30 percent for the 21 sites measured during the first year of the program and ±50 percent for the 19 sites measured during the second year. (USGS)

  5. Measurement of acoustic velocity components in a turbulent flow using LDV and high-repetition rate PIV

    NASA Astrophysics Data System (ADS)

    Léon, Olivier; Piot, Estelle; Sebbane, Delphine; Simon, Frank

    2017-06-01

    The present study provides theoretical details and experimental validation results for the approach proposed by Minotti et al. (Aerosp Sci Technol 12(5):398-407, 2008) for measuring amplitudes and phases of acoustic velocity components (AVC), i.e., the waveform parameters of each component of velocity induced by an acoustic wave, in fully turbulent duct flows carrying multi-tone acoustic waves. Theoretical results support that the proposed turbulence rejection method, based on the estimation of cross power spectra between velocity measurements and a reference signal such as a wall pressure measurement, provides asymptotically efficient estimators with respect to the number of samples. Furthermore, it is shown that the estimator uncertainties can be simply estimated, accounting for the characteristics of the measured flow turbulence spectra. Two laser-based measurement campaigns were conducted to validate the acoustic velocity estimation approach and the uncertainty estimates derived. While in previous studies estimates were obtained using laser Doppler velocimetry (LDV), it is demonstrated that high-repetition-rate particle image velocimetry (PIV) can also be successfully employed. The two measurement techniques provide very similar acoustic velocity amplitude and phase estimates for the cases investigated, which are of practical interest for acoustic liner studies. In a broader sense, this approach may be beneficial for non-intrusive sound emission studies in wind tunnel testing.
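    The turbulence-rejection idea can be sketched in a few lines: the transfer function between the reference wall pressure and the velocity at the tone frequency is the cross power spectrum divided by the reference auto-spectrum, and turbulence uncorrelated with the reference averages out of the cross spectrum. All signal parameters below are invented, and the sign of the recovered phase depends on the cross-spectrum convention of the library used.

    ```python
    import numpy as np
    from scipy.signal import csd, welch

    fs, f0 = 10_000.0, 500.0                      # sample rate and tone (Hz), assumed
    t = np.arange(0, 10.0, 1 / fs)
    rng = np.random.default_rng(3)

    p_ref = np.cos(2 * np.pi * f0 * t)                                  # reference pressure
    u = 0.3 * np.cos(2 * np.pi * f0 * t - 0.8) + rng.normal(0, 1.0, t.size)  # velocity + turbulence

    freq, Ppu = csd(p_ref, u, fs=fs, nperseg=4096)   # cross power spectrum
    freq, Ppp = welch(p_ref, fs=fs, nperseg=4096)    # reference auto-spectrum
    k = np.argmin(np.abs(freq - f0))

    h = Ppu[k] / Ppp[k]                  # transfer estimate at the tone
    print(abs(h), np.angle(h))           # ~0.3 amplitude, ~ -0.8 rad phase
    ```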

  6. An estimate of avian mortality at communication towers in the United States and Canada.

    PubMed

    Longcore, Travis; Rich, Catherine; Mineau, Pierre; MacDonald, Beau; Bert, Daniel G; Sullivan, Lauren M; Mutrie, Erin; Gauthreaux, Sidney A; Avery, Michael L; Crawford, Robert L; Manville, Albert M; Travis, Emilie R; Drake, David

    2012-01-01

    Avian mortality at communication towers in the continental United States and Canada is an issue of pressing conservation concern. Previous estimates of this mortality have been based on limited data and have not included Canada. We compiled a database of communication towers in the continental United States and Canada and estimated avian mortality by tower with a regression relating avian mortality to tower height. This equation was derived from 38 tower studies for which mortality data were available and corrected for sampling effort, search efficiency, and scavenging where appropriate. Although most studies document mortality at guyed towers with steady-burning lights, we accounted for lower mortality at towers without guy wires or steady-burning lights by adjusting estimates based on published studies. The resulting estimate of mortality at towers is 6.8 million birds per year in the United States and Canada. Bootstrapped subsampling indicated that the regression was robust to the choice of studies included and a comparison of multiple regression models showed that incorporating sampling, scavenging, and search efficiency adjustments improved model fit. Estimating total avian mortality is only a first step in developing an assessment of the biological significance of mortality at communication towers for individual species or groups of species. Nevertheless, our estimate can be used to evaluate this source of mortality, develop subsequent per-species mortality estimates, and motivate policy action.

  7. What is the lifetime risk of developing cancer?: the effect of adjusting for multiple primaries

    PubMed Central

    Sasieni, P D; Shelton, J; Ormiston-Smith, N; Thomson, C S; Silcocks, P B

    2011-01-01

    Background: The 'lifetime risk' of cancer is generally estimated by combining current incidence rates with current all-cause mortality (the 'current probability' method) rather than by describing the experience of a birth cohort. As individuals may get more than one type of cancer, what is generally estimated is the average (mean) number of cancers over a lifetime. This is not the same as the probability of getting cancer. Methods: We describe a method for estimating lifetime risk that corrects for the inclusion of multiple primary cancers in the incidence rates routinely published by cancer registries. The new method applies cancer incidence rates to the estimated probability of being alive without a previous cancer. The new method is illustrated using data from the Scottish Cancer Registry and is compared with 'gold-standard' estimates that use (unpublished) data on first primaries. Results: The effect of this correction is to make the estimated 'lifetime risk' smaller. The new estimates are extremely similar to those obtained using incidence based on first primaries. The usual 'current probability' method considerably overestimates the lifetime risk of all cancers combined, although the correction for any single cancer site is minimal. Conclusion: Estimation of the lifetime risk of cancer should either be based on first primaries or should use the new method. PMID:21772332
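    The corrected method can be written as a short life-table loop: age-specific incidence is applied only to the fraction of the cohort still alive without a previous cancer. The rates below are synthetic, for illustration only.

    ```python
    import numpy as np

    def lifetime_risk(incidence, mortality):
        """Lifetime cancer risk, applying age-specific incidence to the
        probability of being alive without a previous cancer. Rates are per
        person-year for consecutive single-year age bands."""
        alive_no_cancer = 1.0
        risk = 0.0
        for inc, mort in zip(incidence, mortality):
            risk += alive_no_cancer * inc                 # first cancers this year
            alive_no_cancer *= (1.0 - inc) * (1.0 - mort) # cancer-free survivors
        return risk

    # Illustrative synthetic rates rising with age (not registry data)
    ages = np.arange(0, 100)
    inc = 2e-4 * np.exp(ages / 25.0)     # cancer incidence
    mort = 5e-5 * np.exp(ages / 12.0)    # all-cause (non-cancer) mortality
    print(lifetime_risk(inc, mort))
    ```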

  8. mBEEF-vdW: Robust fitting of error estimation density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes

    Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.

  9. mBEEF-vdW: Robust fitting of error estimation density functionals

    DOE PAGES

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; ...

    2016-06-15

    Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.

  10. Greenhouse gas emissions from the waste sector in Argentina in business-as-usual and mitigation scenarios.

    PubMed

    Santalla, Estela; Córdoba, Verónica; Blanco, Gabriel

    2013-08-01

    The objective of this work was the application of the 2006 Intergovernmental Panel on Climate Change (IPCC) Guidelines for the estimation of methane and nitrous oxide emissions from the waste sector in Argentina, as a preliminary exercise for greenhouse gas (GHG) inventory development and for comparison with previous inventories based on the 1996 IPCC Guidelines. Emissions projections to 2030 were evaluated under two scenarios--business as usual (BAU) and mitigation--and the calculations were done using the ad hoc developed IPCC software. According to local activity data, in the business-as-usual scenario, methane emissions from solid waste disposal will increase by 73% by 2030 with respect to the emissions of year 2000. In the mitigation scenario, based on the recorded trend of methane captured in landfills, a decrease of 50% from the BAU scenario should be achieved by 2030. In the BAU scenario, GHG emissions from domestic wastewater will increase 63% from 2000 to 2030. Methane emissions from industrial wastewater, calculated from activity data for the dairy, swine, slaughterhouse, citric, sugar, and wine sectors, will increase by 58% from 2000 to 2030, while methane emissions from domestic wastewater will increase 74% in the same period. Results show that GHG emissions calculated with the 2006 IPCC Guidelines are lower than those reported in previous national inventories for the solid waste disposal and domestic wastewater categories, while they are 18% higher for industrial wastewater. The implementation of the 2006 IPCC Guidelines for National Greenhouse Inventories is now being considered by the UNFCCC for non-Annex I countries in order to enhance the compilation of inventories based on comparable good practice methods. This work constitutes the first GHG emissions estimation for the waste sector of Argentina applying the 2006 IPCC Guidelines and the ad hoc developed software. It will contribute to identifying the main differences between the models applied in the estimation of methane emissions for the key categories of waste emission sources and to comparing results with previous inventories based on the 1996 IPCC Guidelines.

  11. Spatial-altitudinal and temporal variation of Degree Day Factors (DDFs) in the Upper Indus Basin

    NASA Astrophysics Data System (ADS)

    Khan, Asif; Attaullah, Haleema; Masud, Tabinda; Khan, Mujahid

    2017-04-01

    Melt contribution from snow and ice in the Hindukush-Karakoram-Himalayan (HKH) region can account for more than 80% of annual river flows in the Upper Indus Basin (UIB). Increases or decreases in precipitation, energy input, and glacier reserves can significantly affect the water resources of this region. Improved hydrological modelling and accurate prediction of future water resources are therefore vital for food production and hydropower generation for millions of people living downstream. In mountain regions, Degree Day Factors (DDFs) vary significantly in space and with altitude, and they are primary inputs to temperature-based hydrological modelling. However, previous studies have used different DDFs as calibration parameters without due attention to the physical meaning of the values employed, and these estimates carry significant variability and uncertainty. This study provides estimates of DDFs for various altitudinal zones in the UIB at the sub-basin level. Snow, clean ice, and debris-covered ice melt at different rates (i.e., have different DDFs), so areally averaged DDFs based on snow, clean-ice, and debris-covered-ice classes in various altitudinal zones have been estimated for all sub-basins of the UIB. The zonal estimates of DDFs in the current study differ significantly from previously adopted DDFs, suggesting that earlier hydrological modelling studies should be revisited. The DDFs presented here have been validated using the Snowmelt Runoff Model (SRM) in various sub-basins, with good Nash-Sutcliffe coefficients (R2 > 0.85) and low volumetric errors (Dv < 10%). The DDFs and methods provided in this study can be used in future improved hydrological modelling and to provide accurate predictions of changes in future river flows. The methodology used for estimating DDFs is robust and can be adopted to produce such estimates in other regions, particularly in the other nearby HKH basins.
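    The degree-day relation at the heart of such modelling is melt = DDF x max(T - T_base, 0), with a different DDF per surface class. A minimal sketch with assumed factors, not the study's calibrated values:

    ```python
    import numpy as np

    def daily_melt(temp_c, ddf, t_base=0.0):
        """Degree-day melt (mm w.e./day) = DDF * max(T - T_base, 0).

        ddf is the degree day factor in mm w.e. / (degC day); it differs for
        snow, clean ice, and debris-covered ice, which is why the study
        derives areally averaged values per surface class and altitude zone.
        """
        return ddf * np.maximum(np.asarray(temp_c, dtype=float) - t_base, 0.0)

    # Illustrative DDFs (mm w.e. / degC / day), assumed for this sketch
    ddf_snow, ddf_ice, ddf_debris = 4.0, 7.0, 3.0
    temps = [-2.0, 1.5, 4.0, 6.5]           # daily mean temperatures, degC
    print(daily_melt(temps, ddf_snow))
    print(daily_melt(temps, ddf_ice))
    print(daily_melt(temps, ddf_debris))
    ```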

  12. MicroRNA content in milk exosomes as a phenotypic indicator of Staphylococcus aureus infection in the bovine mammary gland

    USDA-ARS?s Scientific Manuscript database

    Previous gene mapping research to understand the host genetic response to mammary infection based on somatic cell score has been unsuccessful due to the poor correlation of this confounding trait with mastitis, a disease costing the dairy industry an estimated $2 billion annually. Recently, ...

  13. Using satellite-based estimates of evapotranspiration and groundwater changes to determine anthropogenic water fluxes in land surface models

    USDA-ARS?s Scientific Manuscript database

    Irrigation is a widely used water management practice that is often poorly parameterized in land surface and climate models. Previous studies have addressed this issue via use of irrigation area, applied water inventory data, or soil moisture content. These approaches have a variety of drawbacks i...

  14. Discrete return lidar-based prediction of leaf area index in two conifer forests

    Treesearch

    Jennifer L. R. Jensen; Karen S. Humes; Lee A. Vierling; Andrew T. Hudak

    2008-01-01

    Leaf area index (LAI) is a key forest structural characteristic that serves as a primary control for exchanges of mass and energy within a vegetated ecosystem. Most previous attempts to estimate LAI from remotely sensed data have relied on empirical relationships between field-measured observations and various spectral vegetation indices (SVIs) derived from optical...

  15. Wildfire risk and housing prices: a case study from Colorado Springs.

    Treesearch

    G.H. Donovan; P.A. Champ; D.T. Butry

    2007-01-01

    Unlike other natural hazards such as floods, hurricanes, and earthquakes, wildfire risk has not previously been examined using a hedonic property value model. In this article, we estimate a hedonic model based on parcel-level wildfire risk ratings from Colorado Springs. We found that providing homeowners with specific information about the wildfire risk rating of their...

  16. The Tax-Credit Scholarship Audit: Do Publicly Funded Private School Choice Programs Save Money?

    ERIC Educational Resources Information Center

    Lueken, Martin F.

    2016-01-01

    This report follows up on previous work that examined the fiscal effects of private school voucher programs. It estimates the total fiscal effects of tax-credit scholarship programs--another type of private school choice program--on state governments, state and local taxpayers, and school districts combined. Based on a range of assumptions, these…

  17. Variability in "DIBELS Next" Progress Monitoring Measures for Students at Risk for Reading Difficulties

    ERIC Educational Resources Information Center

    O'Keeffe, Breda V.; Bundock, Kaitlin; Kladis, Kristin L.; Yan, Rui; Nelson, Kat

    2017-01-01

    Previous research on curriculum-based measurement of oral reading fluency (CBM ORF) found high levels of variability around the estimates of students' fluency; however, little research has studied the issue of variability specifically with well-designed passage sets and a sample of students who scored below benchmark for the purpose of progress…

  18. Using exposure prediction tools to link exposure and dosimetry for risk-based decisions: A case study with phthalates.

    PubMed

    Moreau, Marjory; Leonard, Jeremy; Phillips, Katherine A; Campbell, Jerry; Pendse, Salil N; Nicolas, Chantel; Phillips, Martin; Yoon, Miyoung; Tan, Yu-Mei; Smith, Sherrie; Pudukodu, Harish; Isaacs, Kristin; Clewell, Harvey

    2017-10-01

    A few different exposure prediction tools were evaluated for use in the new in vitro-based safety assessment paradigm using di-2-ethylhexyl phthalate (DEHP) and dibutyl phthalate (DnBP) as case compounds. Daily intake of each phthalate was estimated using both high-throughput (HT) prediction models such as the HT Stochastic Human Exposure and Dose Simulation model (SHEDS-HT) and the ExpoCast heuristic model and non-HT approaches based on chemical specific exposure estimations in the environment in conjunction with human exposure factors. Reverse dosimetry was performed using a published physiologically based pharmacokinetic (PBPK) model for phthalates and their metabolites to provide a comparison point. Daily intakes of DEHP and DnBP were estimated based on the urinary concentrations of their respective monoesters, mono-2-ethylhexyl phthalate (MEHP) and monobutyl phthalate (MnBP), reported in NHANES (2011-2012). The PBPK-reverse dosimetry estimated daily intakes at the 50th and 95th percentiles were 0.68 and 9.58 μg/kg/d and 0.089 and 0.68 μg/kg/d for DEHP and DnBP, respectively. For DEHP, the estimated median from PBPK-reverse dosimetry was about 3.6-fold higher than the ExpoCast estimate (0.68 and 0.18 μg/kg/d, respectively). For DnBP, the estimated median was similar to that predicted by ExpoCast (0.089 and 0.094 μg/kg/d, respectively). The SHEDS-HT prediction of DnBP intake from consumer product pathways alone was higher at 0.67 μg/kg/d. The PBPK-reverse dosimetry-estimated median intake of DEHP and DnBP was comparable to values previously reported for US populations. These comparisons provide insights into establishing criteria for selecting appropriate exposure prediction tools for use in an integrated modeling platform to link exposure to health effects.
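    For orientation, the crudest form of reverse dosimetry is a steady-state mass balance from a urinary metabolite back to parent-compound intake. The sketch below is that simplification, not the PBPK model used in the paper; the fractional urinary excretion, urine volume, concentration, and body weight are all assumed.

    ```python
    def daily_intake_ug_per_kg(urine_conc_ug_L, urine_vol_L_day, fue,
                               mw_parent, mw_metabolite, bw_kg):
        """One-compartment steady-state reverse dosimetry: back-calculate
        parent intake (ug/kg/d) from a urinary metabolite concentration.

        fue : fractional urinary excretion of the metabolite (mol/mol), assumed
        """
        mol_metab = urine_conc_ug_L * urine_vol_L_day / mw_metabolite  # umol/day excreted
        mol_parent = mol_metab / fue                                   # umol/day ingested
        return mol_parent * mw_parent / bw_kg

    # DEHP -> MEHP example with assumed exposure factors (molecular weights
    # ~390.6 and ~278.3 g/mol for DEHP and MEHP)
    print(daily_intake_ug_per_kg(urine_conc_ug_L=4.0, urine_vol_L_day=1.5,
                                 fue=0.06, mw_parent=390.6,
                                 mw_metabolite=278.3, bw_kg=70.0))
    ```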

  19. Large capacity temporary visual memory.

    PubMed

    Endress, Ansgar D; Potter, Mary C

    2014-04-01

    Visual working memory (WM) capacity is thought to be limited to 3 or 4 items. However, many cognitive activities seem to require larger temporary memory stores. Here, we provide evidence for a temporary memory store with much larger capacity than past WM capacity estimates. Further, based on previous WM research, we show that a single factor--proactive interference--is sufficient to bring capacity estimates down to the range of previous WM capacity estimates. Participants saw a rapid serial visual presentation of 5-21 pictures of familiar objects or words presented at rates of 4/s or 8/s, respectively, and thus too fast for strategies such as rehearsal. Recognition memory was tested with a single probe item. When new items were used on all trials, no fixed memory capacities were observed, with estimates of up to 9.1 retained pictures for 21-item lists, and up to 30.0 retained pictures for 100-item lists, and no clear upper bound to how many items could be retained. Further, memory items were not stored in a temporally stable form of memory but decayed almost completely after a few minutes. In contrast, when, as in most WM experiments, a small set of items was reused across all trials, thus creating proactive interference among items, capacity remained in the range reported in previous WM experiments. These results show that humans have a large-capacity temporary memory store in the absence of proactive interference, and raise the question of whether temporary memory in everyday cognitive processing is severely limited, as in WM experiments, or has the much larger capacity found in the present experiments.

  20. A Particle Batch Smoother Approach to Snow Water Equivalent Estimation

    NASA Technical Reports Server (NTRS)

    Margulis, Steven A.; Girotto, Manuela; Cortes, Gonzalo; Durand, Michael

    2015-01-01

    This paper presents a newly proposed data assimilation method for historical snow water equivalent (SWE) estimation using remotely sensed fractional snow-covered area (fSCA). The newly proposed approach consists of a particle batch smoother (PBS), which is compared to a previously applied Kalman-based ensemble batch smoother (EnBS) approach. The methods were applied over the 27-yr Landsat 5 record at snow pillow and snow course in situ verification sites in the American River basin in the Sierra Nevada (United States). This basin is more densely vegetated and thus more challenging for SWE estimation than the previous applications of the EnBS. Both data assimilation methods provided significant improvement over the prior (modeling only) estimates, with both able to significantly reduce prior SWE biases. The prior RMSE values at the snow pillow and snow course sites were reduced by 68%-82% and 60%-68%, respectively, when applying the data assimilation methods. This result is encouraging for a basin like the American, where the moderate to high forest cover will necessarily obscure more of the snow-covered ground surface than in previously examined, less-vegetated basins. The PBS generally outperformed the EnBS: for snow pillows the PBS RMSE was approximately 54% of that seen in the EnBS, while for snow courses the PBS RMSE was approximately 79% of the EnBS value. Sensitivity tests show relative insensitivity of both the PBS and EnBS results to ensemble size and fSCA measurement error, but a higher sensitivity of the EnBS to the mean prior precipitation input, especially where significant prior biases exist.
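    The core of a particle batch smoother is an importance-weight update: each prior ensemble member is weighted by the likelihood of the whole batch of fSCA observations. A minimal sketch with synthetic particles and observations; the shapes, error levels, and SWE distribution are assumed, and in a real application the predicted fSCA would come from running the snow model for each particle.

    ```python
    import numpy as np

    def pbs_weights(pred_fsca, obs_fsca, obs_sigma):
        """Normalized particle weights from a Gaussian batch likelihood.

        pred_fsca : (n_particles, n_obs) model-predicted fSCA per particle
        obs_fsca  : (n_obs,) remotely sensed fSCA observations
        """
        resid = pred_fsca - obs_fsca
        loglik = -0.5 * np.sum((resid / obs_sigma) ** 2, axis=1)
        w = np.exp(loglik - loglik.max())      # stabilize before normalizing
        return w / w.sum()

    rng = np.random.default_rng(4)
    particle_swe = rng.gamma(2.0, 0.3, 100)                 # prior SWE (m), synthetic
    pred = np.clip(rng.normal(0.6, 0.2, (100, 12)), 0, 1)   # 100 particles, 12 obs
    obs = np.clip(rng.normal(0.55, 0.05, 12), 0, 1)

    w = pbs_weights(pred, obs, obs_sigma=0.1)
    print(w @ particle_swe)   # posterior SWE: likelihood-weighted particle mean
    ```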

  1. High-precision Location, Yield and Tectonic Release of North Korea's 3 September 2017 Nuclear Test

    NASA Astrophysics Data System (ADS)

    Yao, J.; Tian, D.; Wen, L.

    2017-12-01

    On 3 September 2017, the Democratic People's Republic of Korea (North Korea) announced that it had successfully conducted a thermonuclear (hydrogen bomb) test. The nuclear test was corroborated by reports of a seismic event with a magnitude ranging from 6.1 to 6.3 by many governmental and international agencies, although its thermonuclear nature remains to be confirmed. In this study, by combining modern methods of high-precision relocation and satellite imagery, and using the knowledge of a previous test (North Korea's 9 September 2016 test) as a reference, we determine the location and yield of North Korea's 2017 test. The location of the 2017 test is determined by deriving the relative location between North Korea's 2017 and 2016 nuclear tests and using the previously determined location of the 2016 nuclear test by our group, while its yield is estimated based on the relative amplitude ratios of the Lg waves recorded for both events, the previously determined Lg-magnitude of the 2016 test, and the burial depth inferred from satellite imagery. The 2017 nuclear test is determined to be located at (41° 17' 53.52″ N, 129° 4' 27.12″ E) with a geographic precision of 100 m, and its yield is estimated to be 108±48 kt. The 2017 nuclear test and its four previous tests since 2009 are located several hundred meters apart, beneath the same mountain, Mt. Mantap. We also evaluate the tectonic release by the 2017 nuclear test and discuss its implications for the yield estimation of the test.
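
    As a rough illustration of the relative-amplitude approach (constants and example numbers below are placeholders, not the study's calibration): with a magnitude-yield relation of the form mb = a + b*log10(Y) and a common site term, the Lg amplitude ratio between the two co-located tests converts the known reference yield into the new one.

        import math

        def yield_from_lg_ratio(amp_ratio, ref_yield_kt, b=0.75):
            """Relative yield from the Lg amplitude ratio of co-located tests.

            Assumes mb = a + b*log10(Y) with a shared site term, so
            log10(A2/A1) = b * log10(Y2/Y1). The slope b = 0.75 is a
            placeholder, not the value calibrated in this study.
            """
            delta_mb = math.log10(amp_ratio)
            return ref_yield_kt * 10 ** (delta_mb / b)

        # Invented example: amplitudes 4.5x the 2016 test, reference ~15 kt.
        print(round(yield_from_lg_ratio(4.5, 15.0)))   # ~111 kt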

  2. Estimating Genetic Ancestry Proportions from Faces

    PubMed Central

    Klimentidis, Yann C.; Shriver, Mark D.

    2009-01-01

    Ethnicity can be a means by which people identify themselves and others. This type of identification mediates many kinds of social interactions and may reflect adaptations to a long history of group living in humans. Recent admixture in the US between groups from different continents, and the historically strong emphasis on phenotypic differences between members of these groups, presents an opportunity to examine the degree of concordance between estimates of group membership based on genetic markers and visually based estimates of facial features. We first measured the degree of Native American (NA), European, African and East Asian genetic admixture in a sample of 14 self-identified Hispanic individuals, chosen to cover a broad range of Native American and European genetic admixture proportions. We showed frontal and side-view photographs of the 14 individuals to 241 subjects living in New Mexico, and asked them to estimate the degree of NA admixture for each individual. We assessed the overall concordance for each observer based on an aggregated measure of the difference between the observer and the genetic estimates. We found that observers reach a significantly higher degree of concordance than expected by chance, and that the degree of concordance, as well as the direction of the discrepancy in estimates, differs based on the ethnicity of the observer, but not on the observers' age or sex. This study highlights the potentially high degree of discordance between physical appearance and genetic measures of ethnicity, as well as how perceptions of ethnic affiliation are context-specific. We compare our findings to those of previous studies and discuss their implications. PMID:19223962
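
    The record describes concordance only as "an aggregated measure of the difference between the observer and the genetic estimates"; a minimal measure consistent with that description, compared against a permutation-based chance level, might look as follows (purely illustrative, not the authors' metric).

        import numpy as np

        rng = np.random.default_rng(0)

        def discordance(observer_est, genetic_est):
            """Mean absolute difference; lower values mean higher concordance."""
            return np.mean(np.abs(np.asarray(observer_est, dtype=float)
                                  - np.asarray(genetic_est, dtype=float)))

        def chance_distribution(observer_est, genetic_est, n_perm=10_000):
            """Null: shuffle which rating is paired with which face."""
            obs = np.asarray(observer_est, dtype=float)
            return np.array([discordance(rng.permutation(obs), genetic_est)
                             for _ in range(n_perm)])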

  3. Estimation of precipitable water vapour using kinematic GNSS precise point positioning over an altitude range of 1 km

    NASA Astrophysics Data System (ADS)

    Webb, S. R.; Penna, N. T.; Clarke, P. J.; Webster, S.; Martin, I.

    2013-12-01

    The estimation of total precipitable water vapour (PWV) using kinematic GNSS has been investigated since around 2001, with the aim of extending the use of static ground-based GNSS, from which PWV estimates are now operationally assimilated into numerical weather prediction models. To date, kinematic GNSS PWV studies suggest a PWV measurement agreement with radiosondes of 2-3 mm, almost commensurate with static GNSS measurement accuracy, but only shipborne experiments have so far been carried out. As a first step towards extending such sea level-based studies to platforms that operate at a range of altitudes, such as airplanes or land-based vehicles, the kinematic GNSS estimation of PWV over an exactly repeated trajectory is considered. A data set was collected from a GNSS receiver and antenna mounted on a carriage of the Snowdon Mountain Railway, UK, which continually ascends and descends through 950 m of vertical relief. Static GNSS reference receivers were installed at the top and bottom of the altitude profile, and the derived zenith wet delay (ZWD) was interpolated to the altitude of the train to provide reference values, together with profile estimates from the 100 m resolution runs of the Met Office's Unified Model. We demonstrate GNSS accuracies similar to those obtained from previous shipborne studies, namely a double-difference relative kinematic GNSS ZWD accuracy within 14 mm, and a kinematic GNSS precise point positioning ZWD accuracy within 15 mm. The latter is the more typical airborne PWV estimation scenario, i.e., one without reliance on ground-based GNSS reference stations. We also show that kinematic precise point positioning ZWD estimation based on GPS alone is enhanced by incorporating GLONASS observations.
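
    The conversion from zenith wet delay to PWV is not detailed in this record; the standard conversion (with commonly cited refractivity constants, assumed here rather than taken from the paper) multiplies ZWD by a factor that depends on the weighted mean temperature Tm of the atmosphere.

        def zwd_to_pwv_mm(zwd_mm, tm_kelvin):
            """Convert zenith wet delay to precipitable water vapour (mm).

            PWV = Pi * ZWD with Pi = 1e6 / (rho_w * R_v * (k3/Tm + k2')).
            Constants below are commonly cited values, not this study's.
            """
            rho_w = 1000.0   # liquid water density, kg m^-3
            r_v = 461.5      # gas constant of water vapour, J kg^-1 K^-1
            k2p = 0.221      # K Pa^-1 (~22.1 K hPa^-1)
            k3 = 3.776e3     # K^2 Pa^-1 (~3.776e5 K^2 hPa^-1)
            pi_factor = 1e6 / (rho_w * r_v * (k3 / tm_kelvin + k2p))
            return pi_factor * zwd_mm

        # Example: 100 mm of ZWD at Tm = 270 K gives roughly 15 mm of PWV,
        # so the reported 14-15 mm ZWD accuracy is ~2 mm in PWV terms.
        print(round(zwd_to_pwv_mm(100.0, 270.0), 1))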

  4. [Effects of soil data and map scale on assessment of total phosphorus storage in upland soils].

    PubMed

    Li, Heng Rong; Zhang, Li Ming; Li, Xiao di; Yu, Dong Sheng; Shi, Xue Zheng; Xing, Shi He; Chen, Han Yue

    2016-06-01

    Accurate assessment of total phosphorus storage in farmland soils is of great significance to sustainable agriculture and the control of non-point source pollution. However, previous studies have not considered the estimation errors arising from mapping scales and from databases built on different sources of soil profile data. In this study, a total of 393×10^4 hm^2 of upland in the 29 counties (or cities) of North Jiangsu was taken as a case study. We analyzed how the four sources of soil profile data, namely "Soils of County", "Soils of Prefecture", "Soils of Province" and "Soils of China", and the six mapping scales, i.e. 1:50,000, 1:250,000, 1:500,000, 1:1,000,000, 1:4,000,000 and 1:10,000,000, used in the 24 resulting soil databases, affected the assessment of soil total phosphorus. Compared with the most detailed 1:50,000 soil database established with 983 upland soil profiles, the relative deviation of the estimates of soil total phosphorus density (STPD) and soil total phosphorus storage (STPS) from the other soil databases varied from 4.8% to 48.9% and from 1.6% to 48.4%, respectively. The estimates of STPD and STPS based on the 1:50,000 database of "Soils of County" differed from most of the estimates based on the databases of each scale in "Soils of County" and "Soils of Prefecture" at significance levels of P<0.001 or P<0.05. Extremely significant differences (P<0.001) existed between the estimates based on the 1:50,000 database of "Soils of County" and the estimates based on the databases of each scale in "Soils of Province" and "Soils of China". This study demonstrates the importance of appropriate soil data sources and appropriate mapping scales in estimating STPS.
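
    The storage quantities follow standard soil accounting: density (STPD) integrates concentration times bulk density over the profile layers, and storage (STPS) multiplies density by mapped area. A minimal sketch under those usual definitions (illustrative numbers, not data from the study):

        def stpd_g_per_m2(layers):
            """Soil total P density of one profile.

            layers: iterable of (tp_g_per_kg, bulk_density_kg_per_m3,
            thickness_m) tuples, one per horizon.
            """
            return sum(tp * bd * th for tp, bd, th in layers)

        def stps_tonnes(layers_by_map_unit, areas_m2):
            """Storage = sum over map units of density * area (g -> tonnes)."""
            return sum(stpd_g_per_m2(layers) * area
                       for layers, area in zip(layers_by_map_unit, areas_m2)) / 1e6

        # Example: a 0-20 cm layer with 0.6 g/kg TP, bulk density 1300 kg/m^3.
        print(stpd_g_per_m2([(0.6, 1300.0, 0.2)]))   # 156 g/m^2

    The scale and source effects reported above enter through which profiles and which polygon areas feed these two sums.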

  5. Dietary Compositions and Their Seasonal Shifts in Japanese Resident Birds, Estimated from the Analysis of Volunteer Monitoring Data

    PubMed Central

    Yoshikawa, Tetsuro; Osada, Yutaka

    2015-01-01

    Determining the composition of a bird’s diet and its seasonal shifts is fundamental for understanding the ecology and ecological functions of a species. Various methods, each with its own advantages and disadvantages, have been used to estimate the dietary compositions of birds. In this study, we examined the possibility of using long-term volunteer monitoring data as the source of dietary information for 15 resident bird species in Kanagawa Prefecture, Japan. The data were collected from field observations reported by volunteers of regional naturalist groups. Based on these monitoring data, we calculated the monthly dietary composition of each bird species directly, and we also estimated unidentified items within the reported foraging episodes using Bayesian models that incorporated additional information on foraging locations. Next, to examine the validity of the estimated dietary compositions, we compared them with dietary information for the focal birds based on stomach analysis, collected from the past literature. The dietary trends estimated from the monitoring data were largely consistent with the general food habits reported in previous studies of the focal birds. The estimates based on the volunteer monitoring data successfully detected noticeable seasonal shifts from plant materials to animal diets in many of the birds during spring-summer. Comparisons with the stomach analysis data supported the qualitative validity of the monitoring-based dietary information and the effectiveness of the Bayesian models for improving the estimates. This comparison suggests that one advantage of using monitoring data is its ability to detect dietary items such as fleshy fruits, flower nectar, and vertebrates. These results emphasize the potential importance of observational data collected and mined by citizens, especially free descriptive observation data, for use in bird ecology studies. PMID:25723544
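
    The Bayesian models are not specified in this record; the sketch below shows the general idea under simple assumptions: unidentified foraging episodes are allocated to food categories in proportion to category probabilities conditioned on the reported foraging location.

        import numpy as np

        def monthly_composition(identified_counts, unidentified_locs,
                                p_item_given_loc):
            """Dietary composition with unidentified episodes allocated by location.

            identified_counts : (n_items,) counts of identified food items
            unidentified_locs : location index per unidentified episode
            p_item_given_loc  : (n_locs, n_items) allocation probabilities
            """
            counts = np.asarray(identified_counts, dtype=float).copy()
            for loc in unidentified_locs:
                counts += p_item_given_loc[loc]   # expected fractional allocation
            return counts / counts.sum()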

  6. Robust estimation of the proportion of treatment effect explained by surrogate marker information.

    PubMed

    Parast, Layla; McDermott, Mary M; Tian, Lu

    2016-05-10

    In randomized treatment studies where the primary outcome requires long follow-up of patients and/or expensive or invasive obtainment procedures, the availability of a surrogate marker that could be used to estimate the treatment effect and could potentially be observed earlier than the primary outcome would allow researchers to draw conclusions regarding the treatment effect with less follow-up time and fewer resources. The Prentice criterion for a valid surrogate marker requires that a test for treatment effect on the surrogate marker also be a valid test for treatment effect on the primary outcome of interest. Based on this criterion, methods have been developed to define and estimate the proportion of the treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. These methods aim to identify useful statistical surrogates that capture a large proportion of the treatment effect. However, current methods to estimate this proportion usually require restrictive model assumptions that may not hold in practice and thus may lead to biased estimates of this quantity. In this paper, we propose a nonparametric procedure to estimate the proportion of the treatment effect on the primary outcome that is explained by the treatment effect on a potential surrogate marker, and we extend this procedure to a setting with multiple surrogate markers. We compare our approach with previously proposed model-based approaches and propose a variance estimation procedure based on a perturbation-resampling method. Simulation studies demonstrate that the procedure performs well in finite samples and outperforms model-based procedures when the specified models are not correct. We illustrate our proposed procedure using a data set from a randomized study investigating a group-mediated cognitive behavioral intervention for peripheral artery disease participants. Copyright © 2015 John Wiley & Sons, Ltd.
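
    A minimal, model-free version of the quantity being estimated (not the authors' estimator, which uses smoothing and perturbation resampling): the proportion explained is one minus the ratio of the residual effect, after matching on the surrogate, to the total effect. Coarse histogram matching stands in for kernel methods here.

        import numpy as np

        def pte_estimate(y_trt, s_trt, y_ctl, s_ctl, n_bins=10):
            """Crude proportion-of-treatment-effect-explained estimate."""
            delta = y_trt.mean() - y_ctl.mean()        # total treatment effect
            edges = np.quantile(np.concatenate([s_trt, s_ctl]),
                                np.linspace(0, 1, n_bins + 1))
            bins_t = np.clip(np.digitize(s_trt, edges[1:-1]), 0, n_bins - 1)
            bins_c = np.clip(np.digitize(s_ctl, edges[1:-1]), 0, n_bins - 1)
            residual = 0.0
            for b in range(n_bins):
                if (bins_t == b).any() and (bins_c == b).any():
                    w = (bins_t == b).mean()           # treated share in bin b
                    residual += w * (y_trt[bins_t == b].mean()
                                     - y_ctl[bins_c == b].mean())
            return 1.0 - residual / delta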

  7. Development and validation of a new population-based simulation model of osteoarthritis in New Zealand.

    PubMed

    Wilson, R; Abbott, J H

    2018-04-01

    To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system. Copyright © 2018 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
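
    A discrete-time state-transition microsimulation of the kind described advances each simulated individual through disease states year by year. The stripped-down loop below uses invented states and transition probabilities, not NZ-MOA parameters.

        import numpy as np

        rng = np.random.default_rng(42)
        STATES = ["no_OA", "mild", "moderate", "severe"]    # illustrative
        P = np.array([[0.97, 0.03, 0.00, 0.00],             # placeholder annual
                      [0.00, 0.93, 0.07, 0.00],             # transition matrix
                      [0.00, 0.00, 0.94, 0.06],
                      [0.00, 0.00, 0.00, 1.00]])

        def simulate_cohort(n_people=100_000, n_years=60):
            """March every individual through yearly state transitions."""
            state = np.zeros(n_people, dtype=int)   # all start disease-free
            for _ in range(n_years):
                u = rng.random(n_people)
                cum = P[state].cumsum(axis=1)
                state = (u[:, None] >= cum).sum(axis=1)   # inverse-CDF draw
            return np.bincount(state, minlength=len(STATES)) / n_people

        print(dict(zip(STATES, simulate_cohort().round(3))))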

  8. Effect of non-normality on test statistics for one-way independent groups designs.

    PubMed

    Cribbie, Robert A; Fiksenbaum, Lisa; Keselman, H J; Wilcox, Rand R

    2012-02-01

    The data obtained from one-way independent groups designs are typically non-normal in form and rarely equally variable across treatment populations (i.e., population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e., the analysis of variance F test) typically provides invalid results (e.g., too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied to the usual least squares estimators of central tendency and variability, and the Welch test with robust estimators (i.e., trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal. © 2011 The British Psychological Society.
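
    The robust estimators referred to are standard: a 20% trimmed mean discards the smallest and largest 20% of each group, and the Winsorized variance replaces those tail values with the nearest retained ones. A minimal sketch (scipy.stats offers equivalents such as trim_mean):

        import numpy as np

        def trimmed_mean(x, prop=0.2):
            x = np.sort(np.asarray(x, dtype=float))
            g = int(prop * len(x))              # count trimmed from each tail
            return x[g:len(x) - g].mean()

        def winsorized_variance(x, prop=0.2):
            x = np.sort(np.asarray(x, dtype=float))
            g = int(prop * len(x))
            x[:g], x[len(x) - g:] = x[g], x[len(x) - g - 1]   # clamp the tails
            return x.var(ddof=1)

        sample = [2, 3, 3, 4, 5, 5, 6, 6, 7, 40]    # one gross outlier
        print(trimmed_mean(sample), winsorized_variance(sample))

    Substituting these for the ordinary mean and variance in a Welch-type statistic, with appropriately adjusted degrees of freedom as in Yuen's method, gives the robust test referred to above.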

  9. Blood viscosity monitoring during cardiopulmonary bypass based on pressure-flow characteristics of a Newtonian fluid.

    PubMed

    Okahara, Shigeyuki; Soh, Zu; Takahashi, Shinya; Sueda, Taijiro; Tsuji, Toshio

    2016-08-01

    In a previous study, we proposed a blood viscosity estimation method based on the pressure-flow characteristics of oxygenators used during cardiopulmonary bypass (CPB), and showed that the estimated viscosity correlated well with the measured viscosity. However, determining the parameters of that method required the use of blood, leading to a high calibration cost. In this study, we therefore propose a new method to monitor blood viscosity that approximates the pressure-flow characteristics of blood, a non-Newtonian fluid, with those of a Newtonian fluid, using parameters derived from glycerin solution, which is easy to obtain. Because the parameters used in the estimation method depend on the fluid type, bovine blood parameters were used to calculate the estimated viscosity (ηe), and glycerin parameters were used to calculate the deemed viscosity (ηdeem). Three samples of whole bovine blood with different hematocrit levels (21.8%, 31.0%, and 39.8%) were prepared and perfused through the oxygenator. As the temperature changed from 37 °C to 27 °C, the oxygenator mean inlet and outlet pressures were recorded for flows of 2 L/min and 4 L/min, and the viscosity was estimated. The deemed viscosity calculated with the glycerin parameters was lower than the estimated viscosity calculated with the bovine blood parameters by 20-33% at 21.8% hematocrit, 12-27% at 31.0% hematocrit, and 10-15% at 39.8% hematocrit. Furthermore, the deemed viscosity was lower than the estimated viscosity by 10-30% at 2 L/min and 30-40% at 4 L/min. Nevertheless, the estimated and deemed viscosities varied with a similar slope. This shows that the deemed viscosity obtained using glycerin parameters may be capable of monitoring relative changes in blood viscosity in a perfusing oxygenator.
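
    The oxygenator-specific pressure-flow relation is not given in this record, but the principle is that of any laminar-flow viscometer: for a Newtonian fluid, pressure drop is proportional to viscosity times flow, so viscosity follows from measured pressure drop and flow once a geometry factor is calibrated. An illustrative sketch with a hypothetical calibration constant:

        def estimate_viscosity_mpas(dp_mmhg, q_l_per_min, k_cal=16.7):
            """Viscosity from dP = k * eta * Q (laminar, Newtonian assumption).

            k_cal lumps device geometry and unit conversions; the value here
            is a hypothetical placeholder, not the study's glycerin-derived
            parameter set.
            """
            return dp_mmhg / (k_cal * q_l_per_min)

        # Example: 200 mmHg across the oxygenator at 4 L/min -> ~3 mPa*s.
        print(round(estimate_viscosity_mpas(200.0, 4.0), 2))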

  10. ERF1_2 -- Enhanced River Reach File 2.0

    USGS Publications Warehouse

    Nolan, Jacqueline V.; Brakebill, John W.; Alexander, Richard B.; Schwarz, Gregory E.

    2003-01-01

    The digital segmented network based on watershed boundaries, ERF1_2, includes enhancements to the U.S. Environmental Protection Agency's River Reach File 1 (RF1) (USEPA, 1996; DeWald and others, 1985) to support national and regional-scale surface water-quality modeling. Alexander and others (1999) developed ERF1, which assessed the hydrologic integrity of the digital reach traces and calculated the mean water time-of-travel in river reaches and reservoirs. ERF1_2 serves as the foundation for SPARROW (SPAtially Referenced Regressions On Watershed attributes) modeling. Within the context of a Geographic Information System, SPARROW estimates the proportion of watersheds in the conterminous U.S. with outflow concentrations of several nutrients, including total nitrogen and total phosphorus (Smith, R.A., Schwarz, G.E., and Alexander, R.B., 1997). This version of the network expands on ERF1 (Version 1.2; Alexander et al., 1999) and includes the incremental and total drainage area derived from 1-kilometer (km) elevation data for North America. Previous estimates of the water time-of-travel were recomputed for reaches that were split at water-quality monitoring sites into two reaches. The mean flow and velocity estimates for these split reaches are based on previous estimation methods (Alexander et al., 1999) and are unchanged in ERF1_2. Drainage area calculations provide data used to estimate the contribution of a given nutrient to the outflow. Data estimates depend on the accuracy of node connectivity. Reaches split at water-quality or pesticide-monitoring sites indicate the source point for estimating the contribution and transport of nutrients and their loads throughout the watersheds. The ERF1_2 coverage extends the earlier drainage area work founded on the 1-kilometer data for North America (Verdin, 1996; Verdin and Jenson, 1996). A 1-kilometer raster grid of ERF1_2, projected to Lambert Azimuthal Equal Area, NAD 27 Datum (Snyder, 1987), was merged with the HYDRO1K flow direction data set (Verdin and Jenson, 1996) to generate a DEM-based watershed grid, ERF1_2WS_LG. The watershed boundaries are maintained in a raster (grid cell) format as well as a vector (polygon) format for subsequent model analysis. Both the coverage, ERF1_2, and the grid, ERF1_2WS_LG, are available at: http://water.usgs.gov/lookup/getspatial?erf1_2

  11. A regional high-resolution carbon flux inversion of North America for 2004

    NASA Astrophysics Data System (ADS)

    Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Baker, I. T.; Uliasz, M.; Parazoo, N.; Andrews, A. E.; Worthy, D. E. J.

    2010-05-01

    Resolving the discrepancies between NEE estimates based upon (1) ground studies and (2) atmospheric inversion results demands increasingly sophisticated techniques. In this paper we present a high-resolution inversion based upon a regional meteorology model (RAMS) and an underlying biosphere model (SiB3), both running on an identical 40 km grid over most of North America. Current operational systems like CarbonTracker, as well as many previous global inversions including the Transcom suite of inversions, have utilized inversion regions formed by collapsing biome-similar grid cells into larger aggregated regions. An extreme example of this might be a case where corrections to NEE imposed on forested regions on the east coast of the United States are the same as those imposed on forests on the west coast, while in reality there likely exist subtle differences between the two areas, both natural and anthropogenic. Our inversion framework utilizes a combination of previously employed inversion techniques while allowing carbon flux corrections to be biome-independent. Temporally and spatially high-resolution results utilizing biome-independent corrections provide insight into carbon dynamics in North America. In particular, we analyze hourly CO2 mixing ratio data from a sparse network of eight towers in North America for 2004. A prior estimate of carbon fluxes due to Gross Primary Productivity (GPP) and Ecosystem Respiration (ER) is constructed from the SiB3 biosphere model on a 40 km grid. A combination of transport from the RAMS and the Parameterized Chemical Transport Model (PCTM) is used to forge a connection between upwind biosphere fluxes and downwind observed CO2 mixing ratios. A Kalman filter procedure is used to estimate weekly corrections to biosphere fluxes based upon observed CO2. RMSE-weighted annual NEE estimates, over an ensemble of potential inversion parameter sets, show a mean estimate of a 0.57 Pg/yr sink in North America. We perform the inversion with two independently derived boundary inflow conditions and calculate jackknife-based statistics to test the robustness of the model results. We then compare final results to estimates obtained from the CarbonTracker inversion system and at the Southern Great Plains flux site. Results are promising, showing the ability to correct carbon fluxes from the biosphere models over annual and seasonal time scales, as well as over the different GPP and ER components. Additionally, the correlation of an estimated sink of carbon in the South Central United States with regionally anomalous high precipitation in an area of managed agricultural and forest lands provides interesting hypotheses for future work.
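
    The core of such an inversion is a linear update: residuals between observed and simulated CO2 mixing ratios, mapped through a transport-derived Jacobian, correct the prior flux scaling factors. A bare-bones Kalman update under those assumptions (dimensions and error levels are illustrative):

        import numpy as np

        def kalman_flux_update(beta, P, H, y_obs, y_prior, r_obs):
            """One weekly update of flux-correction factors from CO2 data.

            beta    : (n_flux,) prior corrections to biosphere fluxes
            P       : (n_flux, n_flux) prior error covariance
            H       : (n_obs, n_flux) transport Jacobian (ppm per unit flux)
            y_obs   : (n_obs,) observed CO2 mixing ratios
            y_prior : (n_obs,) CO2 simulated with prior fluxes
            r_obs   : observation error standard deviation (ppm)
            """
            R = (r_obs ** 2) * np.eye(len(y_obs))
            S = H @ P @ H.T + R                    # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            beta_post = beta + K @ (y_obs - y_prior)
            P_post = (np.eye(len(beta)) - K @ H) @ P
            return beta_post, P_post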

  12. Progress report on daily flow-routing simulation for the Carson River, California and Nevada

    USGS Publications Warehouse

    Hess, G.W.

    1996-01-01

    A physically based flow-routing model using the Hydrological Simulation Program-FORTRAN (HSPF) was constructed for modeling streamflow in the Carson River at daily time intervals as part of the Truckee-Carson Program of the U.S. Geological Survey (USGS). Daily streamflow data for water years 1978-92 for the mainstem river, tributaries, and irrigation ditches, from the East Fork Carson River near Markleeville and the West Fork Carson River at Woodfords down to the mainstem Carson River at Fort Churchill upstream from Lahontan Reservoir, were obtained from several agencies and compiled into a comprehensive data base. No previous physically based flow-routing model of the Carson River has incorporated multi-agency streamflow data into a single data base and simulated flow at a daily time interval. Where streamflow data were unavailable or incomplete, hydrologic techniques were used to estimate some flows. For modeling purposes, the Carson River was divided into six segments, which correspond to those used in the Alpine Decree that governs water rights along the river. Hydraulic characteristics were defined for 48 individual stream reaches based on cross-sectional survey data obtained from field surveys and previous studies. Simulation results from the model were compared with available observed and estimated streamflow data. Model testing demonstrated that the hydraulic characteristics of the Carson River are adequately represented in the model for a range of flow regimes. Differences between simulated and observed streamflow result mostly from inadequate data characterizing inflow to and outflow from the river. Because irrigation return flows are largely unknown, irrigation return flow percentages were used as a calibration parameter to minimize differences between observed and simulated streamflows. Observed and simulated streamflows were compared for daily periods for the full modeled length of the Carson River and for two major subreaches modeled with more detailed input data. Hydrographs and statistics presented in this report describe these differences. A sensitivity analysis of four estimated components of the hydrologic system evaluated which components were significant in the model. Estimated ungaged tributary streamflow is not a significant component of the model during low runoff, but is significant during high runoff. The sensitivity analysis indicates that changes in the estimated irrigation diversions and estimated return flows create a noticeable change in the statistics. The modeling for this study is preliminary. Results of the model are constrained by the current availability and accuracy of observed hydrologic data. Several inflows and outflows of the Carson River are not described by time-series data and therefore are not represented in the model.

  13. Bayesian Small Area Estimates of Diabetes Incidence by United States County, 2009

    PubMed Central

    Barker, Lawrence E.; Thompson, Theodore J.; Kirtland, Karen A; Boyle, James P; Geiss, Linda S; McCauley, Mary M.; Albright, Ann L.

    2015-01-01

    In the United States, diabetes is common and costly. Programs to prevent new cases of diabetes are often carried out at the level of the county, a unit of local government. Thus, efficient targeting of such programs requires county-level estimates of diabetes incidence, the fraction of the non-diabetic population who received their diagnosis of diabetes during the past 12 months. Previously, only estimates of prevalence, the overall fraction of the population who have the disease, have been available at the county level. Counties with high prevalence might or might not be the same as counties with high incidence, due to spatial variation in mortality and the relocation of persons with incident diabetes to other counties. Existing methods cannot be used to estimate county-level diabetes incidence, because the fraction of the population who receive a diabetes diagnosis in any year is too small. Here, we extend previously developed methods of Bayesian small-area estimation of prevalence, using diffuse priors, to estimate diabetes incidence for all U.S. counties based on data from a survey designed to yield state-level estimates. We found high incidence in the southeastern United States, the Appalachian region, and in scattered counties throughout the western U.S. Our methods might be applicable in other circumstances in which all cases of a rare condition must also be cases of a more common condition (in this analysis, "newly diagnosed cases of diabetes" and "cases of diabetes"). If appropriate data are available, our methods can be used to estimate the proportion of the population with the rare condition at greater geographic specificity than the data source was designed to provide. PMID:26279666
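
    A toy version of the hierarchical idea, far simpler than the paper's model and with invented numbers: under a diffuse Beta prior, a county's incidence among its non-diabetic population has a conjugate Beta posterior even when the observed count of new diagnoses is very small.

        from scipy import stats

        def county_incidence_posterior(new_cases, n_nondiabetic, a0=0.5, b0=0.5):
            """Beta posterior for incidence under a diffuse Jeffreys prior."""
            post = stats.beta(a0 + new_cases, b0 + n_nondiabetic - new_cases)
            return post.mean(), post.interval(0.95)

        # Invented example: 3 new diagnoses among 450 non-diabetic respondents.
        mean, (lo, hi) = county_incidence_posterior(3, 450)
        print(f"incidence ~ {mean:.4f} (95% interval {lo:.4f}-{hi:.4f})")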

  14. Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holland, K. Todd

    2011-01-01

    A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave-breaking information is essential for reducing prediction errors. In many practical situations, this information could be provided by a shore-based observer or by remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when uncertainty in the inputs (e.g., depth and tuning parameters) was accounted for. Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave-height dependence consistent with the results of previous studies, but their uncertainty estimates also explain previously reported variations in the model parameters.
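
    The inverse logic of such a network reduces to Bayes' rule over discretized variables: the posterior over offshore wave height given an onshore observation is the likelihood times the prior, renormalized. A tiny discrete sketch with invented bins, prior, and forward model:

        import numpy as np

        h_offshore = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # m, discretized
        prior = np.array([0.30, 0.30, 0.20, 0.15, 0.05])

        def posterior_offshore(h_onshore_obs, decay=0.6, sigma=0.3):
            """P(offshore H | onshore obs) over the discrete bins.

            Forward model: onshore height ~ decay * offshore height + noise;
            decay and sigma are illustrative, not the surf-zone model's values.
            """
            lik = np.exp(-0.5 * ((h_onshore_obs - decay * h_offshore) / sigma) ** 2)
            post = lik * prior
            return post / post.sum()

        print(posterior_offshore(1.2).round(3))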

  15. Strain, curvature, and twist measurements in digital holographic interferometry using pseudo-Wigner-Ville distribution based method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2009-09-15

    Measurement of the strain, curvature, and twist of a deformed object plays an important role in deformation analysis. Strain depends on the first-order displacement derivative, whereas curvature and twist are determined by second-order displacement derivatives. This paper proposes a pseudo-Wigner-Ville distribution based method for the measurement of strain, curvature, and twist in digital holographic interferometry, where the object deformation or displacement is encoded as the interference phase. In the proposed method, the phase derivative is estimated by peak detection of the pseudo-Wigner-Ville distribution evaluated along each row/column of the reconstructed interference field. A complex exponential signal with unit amplitude and the phase derivative estimate as its argument is then generated, and the pseudo-Wigner-Ville distribution along each row/column of this signal is evaluated. The curvature is estimated by using a peak tracking strategy for the new distribution. For estimation of twist, the pseudo-Wigner-Ville distribution is evaluated along each column/row (i.e., in the alternate direction with respect to the previous one) for the generated complex exponential signal, and the corresponding peak detection gives the twist estimate.
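
    In signal terms, the first step amounts to forming the windowed instantaneous autocorrelation of the interference field along each row, taking its Fourier transform, and reading the phase derivative off the spectral peak. A compact sketch of that step (the curvature and twist stages repeat the same operation on the regenerated unit-amplitude signal):

        import numpy as np

        def phase_derivative_pwvd(row, win_half=16, n_fft=256):
            """Phase-derivative estimate along one row of a complex fringe field."""
            n = len(row)
            m = np.arange(-win_half, win_half + 1)
            window = np.hamming(len(m))
            freqs = np.fft.fftfreq(n_fft)              # cycles/sample
            deriv = np.zeros(n)
            for k in range(win_half, n - win_half):
                # instantaneous autocorrelation x(k+m) * conj(x(k-m)), windowed
                r = row[k + m] * np.conj(row[k - m]) * window
                spec = np.abs(np.fft.fft(r, n_fft))
                # the PWVD peak sits at f = (d phi/dx) / pi
                deriv[k] = np.pi * freqs[np.argmax(spec)]
            return deriv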

  16. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes.

    PubMed

    van Walraven, Carl

    2017-04-01

    Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results in which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and of its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which the model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
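
    A minimal version of the winning strategy, assuming model-based probabilities are already in hand: in each bootstrap replicate, resample patients and draw each one's disease status as a Bernoulli trial from the model probability, then pool the replicate estimates.

        import numpy as np

        rng = np.random.default_rng(7)

        def bootstrap_imputed_prevalence(p_disease, n_boot=2000):
            """Prevalence when status is known only as a model probability."""
            n = len(p_disease)
            est = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(0, n, n)              # resample patients
                status = rng.random(n) < p_disease[idx]  # impute 0/1 status
                est[b] = status.mean()
            return est.mean(), np.percentile(est, [2.5, 97.5])

        # Synthetic probabilities skewed toward low risk, for illustration.
        probs = rng.beta(0.5, 8.0, size=50_074)
        print(bootstrap_imputed_prevalence(probs))

    Averaging Bernoulli draws rather than thresholding the probabilities is what avoids the categorization bias described above.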

  17. The Volume of Earth's Lakes

    NASA Astrophysics Data System (ADS)

    Cael, B. B.

    How much water do lakes on Earth hold? Global lake volume estimates are scarce, highly variable, and poorly documented. We develop a mechanistic null model for estimating global lake mean depth and volume based on a statistical topographic approach to Earth's surface. The volume-area scaling prediction is accurate and consistent within and across lake datasets spanning diverse regions. We applied these relationships to a global lake area census to estimate global lake volume and depth. The volume of Earth's lakes is 199,000 km3 (95% confidence interval 196,000-202,000 km3). This volume is in the range of historical estimates (166,000-280,000 km3), but the overall mean depth of 41.8 m (95% CI 41.2-42.4 m) is significantly lower than previous estimates (62-151 m). These results highlight and constrain the relative scarcity of lake waters in the hydrosphere and have implications for the role of lakes in global biogeochemical cycles. We also evaluate the size (area) distribution of lakes on Earth compared to expectations from percolation theory. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 2388357.
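
    The aggregation step implied by "applied these relationships to a global lake area census" is a power-law sum over lake areas; the sketch below uses placeholder scaling parameters, since the fitted coefficient and exponent are not given in this record.

        import numpy as np

        def total_lake_volume_km3(areas_km2, c=0.18, b=1.25):
            """Sum V = c * A**b over a lake-area census.

            c and b are placeholder volume-area scaling parameters,
            not the study's fitted values.
            """
            areas = np.asarray(areas_km2, dtype=float)
            return float(np.sum(c * areas ** b))

        # Synthetic census: many small lakes, a heavy right tail of large ones.
        rng = np.random.default_rng(1)
        areas = rng.pareto(1.1, size=1_000_000) * 0.01 + 0.001   # km^2
        print(f"{total_lake_volume_km3(areas):,.0f} km^3")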

  18. Accuracy of latent-variable estimation in Bayesian semi-supervised learning.

    PubMed

    Yamazaki, Keisuke

    2015-09-01

    Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in unsupervised learning, and one concern is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of latent-variable estimation. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. WE-G-204-07: Automated Characterization of Perceptual Quality of Clinical Chest Radiographs: Improvements in Lung, Spine, and Hardware Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, J; Zhang, L; Samei, E

    Purpose: To develop and validate more robust methods for automated lung, spine, and hardware detection in AP/PA chest images. This work is part of a continuing effort to automatically characterize the perceptual image quality of clinical radiographs. [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] Methods: Our previous implementation of lung/spine identification was applicable to only one vendor. A more generalized routine was devised based on three primary components: lung boundary detection, fuzzy c-means (FCM) clustering, and a clinically-derived lung pixel probability map. Boundary detection was used to constrain the lung segmentations. FCM clustering produced grayscale- and neighborhood-based pixel classification probabilities, which were weighted by the clinically-derived probability maps to generate a final lung segmentation. Lung centerlines were set along the left-right lung midpoints. Spine centerlines were estimated as a weighted average of body contour, lateral lung contour, and intensity-based centerline estimates. Centerline estimation was tested on 900 clinical AP/PA chest radiographs which included inpatient/outpatient, upright/bedside, men/women, and adult/pediatric images from multiple imaging systems. Our previous implementation further did not account for the presence of medical hardware (pacemakers, wires, implants, staples, stents, etc.) potentially biasing image quality analysis. A hardware detection algorithm was developed using a gradient-based thresholding method. The training and testing paradigm used a set of 48 images from which 1920 51×51-pixel ROIs with hardware and 1920 ROIs without hardware were manually selected. Results: Acceptable lung centerlines were generated in 98.7% of radiographs, while spine centerlines were acceptable in 99.1% of radiographs. Following threshold optimization, the hardware detection software yielded average true positive and true negative rates of 92.7% and 96.9%, respectively. Conclusion: Updated segmentation and centerline estimation methods, in addition to new gradient-based hardware detection software, provide improved data integrity control and error-checking for automated clinical chest image quality characterization across multiple radiography systems.
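
    The FCM step referred to iterates two closed-form updates: cluster centers as membership-weighted means, and memberships from inverse distance ratios. A minimal grayscale sketch with fuzzifier m = 2 (the clinically derived probability-map weighting applied afterwards is omitted):

        import numpy as np

        def fuzzy_cmeans(pixels, n_clusters=2, m=2.0, n_iter=50, seed=0):
            """Fuzzy c-means on grayscale values; returns (memberships, centers)."""
            rng = np.random.default_rng(seed)
            x = np.asarray(pixels, dtype=float).ravel()
            u = rng.dirichlet(np.ones(n_clusters), size=x.size)   # memberships
            for _ in range(n_iter):
                um = u ** m
                centers = um.T @ x / um.sum(axis=0)               # weighted means
                d = np.abs(x[:, None] - centers[None, :]) + 1e-12
                u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                                 axis=2)
            return u, centers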

  20. Research of Water Level Prediction for a Continuous Flood due to Typhoons Based on a Machine Learning Method

    NASA Astrophysics Data System (ADS)

    Nakatsugawa, M.; Kobayashi, Y.; Okazaki, R.; Taniguchi, Y.

    2017-12-01

    This research aims to improve the accuracy of water level prediction calculations for more effective river management. In August 2016, Hokkaido was visited by four typhoons, whose heavy rainfall caused severe flooding. In the Tokoro river basin of Eastern Hokkaido, the water level (WL) at the Kamikawazoe gauging station, which is in the lower reaches, exceeded the design high-water level, and the water rose to the highest level on record. To predict such flood conditions and mitigate disaster damage, it is necessary to improve the accuracy of prediction as well as to prolong the lead time (LT) available for disaster mitigation measures such as flood-fighting activities and evacuation by residents. The river water level around the peak stage must therefore be predicted earlier and more accurately. Previous research dealing with WL prediction proposed a method in which the WL in the lower reaches is estimated from its correlation with the WL in the upper reaches (hereinafter: "the water level correlation method"). Additionally, a runoff model-based method has been widely used in which the discharge is estimated by feeding rainfall prediction data to a runoff model, such as a storage function model, and the WL is then estimated from that discharge by using a water level-discharge rating curve (H-Q curve). In this research, an attempt was made to predict WL by applying the Random Forest (RF) method, a machine learning method that can estimate the contribution of explanatory variables. Furthermore, from a practical point of view, we investigated the prediction of WL based on a multiple correlation (MC) method using the explanatory variables with high contributions in the RF method, and we examined the proper selection of explanatory variables and the extension of LT. The following results were found: 1) Based on the RF method trained on previous floods, the WL for the abnormal flood of August 2016 was properly predicted with a lead time of 6 h. 2) Based on the contributions of the explanatory variables, factors were selected for the MC method, and plausible prediction results were obtained.
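
    A minimal version of the setup described, with invented feature names (lagged upstream levels and basin-mean rainfall) and a 6 h lead time: fit a random forest on past flood records, predict the downstream level 6 h ahead, and read variable contributions off the importances, as used here to pick factors for the MC model.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor

        # Hypothetical hourly training table from past floods; the target is
        # the downstream WL six hours later.
        rng = np.random.default_rng(3)
        df = pd.DataFrame({
            "wl_upstream": rng.random(500) * 3,
            "wl_upstream_lag3h": rng.random(500) * 3,
            "rain_basin_mean_6h": rng.random(500) * 40,
            "wl_downstream_now": rng.random(500) * 5,
        })
        df["wl_downstream_in_6h"] = (0.8 * df["wl_downstream_now"]
                                     + 0.3 * df["wl_upstream"]
                                     + 0.02 * df["rain_basin_mean_6h"]
                                     + rng.random(500) * 0.2)

        X = df.drop(columns="wl_downstream_in_6h")
        y = df["wl_downstream_in_6h"]
        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

        # Contribution of each explanatory variable.
        print(dict(zip(X.columns, rf.feature_importances_.round(3))))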
