Sample records for previously estimated values

  1. Estimation of Anaerobic Debromination Rate Constants of PBDE Pathways Using an Anaerobic Dehalogenation Model.

    PubMed

    Karakas, Filiz; Imamoglu, Ipek

    2017-04-01

    This study aims to estimate anaerobic debromination rate constants (k_m) of PBDE pathways using previously reported laboratory soil data. The k_m values of pathways are estimated by modifying a previously developed model, the Anaerobic Dehalogenation Model. Debromination activities published in the literature in terms of bromine substitutions, as well as specific microorganisms and their combinations, are used for identification of pathways. The estimated k_m values range between 0.0003 and 0.0241 d^-1. The median and maximum k_m values are found to be comparable to the few available biologically confirmed rate constants published in the literature. The estimated k_m values can be used as input to numerical fate and transport models for a better and more detailed investigation of the fate of individual PBDEs in contaminated sediments. Various remediation scenarios such as monitored natural attenuation or bioremediation with bioaugmentation can be handled in a more quantitative manner with the help of the k_m values estimated in this study.
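
    As a minimal illustration of how such rate constants feed a fate model, the sketch below integrates a first-order debromination chain (parent congener to daughter congener); the two-step chain, the chosen k values (picked from the reported 0.0003-0.0241 d^-1 range), and the initial concentration are hypothetical, not taken from the study.

    ```python
    import numpy as np

    # Hypothetical first-order debromination chain: parent -> daughter -> further loss.
    # Rate constants (1/day) are illustrative picks from the reported k_m range.
    k_parent, k_daughter = 0.0241, 0.0003
    c0 = 100.0                          # initial parent concentration (arbitrary units)
    t = np.linspace(0.0, 3650.0, 366)   # ten years in 10-day steps

    parent = c0 * np.exp(-k_parent * t)
    # Analytical (Bateman) solution for the daughter of a two-step chain.
    daughter = c0 * k_parent / (k_daughter - k_parent) * (
        np.exp(-k_parent * t) - np.exp(-k_daughter * t)
    )

    print(f"parent remaining after ~360 days: {parent[36]:.1f}")
    ```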

  2. Global Admittance Estimates of Elastic and Crustal Thickness of Venus: Results from Top, Hot Spot, and Bottom Loading Models

    NASA Technical Reports Server (NTRS)

    Smrekar, S. E.; Anderson, F. S.

    2005-01-01

    We have calculated admittance spectra using the spatio-spectral method [14] for Venus by moving the central location of the spectrum over a 1° grid, creating 360x180 admittance spectra. We invert the observed admittance using top-loading (TL), hot spot (HS), and bottom loading (BL) models, resulting in elastic, crustal, and lithospheric thickness estimates (Te, Zc, and Zl). The result is a global map for interpreting subsurface structure. Estimated values of Te and Zc concur with previous TL local admittance results, but BL estimates indicate larger values than previously suspected.

  3. Dynamic estimator for determining operating conditions in an internal combustion engine

    DOEpatents

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-01-05

    Methods and systems are provided for estimating engine performance information for a combustion cycle of an internal combustion engine. Estimated performance information for a previous combustion cycle is retrieved from memory. The estimated performance information includes an estimated value of at least one engine performance variable. Actuator settings applied to engine actuators are also received. The performance information for the current combustion cycle is then estimated based, at least in part, on the estimated performance information for the previous combustion cycle and the actuator settings applied during the previous combustion cycle. The estimated performance information for the current combustion cycle is then stored to the memory to be used in estimating performance information for a subsequent combustion cycle.
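
    The cycle-to-cycle recursion described above can be sketched as a simple predictor loop; the state variable, the linear carryover model, and the gain values below are hypothetical stand-ins for illustration, not the patented estimator.

    ```python
    # Minimal sketch of a cycle-to-cycle recursive estimator (hypothetical dynamics).
    def estimate_next(prev_estimate: float, actuator_setting: float) -> float:
        # Assumed linear cycle-coupling model: part of the previous cycle's
        # state carries over, plus a contribution from the actuator command.
        carryover, gain = 0.7, 0.3
        return carryover * prev_estimate + gain * actuator_setting

    memory = 0.0  # estimated performance variable stored from the previous cycle
    for actuator in [1.0, 1.0, 0.8, 1.2]:         # settings applied each cycle
        memory = estimate_next(memory, actuator)  # stored for the next cycle
        print(f"estimated value: {memory:.3f}")
    ```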

  4. Summary and evaluation of hydraulic property data available for the Hanford Site upper basalt confined aquifer system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spane, F.A. Jr.; Vermeul, V.R.

    Pacific Northwest Laboratory, as part of the Hanford Site Ground-Water Surveillance Project, examines the potential for offsite migration of contamination within the upper basalt confined aquifer system. For the past 40 years, hydrologic testing of the upper basalt confined aquifer has been conducted by a number of Hanford Site programs. Hydraulic property estimates are important for evaluating aquifer flow characteristics (i.e., ground-water flow patterns, flow velocity, transport travel time). Presented is the first comprehensive Hanford Site-wide summary of hydraulic properties for the upper basalt confined aquifer system (i.e., the upper Saddle Mountains Basalt). Available hydrologic test data were reevaluated using recently developed diagnostic test analysis methods. A comparison of calculated transmissivity estimates indicates that, for most test results, a general correspondence within a factor of two between reanalysis and previously reported test values was obtained. For a majority of the tests, previously reported values are greater than reanalysis estimates. This overestimation is attributed to a number of factors, including, in many cases, a misapplication of nonleaky confined aquifer analysis methods in previous analysis reports to tests that exhibit leaky confined aquifer response behavior. Results of the test analyses indicate a similar range of transmissivity values for the various hydrogeologic units making up the upper basalt confined aquifer. Approximately 90% of the calculated transmissivity values for upper basalt confined aquifer hydrogeologic units occur within the range of 10^0 to 10^2 m^2/d, with 65% of the calculated estimates occurring between 10^1 and 10^2 m^2/d. These summary findings are consistent with the general range of values previously reported for basalt interflow contact zones and sedimentary interbeds within the Saddle Mountains Basalt.

  5. Rescaling quality of life values from discrete choice experiments for use as QALYs: a cautionary tale

    PubMed Central

    Flynn, Terry N; Louviere, Jordan J; Marley, Anthony AJ; Coast, Joanna; Peters, Tim J

    2008-01-01

    Background: Researchers are increasingly investigating the potential for ordinal tasks such as ranking and discrete choice experiments to estimate QALY health state values. However, the assumptions of random utility theory, which underpin the statistical models used to provide these estimates, have received insufficient attention. In particular, the assumptions made about the decisions between living states and the death state are not satisfied, at least for some people. Estimated values are likely to be incorrectly anchored with respect to death (zero) in such circumstances. Methods: Data from the Investigating Choice Experiments for the preferences of older people CAPability instrument (ICECAP) valuation exercise were analysed. The values (previously anchored to the worst possible state) were rescaled using an ordinal model proposed previously to estimate QALY-like values. Bootstrapping was conducted to vary artificially the proportion of people who conformed to the conventional random utility model underpinning the analyses. Results: Only 26% of respondents conformed unequivocally to the assumptions of conventional random utility theory. At least 14% of respondents unequivocally violated the assumptions. Varying the relative proportions of conforming respondents in sensitivity analyses led to large changes in the estimated QALY values, particularly for lower-valued states. As a result these values could be either positive (considered to be better than death) or negative (considered to be worse than death). Conclusion: Use of a statistical model such as conditional (multinomial) regression to anchor quality of life values from ordinal data to death is inappropriate in the presence of respondents who do not conform to the assumptions of conventional random utility theory. This is clearest when estimating values for that group of respondents observed in valuation samples who refuse to consider any living state to be worse than death: in such circumstances the model cannot be estimated. Only a valuation task requiring respondents to make choices in which both length and quality of life vary can produce estimates that properly reflect the preferences of all respondents. PMID:18945358

  6. The Implications of Summer Learning Loss for Value-Added Estimates of Teacher Effectiveness

    ERIC Educational Resources Information Center

    Gershenson, Seth; Hayes, Michael S.

    2018-01-01

    School districts across the United States increasingly use value-added models (VAMs) to evaluate teachers. In practice, VAMs typically rely on lagged test scores from the previous academic year, which necessarily conflate summer with school-year learning and potentially bias estimates of teacher effectiveness. We investigate the practical…

  7. Decentralization and the Composition of Public Expenditures

    DTIC Science & Technology

    2012-01-01

    decentralized governance and expenditure composition by means of a distance-sensitive representative agent model. Then we estimate the impact of fiscal...countries with regards to population age structure. We are not certain of what effects that may have in our estimates, but previous studies have found...variables to estimate a scalar value for g(xβ), which is then multiplied by each variable's coefficient. For this, we choose the mean values of the

  8. Estimating SPT-N Value Based on Soil Resistivity using Hybrid ANN-PSO Algorithm

    NASA Astrophysics Data System (ADS)

    Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Standard Penetration Resistance (N value) is used in many empirical geotechnical engineering formulas. Meanwhile, soil resistivity is a measure of soil's resistance to electrical flow. For a particular site, usually only limited N value data are available. In contrast, resistivity data can be obtained extensively. Moreover, previous studies showed evidence of a correlation between N value and resistivity value. Yet, no existing method is able to interpret resistivity data for estimation of N value. Thus, the aim is to develop a method for estimating N value using resistivity data. This study proposes a hybrid Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) method to estimate N value using resistivity data. Five different ANN-PSO models based on five boreholes were developed and analyzed. The performance metrics used were the coefficient of determination, R^2, and the mean absolute error, MAE. Analysis of the results found that this method can estimate N value (best R^2 = 0.85 and best MAE = 0.54) given that the constraint, Δl̄_ref, is satisfied. The results suggest that the ANN-PSO method can be used to estimate N value with good accuracy.
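
    A minimal sketch of the hybrid idea follows: a one-hidden-layer network whose weights are trained by particle swarm optimization rather than backpropagation. The layer sizes, PSO coefficients, and the synthetic resistivity/N-value data are assumptions for illustration, not the authors' configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: resistivity (input) vs. SPT N-value (target).
    x = rng.uniform(10.0, 500.0, size=(40, 1))
    y = 0.05 * x + rng.normal(0.0, 2.0, size=(40, 1))  # hypothetical relation

    n_in, n_hidden = 1, 5
    n_weights = n_in * n_hidden + n_hidden + n_hidden + 1  # W1, b1, W2, b2

    def predict(w, x):
        W1 = w[:n_hidden].reshape(n_in, n_hidden)
        b1 = w[n_hidden:2 * n_hidden]
        W2 = w[2 * n_hidden:3 * n_hidden].reshape(n_hidden, 1)
        b2 = w[-1]
        return np.tanh(x / 100.0 @ W1 + b1) @ W2 + b2

    def mse(w):
        return float(np.mean((predict(w, x) - y) ** 2))

    # Standard PSO update: inertia + cognitive + social terms.
    n_particles, iters = 30, 200
    pos = rng.normal(0.0, 1.0, size=(n_particles, n_weights))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([mse(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        cost = np.array([mse(p) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()

    print(f"final training MSE: {min(pbest_cost):.3f}")
    ```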

  9. Brain Tissue Compartment Density Estimated Using Diffusion-Weighted MRI Yields Tissue Parameters Consistent With Histology

    PubMed Central

    Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi

    2015-01-01

    We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation distribution and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (Diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639

  10. Valuing Insect Pollination Services with Cost of Replacement

    PubMed Central

    Allsopp, Mike H.; de Lange, Willem J.; Veldtman, Ruan

    2008-01-01

    Value estimates of ecosystem goods and services are useful to justify the allocation of resources towards conservation, but inconclusive estimates risk unsustainable resource allocations. Here we present replacement costs as a more accurate value estimate of insect pollination as an ecosystem service, although this method could also be applied to other services. The importance of insect pollination to agriculture is unequivocal. However, whether this service is largely provided by wild pollinators (genuine ecosystem service) or managed pollinators (commercial service), and which of these requires immediate action amidst reports of pollinator decline, remains contested. If crop pollination is used to argue for biodiversity conservation, clear distinction should be made between values of managed- and wild pollination services. Current methods either under-estimate or over-estimate the pollination service value, and make use of criticised general insect and managed pollinator dependence factors. We apply the theoretical concept of ascribing a value to a service by calculating the cost to replace it, as a novel way of valuing wild and managed pollination services. Adjusted insect and managed pollinator dependence factors were used to estimate the cost of replacing insect- and managed pollination services for the Western Cape deciduous fruit industry of South Africa. Using pollen dusting and hand pollination as suitable replacements, we value pollination services significantly higher than current market prices for commercial pollination, although lower than traditional proportional estimates. The complexity associated with inclusive value estimation of pollination services required several defendable assumptions, but made estimates more inclusive than previous attempts. Consequently this study provides the basis for continued improvement in context specific pollination service value estimates. PMID:18781196

  11. The Dynamics of Glomerular Ultrafiltration in the Rat

    PubMed Central

    Brenner, Barry M.; Troy, Julia L.; Daugharty, Terrance M.

    1971-01-01

    Using a unique strain of Wistar rats endowed with glomeruli situated directly on the renal cortical surface, we measured glomerular capillary pressures using servo-nulling micropipette transducer techniques. Pressures in 12 glomerular capillaries from 7 rats averaged 60 cm H2O, or approximately 50% of mean systemic arterial values. Wave form characteristics for these glomerular capillaries were found to be remarkably similar to those of the central aorta. From similarly direct estimates of hydrostatic pressures in proximal tubules, and colloid osmotic pressures in systemic and efferent arteriolar plasmas, the net driving force for ultrafiltration was calculated. The average value of 14 cm H2O is some two-thirds lower than the majority of previously reported estimates based on indirect techniques. Single nephron GFR (glomerular filtration rate) was also measured in these rats, thereby permitting calculation of the glomerular capillary ultrafiltration coefficient. The average value of 0.044 nl sec^-1 (cm H2O)^-1 per glomerulus is at least fourfold greater than previous estimates derived from indirect observations. PMID:5097578
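
    In symbols, the reported quantities relate through the standard ultrafiltration expressions; the back-of-the-envelope rearrangement below simply checks their internal consistency and is not a computation from the paper.

    ```latex
    \bar{P}_{UF} = \bar{P}_{GC} - \bar{P}_{T} - \bar{\pi}_{GC} \approx 14\ \mathrm{cm\,H_2O},
    \qquad
    K_f = \frac{\mathrm{SNGFR}}{\bar{P}_{UF}}
    ```

    Rearranging, the reported K_f of 0.044 nl sec^-1 (cm H2O)^-1 per glomerulus together with the 14 cm H2O net driving force implies SNGFR ≈ 0.044 × 14 ≈ 0.6 nl/sec (about 37 nl/min) per glomerulus.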

  12. Asteroid mass estimation with Markov-chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Siltala, L.; Granvik, M.

    2017-09-01

    We have developed a new Markov-chain Monte Carlo-based algorithm for asteroid mass estimation based on mutual encounters and tested it for several different asteroids. Our results are in line with previous literature values but suggest that uncertainties of prior estimates may be misleading as a consequence of using linearized methods.

  13. Data challenges in estimating the capacity value of solar photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Lastly, our analysis also suggests that multiple years' historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.

  14. Data Challenges in Estimating the Capacity Value of Solar Photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Our analysis also suggests that multiple years' historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.

  15. Data challenges in estimating the capacity value of solar photovoltaics

    DOE PAGES

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    2017-04-30

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Lastly, our analysis also suggests that multiple years' historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.

  16. Improved Estimates of Thermodynamic Parameters

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  17. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has superior performance: faster convergence and an enhanced signal-to-noise ratio.
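
    A minimal sketch of the spread-screened averaging step described above; the quantile-based threshold rule and the synthetic fields are assumptions for illustration rather than the paper's exact criterion.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic spatially varying posterior parameter estimates and ensemble spread.
    posterior_param = rng.normal(2.0, 0.3, size=(180, 360))
    ensemble_spread = rng.uniform(0.0, 1.0, size=(180, 360))

    # Keep only "good" grid points, where the ensemble spread is small,
    # then average them into a single global uniform posterior value.
    good = ensemble_spread < np.quantile(ensemble_spread, 0.2)
    global_posterior = posterior_param[good].mean()
    print(f"global uniform posterior parameter: {global_posterior:.3f}")
    ```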

  18. Application of an automatic approach to calibrate the NEMURO nutrient-phytoplankton-zooplankton food web model in the Oyashio region

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi

    2010-10-01

    The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrient, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the use of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.

  19. Estimating quality weights for EQ-5D health states with the time trade-off method in South Korea.

    PubMed

    Jo, Min-Woo; Yun, Sung-Cheol; Lee, Sang-Il

    2008-12-01

    To estimate quality weights of EQ-5D health states with the time trade-off (TTO) method in the general population of South Korea. A total of 500 respondents valued 42 hypothetical EQ-5D health states using the TTO and visual analog scale. The quality weights for all EQ-5D health states were estimated by a random effects model and compared with those from studies in other countries. Overall estimated quality weights for all EQ-5D health states from this study were highly correlated with those from previous studies, but quality weights of individual states were substantially different from those of their corresponding states in other studies. The Korean value set differed from value sets from other countries. Special caution is needed when a value set from one country is applied to another with a different culture.

  20. Method for detection and correction of errors in speech pitch period estimates

    NASA Technical Reports Server (NTRS)

    Bhaskar, Udaya (Inventor)

    1989-01-01

    A method of detecting and correcting received values of a pitch period estimate of a speech signal for use in a speech coder or the like. An average is calculated of the nonzero values of the pitch period estimates received since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct; if not, a correction process is carried out. If correction is required successively more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average is calculated.
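
    A compact sketch of the detection rule described above; the correction step (substituting the running average) and the reset count are illustrative choices where the summary leaves details open.

    ```python
    def validate_pitch(estimates, max_bad=3):
        """Accept pitch estimates within 0.75-1.25x the running average of
        accepted nonzero values; otherwise substitute the average (an assumed
        correction). Reset the average after max_bad consecutive corrections."""
        accepted, total, count, bad = [], 0.0, 0, 0
        for p in estimates:
            avg = total / count if count else None
            if avg is None or (p and 0.75 * avg <= p <= 1.25 * avg):
                value, bad = p, 0
            else:
                value, bad = avg, bad + 1
                if bad > max_bad:            # likely a speaker change: reset
                    total, count, bad = 0.0, 0, 0
                    value = p
            if value:                        # track nonzero accepted values
                total, count = total + value, count + 1
            accepted.append(value)
        return accepted

    print(validate_pitch([50, 52, 51, 90, 51, 49]))  # the 90 is corrected to ~51
    ```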

  1. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with those estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to the reference methods, the relationship between the different patient groups was similar.

  2. Improvement of determinating seafloor benchmark position with large-scale horizontal heterogeneity in the ocean area

    NASA Astrophysics Data System (ADS)

    Uemura, Y.; Tadokoro, K.; Matsuhiro, K.; Ikuta, R.

    2015-12-01

    The most critical issue limiting the accuracy of the GPS/Acoustic seafloor positioning technique is the large-scale thermal gradient of the sound-speed structure [Muto et al., 2008] caused by ocean currents. For example, the Kuroshio Current, near our observation station, forms such a structure. To improve the accuracy of the seafloor benchmark position (SBP), we need either to measure the structure directly and frequently, or to estimate it from the travel time residual. For the former, we repeatedly measure the sound speed at the Kuroshio axis using an Underway CTD and try to apply it in the seafloor positioning analysis [Yasuda et al., 2015 AGU meeting]. For the latter, however, the structure has so far not been estimated successfully from the travel time residual. Accordingly, in this study, we focus on the azimuthal dependence of the Estimated Mean Sound-Speed (EMSS). EMSS is defined as the distance between the vessel position and the estimated SBP divided by the travel time. If a thermal gradient exists and the SBP is true, EMSS should show azimuthal dependence under the assumption of a horizontally layered sound-speed structure made in our previous analysis method. We use the data at KMC, located on the central part of the Nankai Trough, Japan, on Jan. 28, 2015, because on that day KMC was on the north edge of the Kuroshio, where we expect a thermal gradient to exist. In our analysis method, the hyperparameter (μ value) weights the travel time residual against the rate of change of the sound-speed structure. However, EMSS derived from the μ value determined by Ikuta et al. [2008] shows no azimuthal dependence; that is, we cannot estimate the thermal gradient, and we therefore expect the SBP to have a large bias. In this study, we thus use another μ value and examine whether EMSS shows azimuthal dependence or not. With the μ value of this study, which is 1 order of magnitude smaller than the previous value, EMSS shows an azimuthal dependence that is consistent with the thermal gradient on the observation day. This result shows that we can estimate the thermal gradient adequately. The resulting SBP is displaced 25.6 cm to the north and 11.8 cm to the east compared to the previous SBP. This displacement reduces the bias of the SBP and the RMS of the horizontal component of the time series to 1/3. Therefore, redetermining the μ value is suitable for determining the SBP when a thermal gradient exists on the observation day and EMSS shows azimuthal dependence.

  3. A 'demand side' estimate of the dollar value of the cannabis black market in New Zealand.

    PubMed

    Wilkins, Chris; Bhatta, Krishna; Casswell, Sally

    2002-06-01

    The dollar value of an illicit drug market is an important statistic in drug policy analysis. It can be used to illustrate the scale of the trade in a drug; evaluate its impact on a local community or nation; provide an indication of the level of criminality related to a drug; and can inform discussions of future drug policy options. This paper calculates the first ever demand side estimates of the New Zealand cannabis black market. The estimates produced are calculated using cannabis consumption data from the Alcohol & Public Health Research Unit's (APHRU) 1998 National Drug Survey. The wholesale value of the market is estimated to be 81.3-104.6 million dollars a year, and the retail value of the market is estimated to be 131.3-168.9 million dollars a year. These demand side estimates are much lower than the existing supply side estimates of the market calculated using police seizures of cannabis plants. The retail figure is four times lower than the lowest national supply side estimate (636 million dollars) and seven times lower than the highest national supply side estimate (1.27 billion dollars). The demand side estimates suggest a much smaller cannabis economy to fuel organized criminal activity in New Zealand than previous estimates implied.

  4. Measuring global monopole velocities, one by one

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl

    We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods, and present a new one. This new method estimates individual global monopole velocities in a network, by means of detecting each monopole position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.

  5. Overconfidence in Interval Estimates: What Does Expertise Buy You?

    ERIC Educational Resources Information Center

    McKenzie, Craig R. M.; Liersch, Michael J.; Yaniv, Ilan

    2008-01-01

    People's 90% subjective confidence intervals typically contain the true value about 50% of the time, indicating extreme overconfidence. Previous results have been mixed regarding whether experts are as overconfident as novices. Experiment 1 examined interval estimates from information technology (IT) professionals and UC San Diego (UCSD) students…

  6. Turning Fiction Into Non-fiction for Signal-to-Noise Ratio Estimation -- The Time-Multiplexed and Adaptive Split-Symbol Moments Estimator

    NASA Astrophysics Data System (ADS)

    Simon, M.; Dolinar, S.

    2005-08-01

    A means is proposed for realizing the generalized split-symbol moments estimator (SSME) of signal-to-noise ratio (SNR), i.e., one whose implementation allows, on average, a number of subdivisions (observables) 2L per symbol beyond the conventional value of two, including non-integer values of L. In theory, the generalized SSME was previously shown to yield optimum performance for a given true SNR, R, when L = R/sqrt(2); thus, in general, the resulting estimator was referred to as the fictitious SSME. Here we present a time-multiplexed version of the SSME that allows it to achieve its optimum value of L as above (to the extent that it can be computed as the average of a sum of integers) at each value of SNR, and as such turns fiction into non-fiction. Also proposed is an adaptive algorithm that allows the SSME to rapidly converge to its optimum value of L when in fact one has no a priori information about the true value of SNR.
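
    The core split-symbol moment idea can be sketched with the conventional two halves per symbol: the product of the two half-symbol observables estimates signal power, while their difference isolates noise. This toy BPSK example does not attempt the generalized, time-multiplexed estimator of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    true_snr = 4.0                    # per-half-symbol SNR (linear), assumed
    n_symbols = 20000
    s = rng.choice([-1.0, 1.0], n_symbols)        # BPSK symbol values
    sigma = np.sqrt(1.0 / true_snr)
    u = s + rng.normal(0.0, sigma, n_symbols)     # first-half observable
    v = s + rng.normal(0.0, sigma, n_symbols)     # second-half observable

    # E[u*v] = signal power; E[(u-v)^2]/2 = per-half noise power,
    # since the signal cancels in the difference of the two halves.
    signal_power = np.mean(u * v)
    noise_power = 0.5 * np.mean((u - v) ** 2)
    print(f"estimated SNR: {signal_power / noise_power:.2f} (true {true_snr})")
    ```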

  7. Genetic Correlations Between Carcass Traits And Molecular Breeding Values In Angus Cattle

    USDA-ARS?s Scientific Manuscript database

    This research elucidated genetic relationships between carcass traits, ultrasound indicator traits, and their respective molecular breeding values (MBV). Animals whose MBV data were used to estimate (co)variance components were not previously used in development of the MBV. Results are presented fo...

  8. Dissolution and analysis of amorphous silica in marine sediments.

    USGS Publications Warehouse

    Eggimann, D.W.; Manheim, F. T.; Betzer, P.R.

    1980-01-01

    The analytical estimation of amorphous silica in selected Atlantic and Antarctic Ocean sediments, the U.S.G.S. standard marine mud (MAG-1), A.A.P.G. clays, and samples from cultures of a marine diatom, Hemidiscus, has been examined. Our values for amorphous silica-rich circum-Antarctic sediments are equal to or greater than literature values, whereas our values for a set of amorphous silica-poor sediments from a transect of the N. Atlantic at 11°N, after appropriate correction for silica released from clays, are significantly lower than previous estimates from the same region. -from Authors

  9. First Nuclear DNA Amounts in more than 300 Angiosperms

    PubMed Central

    ZONNEVELD, B. J. M.; LEITCH, I. J.; BENNETT, M. D.

    2005-01-01

    • Background and Aims Genome size (DNA C-value) data are key biodiversity characters of fundamental significance used in a wide variety of biological fields. Since 1976, Bennett and colleagues have made scattered published and unpublished genome size data more widely accessible by assembling them into user-friendly compilations. Initially these were published as hard copy lists, but since 1997 they have also been made available electronically (see the Plant DNA C-values database www.kew.org/cval/homepage.html). Nevertheless, at the Second Plant Genome Size Meeting in 2003, Bennett noted that as many as 1000 DNA C-value estimates were still unpublished and hence unavailable. Scientists were strongly encouraged to communicate such unpublished data. The present work combines the databasing experience of the Kew-based authors with the unpublished C-values produced by Zonneveld to make a large body of valuable genome size data available to the scientific community. • Methods C-values for angiosperm species, selected primarily for their horticultural interest, were estimated by flow cytometry using the fluorochrome propidium iodide. The data were compiled into a table whose form is similar to previously published lists of DNA amounts by Bennett and colleagues. • Key Results and Conclusions The present work contains C-values for 411 taxa including first values for 308 species not listed previously by Bennett and colleagues. Based on a recent estimate of the global published output of angiosperm DNA C-value data (i.e. 200 first C-value estimates per annum) the present work equals 1·5 years of average global published output; and constitutes over 12 % of the latest 5-year global target set by the Second Plant Genome Size Workshop (see www.kew.org/cval/workshopreport.html). Hopefully, the present example will encourage others to unveil further valuable data which otherwise may lie forever unpublished and unavailable for comparative analyses. PMID:15905300

  10. THE UNIQUE VALUE OF BREATH BIOMARKERS FOR ESTIMATING PHARMACOKINETIC RATE CONSTANTS AND BODY BURDEN FROM RANDOM/INTERMITTENT DOSE

    EPA Science Inventory

    Biomarker measurements are used in three ways: 1) evaluating the time course and distribution of a chemical in the body, 2) estimating previous exposure or dose, and 3) assessing disease state. Blood and urine measurements are the primary methods employed. Of late, it has been ...

  11. Dose estimates for the 1104 m APS storage ring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moe, H.J.

    1989-06-01

    The estimated dose equivalent rates outside the shielded storage ring, and the estimated annual dose equivalent to members of the public due to direct radiation and skyshine from the ring, have been recalculated. The previous estimates found in LS-84 (MOE 87) and cited in the 1987 Conceptual Design Report of the APS (ANL 87) required revision because of changes in the ring circumference and in the proposed location of the ring with respect to the nearest site boundary. The values assumed for the neutron quality factors were also overestimated (by a factor of 2) in the previous computation, and the correct values have been used for this estimate. The methodology used to compute dose and dose rate from the storage ring is the same as that used in LS-90 (MOE 87a). The calculations assumed 80 cm thick walls of ordinary concrete (or the shielding equivalent of this) and a roof thickness of 1 meter of ordinary concrete. The circumference of the ring was increased to 1,104 m, and the closest distance to the boundary was taken as 140 m. The recalculation of the skyshine component used the same methodology as that used in LS-84.

  12. A means to estimate thermal and kinetic parameters of coal dust layer from hot surface ignition tests.

    PubMed

    Park, Haejun; Rangwala, Ali S; Dembsey, Nicholas A

    2009-08-30

    A method to estimate thermal and kinetic parameters of Pittsburgh seam coal subject to thermal runaway is presented using the standard ASTM E 2021 hot surface ignition test apparatus. Parameters include the thermal conductivity (k), activation energy (E), and coupled term (QA) of the heat of reaction (Q) and pre-exponential factor (A), which are required, but rarely known, input values for determining the thermal runaway propensity of a dust material. Four different dust layer thicknesses, 6.4, 12.7, 19.1 and 25.4 mm, are tested, and among them a single steady state temperature profile of the 12.7 mm thick dust layer is used to estimate k, E and QA. k is calculated by equating the heat flux from the hot surface layer and the heat loss rate on the boundary, assuming negligible heat generation in the coal dust layer at a low hot surface temperature. E and QA are calculated by optimizing a numerically estimated steady state dust layer temperature distribution to the experimentally obtained temperature profile of the 12.7 mm thick dust layer. The two unknowns, E and QA, are reduced to one from the correlation of E and QA obtained at the criticality of thermal runaway. The estimated k is 0.1 W/(m·K), matching the previously reported value. E ranges from 61.7 to 83.1 kJ/mol, and the corresponding QA ranges from 1.7 x 10^9 to 4.8 x 10^11 J/(kg·s). The mean values of E (72.4 kJ/mol) and QA (2.8 x 10^10 J/(kg·s)) are used to predict the critical hot surface temperatures for the other thicknesses, and good agreement is observed between predicted and measured values. Also, the estimated E and QA ranges match the corresponding ranges calculated from the multiple tests method and values reported in previous research.

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
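
    For reference, the iterative procedure in question is what is now usually called the EM algorithm for a normal mixture; a minimal one-dimensional sketch (two components, synthetic data) is shown below.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

    # Initial guesses for mixing weights, means, and variances.
    w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

    for _ in range(100):
        # E-step: posterior responsibility of each component for each point.
        dens = w / np.sqrt(2 * np.pi * var) * np.exp(
            -(data[:, None] - mu) ** 2 / (2 * var)
        )
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: reestimate parameters from responsibility-weighted moments.
        n_k = resp.sum(axis=0)
        w = n_k / len(data)
        mu = (resp * data[:, None]).sum(axis=0) / n_k
        var = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / n_k

    print("weights:", w.round(2), "means:", mu.round(2), "vars:", var.round(2))
    ```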

  14. Patterns of Reinforcement and the Essential Values of Brands: I. Incorporation of Utilitarian and Informational Reinforcement into the Estimation of Demand

    ERIC Educational Resources Information Center

    Yan, Ji; Foxall, Gordon R.; Doyle, John R.

    2012-01-01

    Essential value is defined by Hursh and Silberberg (2008) as the value of reinforcers, presented in an exponential model (Equation 1). This study extends previous research concerned with animal behavior or human responding in therapeutic situations. We applied 9 available demand curves to consumer data that included 10,000+ data points collected…

  15. Analyses for precision reduced optical observations from the international satellite geodesy experiment (ISAGEX)

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Douglas, B. C.; Klosko, S. M.

    1973-01-01

    During the time period of December 1970 to September 1971 an International Satellite Geodesy Experiment (ISAGEX) was conducted. Over fifty optical and laser tracking stations participated in the data gathering portion of this experiment. Data from some of the stations had not been previously available for dynamical orbit computations. With the recent availability of new data from the Astrosoviet, East European and other optical stations, orbital analyses were conducted to ensure compatibility with the previously available laser data. These data have also been analyzed using dynamical orbital techniques for the estimation of geocentric coordinates for six camera stations (four Astrosoviet, two East European). Thirteen arcs of GEOS-1 and 2 observations between two and four days in length were used. The uncertainty in these new station values is considered to be about 20 meters in each coordinate. Adjustments to the previously available values were generally a few hundred meters. With these geocentric coordinates, these data will now be used to supplement earth physics investigations during ISAGEX.

  16. Efficacy of Cardiopulmonary Resuscitation in the Microgravity Environment

    NASA Technical Reports Server (NTRS)

    Johnston, Smith L.; Campbell, Mark R.; Billica, Roger D.; Gilmore, Stevan M.

    2001-01-01

    End-tidal carbon dioxide (EtCO2) has previously been shown to be an effective non-invasive tool for estimating cardiac output during cardiopulmonary resuscitation (CPR). Animal models have shown that this diagnostic adjunct can be used as a predictor of survival when EtCO2 values are maintained above 25% of prearrest values.

  17. Quantifying thermohaline circulations: seawater isotopic compositions and salinity as proxies of the ratio between advection time and evaporation time

    NASA Astrophysics Data System (ADS)

    Paldor, N.; Berman, H.; Lazar, B.

    2017-12-01

    Uncertainties in quantitative estimates of the thermohaline circulation in any particular basin are large, partly due to large uncertainties in quantifying excess evaporation over precipitation and surface velocities. A single nondimensional parameter, γ = (qx)/(hu), is proposed to characterize the "strength" of the thermohaline circulation by combining the physical parameters of surface velocity (u), evaporation rate (q), mixed layer depth (h), and trajectory length (x). Values of γ can be estimated directly from cross-sections of salinity or seawater isotopic composition (δ18O and δD). Estimates of γ in the Red Sea and the South-West Indian Ocean are 0.1 and 0.02, respectively, which implies that the thermohaline contribution to the circulation in the former is higher than in the latter. Once the value of γ has been determined in a particular basin, either q or u can be estimated from known values of the remaining parameters. In the studied basins such estimates are consistent with previous studies.
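
    The proposed parameter is a direct ratio, so computing it is one line; the numbers below are illustrative magnitudes, not values from the study.

    ```python
    # gamma = (q * x) / (h * u): evaporation rate times trajectory length over
    # mixed-layer depth times surface velocity (consistent units assumed).
    q = 2.0e-7   # evaporation rate, m/s (illustrative)
    x = 1.0e6    # trajectory length, m
    h = 50.0     # mixed layer depth, m
    u = 0.05     # surface velocity, m/s
    gamma = (q * x) / (h * u)
    print(f"gamma = {gamma:.2f}")   # ~0.08, the same order as the Red Sea estimate
    ```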

  18. The application of the statistical theory of extreme values to gust-load problems

    NASA Technical Reports Server (NTRS)

    Press, Harry

    1950-01-01

    An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problems of predicting the frequency of encountering the larger gust loads and gust velocities for both specific test conditions as well as commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses. (author)
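
    A minimal modern sketch of the approach: fit the extreme-value (Gumbel) distribution to a sample of per-flight maximum gust loads and read off a return level. The synthetic data and the 1000-flight return period are assumptions for illustration.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic per-flight maximum gust loads (arbitrary units).
    max_loads = stats.gumbel_r.rvs(loc=2.0, scale=0.4, size=500, random_state=42)

    loc, scale = stats.gumbel_r.fit(max_loads)
    # Load expected to be exceeded once in 1000 flights.
    load_1000 = stats.gumbel_r.ppf(1.0 - 1.0 / 1000.0, loc, scale)
    print(f"fitted loc={loc:.2f}, scale={scale:.2f}, "
          f"1000-flight load={load_1000:.2f}")
    ```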

  19. Partitioning of fluorotelomer alcohols to octanol and different sources of dissolved organic carbon.

    PubMed

    Carmosini, Nadia; Lee, Linda S

    2008-09-01

    Interest in the environmental fate of fluorotelomer alcohols (FTOHs) has spurred efforts to understand their equilibrium partitioning behavior. Experimentally determined partition coefficients for FTOHs between soil/water and air/water have been reported, but direct measurements of partition coefficients for dissolved organic carbon (DOC)/water (K_doc) and octanol/water (K_ow) have been lacking. Here we measured the partitioning of 8:2 and 6:2 FTOH between one or more types of DOC and water using enhanced solubility or dialysis bag techniques, and also quantified K_ow values for 4:2 to 8:2 FTOH using a batch equilibration method. The range in measured log K_doc values for 8:2 FTOH using the enhanced solubility technique with DOC derived from two soils, two biosolids, and three reference humic acids is 2.00-3.97, with the lowest values obtained for the biosolids and an average across all other DOC sources (biosolid DOC excluded) of 3.54 ± 0.29. For 6:2 FTOH and Aldrich humic acid, a log K_doc value of 1.96 ± 0.45 was measured using the dialysis technique. These average values are approximately 1 to 2 log units lower than previously indirectly estimated K_doc values. Overall, the affinity for DOC tends to be slightly lower than that for particulate soil organic carbon. Measured log K_ow values for 4:2 (3.30 ± 0.04), 6:2 (4.54 ± 0.01), and 8:2 FTOH (5.58 ± 0.06) were in good agreement with previously reported estimates. Using relationships between experimentally measured partition coefficients and C-atom chain length, we estimated K_doc and K_ow values for shorter and longer chain FTOHs, respectively, that we were unable to measure experimentally.
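
    The chain-length relationship mentioned above can be reproduced directly from the three measured log K_ow values quoted in the abstract; the extrapolation target (10:2 FTOH) is an illustration of the stated approach, and the predicted value is hypothetical, not from the paper.

    ```python
    import numpy as np

    # Measured log K_ow vs. number of fluorinated carbons (from the abstract).
    n_fc = np.array([4, 6, 8])            # 4:2, 6:2, 8:2 FTOH
    log_kow = np.array([3.30, 4.54, 5.58])

    slope, intercept = np.polyfit(n_fc, log_kow, 1)
    log_kow_10_2 = intercept + slope * 10  # illustrative extrapolation to 10:2
    print(f"slope = {slope:.2f} per fluorinated carbon; "
          f"predicted 10:2 log K_ow ~ {log_kow_10_2:.2f}")
    ```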

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hess, Peter

    An improved microscopic cleavage model, based on a Morse-type and Lennard-Jones-type interaction instead of the previously employed half-sine function, is used to determine the maximum cleavage strength for the brittle materials diamond, tungsten, molybdenum, silicon, GaAs, silica, and graphite. The results of both interaction potentials are in much better agreement with the theoretical strength values obtained by ab initio calculations for diamond, tungsten, molybdenum, and silicon than the previous model. Reasonable estimates of the intrinsic strength are presented for GaAs, silica, and graphite, where first principles values are not available.

  1. Autoregressive linear least square single scanning electron microscope image signal-to-noise ratio estimation.

    PubMed

    Sim, Kok Swee; NorHisham, Syafiq

    2016-11-01

    A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique for SNR estimation, a number of SEM images are initially corrupted with white noise. The autocorrelation functions (ACF) of the original and the corrupted SEM images are formed to serve as the reference point for estimating the SNR value of the corrupted image. The LSR technique is then compared with three existing techniques known as nearest neighbourhood, first-order interpolation, and the combination of both nearest neighbourhood and first-order interpolation. The actual and the estimated SNR values of all these techniques are then calculated for comparison purposes. It is shown that the LSR technique attains the highest accuracy of the four techniques, as the absolute difference between the actual and the estimated SNR value is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
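
    A rough sketch of the general idea as described: extrapolate the autocorrelation function back to zero lag with a least-squares line to separate signal power from the white-noise spike at lag 0. The lag window and the synthetic 1-D "scan line" are assumptions; real implementations operate on the ACF of actual SEM image lines.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic scan line: smooth signal plus white noise.
    n = 4096
    signal = np.convolve(rng.normal(size=n), np.ones(25) / 25, mode="same")
    noisy = signal + rng.normal(0.0, 0.1, n)

    def acf(x, max_lag):
        x = x - x.mean()
        m = len(x)
        return np.array([np.dot(x[: m - k], x[k:]) / m for k in range(max_lag)])

    r = acf(noisy, max_lag=6)
    # White noise contributes only at lag 0, so fit a line through lags 1..5
    # and extrapolate to lag 0 to estimate the noise-free signal power.
    slope, intercept = np.polyfit(np.arange(1, 6), r[1:6], 1)
    signal_power = intercept              # extrapolated ACF value at zero lag
    noise_power = r[0] - signal_power     # the lag-0 spike is the noise power
    print(f"estimated SNR: {signal_power / noise_power:.2f}")
    ```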

  2. Challenges Associated with Estimating Utility in Wet Age-Related Macular Degeneration: A Novel Regression Analysis to Capture the Bilateral Nature of the Disease.

    PubMed

    Hodgson, Robert; Reason, Timothy; Trueman, David; Wickstead, Rose; Kusel, Jeanette; Jasilek, Adam; Claxton, Lindsay; Taylor, Matthew; Pulikottil-Jacob, Ruth

    2017-10-01

    The estimation of utility values for the economic evaluation of therapies for wet age-related macular degeneration (AMD) is a particular challenge. Previous economic models in wet AMD have been criticized for failing to capture the bilateral nature of wet AMD by modelling visual acuity (VA) and utility values associated with the better-seeing eye only. Here we present a de novo regression analysis using generalized estimating equations (GEE) applied to a previous dataset of time trade-off (TTO)-derived utility values from a sample of the UK population that wore contact lenses to simulate visual deterioration in wet AMD. This analysis allows utility values to be estimated as a function of VA in both the better-seeing eye (BSE) and worse-seeing eye (WSE). VAs in both the BSE and WSE were found to be statistically significant (p < 0.05) when regressed separately. When included without an interaction term, only the coefficient for VA in the BSE was significant (p = 0.04), but when an interaction term between VA in the BSE and WSE was included, only the constant term (mean TTO utility value) was significant, potentially a result of the collinearity between the VA of the two eyes. The lack of both formal model-fit statistics from the GEE approach and theoretical knowledge to support the superiority of one model over another makes it difficult to select the best model. Limitations of this analysis arise from the potential influence of collinearity between the VA of both eyes, and from the use of contact lenses to reflect VA states in obtaining the original dataset. Whilst further research is required to elicit more accurate utility values for wet AMD, this novel regression analysis provides a possible source of utility values to allow future economic models to capture the quality-of-life impact of changes in VA in both eyes. Funding: Novartis Pharmaceuticals UK Limited.
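
    A sketch of how such a GEE model can be fit with standard tooling (statsmodels), using hypothetical subject-level TTO data; the variable names, coefficients, and the exchangeable working correlation are assumptions, not the authors' specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(6)

    # Hypothetical repeated TTO valuations: several health states per subject.
    n_subj, n_states = 50, 6
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_states),
        "va_bse": rng.uniform(0.0, 1.0, n_subj * n_states),  # better-seeing eye VA
        "va_wse": rng.uniform(0.0, 1.0, n_subj * n_states),  # worse-seeing eye VA
    })
    df["utility"] = (0.4 + 0.4 * df.va_bse + 0.1 * df.va_wse
                     + rng.normal(0.0, 0.05, len(df)))

    exog = sm.add_constant(df[["va_bse", "va_wse"]])
    model = sm.GEE(df["utility"], exog, groups=df["subject"],
                   family=sm.families.Gaussian(),
                   cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())
    ```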

  3. Serum proteins by capillary zone electrophoresis: approaches to the definition of reference values.

    PubMed

    Petrini, C; Alessio, M G; Scapellato, L; Brambilla, S; Franzini, C

    1999-10-01

    The Paragon CZE 2000 (Beckman Analytical, Milan, Italy) is an automatic dedicated capillary zone electrophoresis (CZE) system, producing a five-zone serum protein pattern with quantitative estimation of the zones. With the view of substituting this instrument for two previously used serum protein electrophoresis techniques, we planned to produce reference values for the "new" systems leading to compatible interpretation of the results. High resolution cellulose acetate electrophoresis with visual inspection and descriptive reporting (HR-CAE) and five-zone cellulose acetate electrophoresis with densitometry (CAE-D) were the previously used techniques. Serum samples (n = 167) giving "normal pattern" with HR-CAE were assayed with the CZE system, and the results were statistically assessed to yield 0.95 reference intervals. One thousand normal and pathological serum samples were then assayed with the CAE-D and the CZE techniques, and the regression equations of the CAE-D values over the CZE values for the five zones were used to transform the CAE-D reference limits into the CZE reference limits. The two sets of reference values thereby produced were in good agreement with each other and also with reference values previously reported for the CZE system. Thus, reference values for the CZE techniques permit interpretation of results coherent with the previously used techniques and reporting modes.

  4. A New Estimate of North American Mountain Snow Accumulation From Regional Climate Model Simulations

    NASA Astrophysics Data System (ADS)

    Wrzesien, Melissa L.; Durand, Michael T.; Pavelsky, Tamlin M.; Kapnick, Sarah B.; Zhang, Yu; Guo, Junyi; Shum, C. K.

    2018-02-01

    Despite the importance of mountain snowpack to understanding the water and energy cycles in North America's montane regions, no reliable mountain snow climatology exists for the entire continent. We present a new estimate of mountain snow water equivalent (SWE) for North America from regional climate model simulations. Climatological peak SWE in North American mountains is 1,006 km3, 2.94 times larger than previous estimates from reanalyses. By combining this mountain SWE value with the best available global product in nonmountain areas, we estimate a peak North American SWE of 1,684 km3, 55% greater than previous estimates. In our simulations, the date of maximum SWE varies widely by mountain range, from early March to mid-April. Though mountains comprise 24% of the continent's land area, we estimate that they contain 60% of North American SWE. This new estimate is a suitable benchmark for continental- and global-scale water and energy budget studies.

  5. Estimation of skin entrance doses (SEDs) for common medical X-ray diagnostic examinations in India and proposed diagnostic reference levels (DRLs).

    PubMed

    Sonawane, A U; Shirva, V K; Pradhan, A S

    2010-02-01

    Skin entrance doses (SEDs) were estimated by carrying out measurements of air kerma from 101 X-ray machines installed in 45 major and selected hospitals in the country by using a silicon detector-based dose Test-O-Meter. A total of 1209 air kerma measurements of diagnostic projections for adults were analysed for seven types of common diagnostic examinations, viz. chest (AP, PA, LAT), lumbar spine (AP, LAT), thoracic spine (AP, LAT), abdomen (AP), pelvis (AP), hip joints (AP) and skull (PA, LAT) for different film-screen combinations. The values of estimated diagnostic reference levels (DRLs) (third quartile values of SEDs) were compared with guidance levels/DRLs of doses published by the IAEA-BSS-Safety Series No. 115, 1996; HPA (NRPB) (2000 and 2005), UK; CRCPD/CDRH (USA), European Commission and other national values. The values of DRLs obtained in this study are comparable with the values published by the IAEA-BSS-115 (1996); HPA (NRPB) (2000 and 2005) UK; EC and CRCPD/CDRH, USA, as well as with values obtained in previous studies in India.
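    The DRL derivation described above (third quartile of the SED distribution per examination) reduces to a one-line percentile calculation; the sketch below uses synthetic doses, not the survey's measurements.

    ```python
    # Sketch: proposed DRL = 3rd quartile of the SED distribution per exam.
    import numpy as np

    rng = np.random.default_rng(1)
    sed = {  # mGy per projection; synthetic log-normal spreads for illustration
        "chest PA": rng.lognormal(mean=-0.8, sigma=0.5, size=150),
        "lumbar spine AP": rng.lognormal(mean=1.6, sigma=0.5, size=120),
        "abdomen AP": rng.lognormal(mean=1.4, sigma=0.5, size=100),
    }
    drl = {exam: round(float(np.percentile(x, 75)), 2) for exam, x in sed.items()}
    print(drl)
    ```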

  6. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.

  7. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
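    For context, the traditional mono-exponential back-extrapolation that the proposed method replaces can be sketched as follows; the dose and concentration values are synthetic, and the paper's optimal method would adjust the extrapolation using the physiological kinetics model rather than a straight log-linear fit.

    ```python
    # Sketch of the traditional method: fit log(concentration) vs time on
    # early samples, extrapolate to t = 0, then plasma volume = dose / C0.
    import numpy as np

    dose_mg = 25.0
    t = np.array([2.0, 3.0, 4.0, 5.0])        # min post-injection
    conc = np.array([9.1, 7.8, 6.7, 5.8])     # mg/L, synthetic ICG decay

    slope, intercept = np.polyfit(t, np.log(conc), 1)
    c0 = np.exp(intercept)                    # back-extrapolated C(0)
    plasma_volume_l = dose_mg / c0
    print(f"C0 = {c0:.2f} mg/L, PV = {plasma_volume_l:.2f} L")
    ```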

  8. Effects of linking a soil-water-balance model with a groundwater-flow model

    USGS Publications Warehouse

    Stanton, Jennifer S.; Ryter, Derek W.; Peterson, Steven M.

    2013-01-01

    A previously published regional groundwater-flow model in north-central Nebraska was sequentially linked with the recently developed soil-water-balance (SWB) model to analyze effects to groundwater-flow model parameters and calibration results. The linked models provided a more detailed spatial and temporal distribution of simulated recharge based on hydrologic processes, improvement of simulated groundwater-level changes and base flows at specific sites in agricultural areas, and a physically based assessment of the relative magnitude of recharge for grassland, nonirrigated cropland, and irrigated cropland areas. Root-mean-squared (RMS) differences between the simulated and estimated or measured target values for the previously published model and linked models were relatively similar and did not improve for all types of calibration targets. However, without any adjustment to the SWB-generated recharge, the RMS difference between simulated and estimated base-flow target values for the groundwater-flow model was slightly smaller than for the previously published model, possibly indicating that the volume of recharge simulated by the SWB code was closer to actual hydrogeologic conditions than the previously published model provided. Groundwater-level and base-flow hydrographs showed that temporal patterns of simulated groundwater levels and base flows were more accurate for the linked models than for the previously published model at several sites, particularly in agricultural areas.

  9. Environmental Health: A Look at the Cost of Air Pollution

    ERIC Educational Resources Information Center

    Brennan, A. J. J.

    1973-01-01

    Previous estimates of the cost of air pollution seem to fall short of the true societal cost. Without trying to place a dollar value on the aesthetic loss and psychological pressures air pollution incurs, the author feels that $47 billion constitutes the annual bill for pollution. Pollution abatement and prevention costs are estimated to be $8.45…

  10. Potential remobilization of belowground permafrost carbon under future global warming

    Treesearch

    P. Kuhry; E. Dorrepaal; G. Hugelius; E.A.G. Schuur; C. Tarnocai

    2010-01-01

    Research on permafrost carbon has dramatically increased in the past few years. A new estimate of 1672 Pg C of belowground organic carbon in the northern circumpolar permafrost region more than doubles the previous value and highlights the potential role of permafrost carbon in the Earth System. Uncertainties in this new estimate remain due to relatively few available...

  11. On the Deficiencies of the Ferricinium-Ferrocene Redox Couple for Estimating Transfer Energies of Single Ions.

    DTIC Science & Technology

    1980-08-15

    breakdown of the "ferrocene assumption" for estimating the transfer thermodynamics of single ions. I 4 Experimental Most solvents were Aldrich " Gold ...solvent "donor number" DN1 6 (Table I). A similar finding has been noted previously for monoatomic cations. 1 5 The small negative value of - Sc+) in water

  12. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    PubMed Central

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727

  13. Monaural room acoustic parameters from music and speech.

    PubMed

    Kendrick, Paul; Cox, Trevor J; Li, Francis F; Zhang, Yonggang; Chambers, Jonathon A

    2008-07-01

    This paper compares two methods for extracting room acoustic parameters from reverberated speech and music. An approach which uses statistical machine learning, previously developed for speech, is extended to work with music. For speech, reverberation time estimations are within a perceptual difference limen of the true value. For music, virtually all early decay time estimations are within a difference limen of the true value. The estimation accuracy is not good enough in other cases due to differences between the simulated data set used to develop the empirical model and real rooms. The second method carries out a maximum likelihood estimation on decay phases at the end of notes or speech utterances. This paper extends the method to estimate parameters relating to the balance of early and late energies in the impulse response. For reverberation time and speech, the method provides estimations which are within the perceptual difference limen of the true value. For other parameters such as clarity, the estimations are not sufficiently accurate due to the natural reverberance of the excitation signals. Speech is a better test signal than music because of the greater periods of silence in the signal, although music is needed for low frequency measurement.

  14. Sensitivity of NTCP parameter values against a change of dose calculation algorithm.

    PubMed

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-01

    Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
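    As a rough illustration of the refitting step described above, the sketch below uses the Lyman-Kutcher-Burman NTCP form (one common choice; the abstract does not name this specific model) and finds the TD50 that makes a collapsed-cone dose metric reproduce the pencil-beam NTCP. All numbers are synthetic.

    ```python
    # Sketch: refit TD50 so the CC dose metric gives the same NTCP as PB.
    from scipy.stats import norm
    from scipy.optimize import brentq

    def ntcp(geud, td50, m):
        """Lyman-Kutcher-Burman NTCP as a probit of the gEUD."""
        return norm.cdf((geud - td50) / (m * td50))

    m = 0.37                        # slope parameter (illustrative)
    td50_pb = 30.0                  # Gy, parameter used with pencil-beam doses
    geud_pb, geud_cc = 18.0, 16.5   # synthetic gEUDs from the two algorithms

    target = ntcp(geud_pb, td50_pb, m)   # NTCP from the clinical (PB) plan
    # TD50 that reproduces the same NTCP for the collapsed-cone dose:
    td50_cc = brentq(lambda td50: ntcp(geud_cc, td50, m) - target, 5.0, 100.0)
    print(f"TD50 shifts from {td50_pb:.1f} to {td50_cc:.1f} Gy")
    ```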

  15. Sensitivity of NTCP parameter values against a change of dose calculation algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-15

    Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.

  16. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.

    1994-01-01

    Thermal stress analyses are an important aspect in the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which consequently requires accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy undertaken utilizes a building block approach. The idea here is to first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of more and more complex systems, such as anisotropic multi-component systems. The estimation methodology utilized is a statistically based method which incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the time of the ASEE summer program. One important aspect involved the calibration of the estimation procedure for the estimation of the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and then the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, an offset of 10-15 percent between the estimated values and the previously determined values was found. Another effort was related to the development of the experimental techniques. Initial experiments required a resistance heater placed between two samples. The design was modified such that the heater was placed on the surface of only one sample, as would be necessary in the analysis of built up structures. Experiments using the modified technique were conducted on the composite sample used previously at different temperatures. The results were within 5 percent of those found using two samples. Finally, an initial heat transfer analysis, including conduction, convection and radiation components, was completed on a titanium sandwich structural sample. Experiments utilizing this sample are currently being designed and will be used to first estimate the material's effective thermal conductivity and later to determine the properties associated with each individual heat transfer component.

  17. Model-Based IN SITU Parameter Estimation of Ultrasonic Guided Waves in AN Isotropic Plate

    NASA Astrophysics Data System (ADS)

    Hall, James S.; Michaels, Jennifer E.

    2010-02-01

    Most ultrasonic systems employing guided waves for flaw detection require information such as dispersion curves, transducer locations, and expected propagation loss. Degraded system performance may result if assumed parameter values do not accurately reflect the actual environment. By characterizing the propagating environment in situ at the time of test, potentially erroneous a priori estimates are avoided and performance of ultrasonic guided wave systems can be improved. A four-part model-based algorithm is described in the context of previous work; it estimates model parameters by fitting an assumed propagation model to the received signals. This approach builds upon previous work by demonstrating the ability to estimate parameters for the case of single mode propagation. Performance is demonstrated on signals obtained from theoretical dispersion curves, finite element modeling, and experimental data.

  18. Value of Information Analysis for Time-lapse Seismic Data by Simulation-Regression

    NASA Astrophysics Data System (ADS)

    Dutta, G.; Mukerji, T.; Eidsvik, J.

    2016-12-01

    A novel method to estimate the Value of Information (VOI) of time-lapse seismic data in the context of reservoir development is proposed. VOI is a decision analytic metric quantifying the incremental value that would be created by collecting information prior to making a decision under uncertainty. The VOI has to be computed before collecting the information and can be used to justify its collection. Previous work on estimating the VOI of geophysical data has involved explicit approximation of the posterior distribution of reservoir properties given the data and then evaluating the prospect values for that posterior distribution of reservoir properties. Here, we propose to directly estimate the prospect values given the data by building a statistical relationship between them using regression. Various regression techniques such as Partial Least Squares Regression (PLSR), Multivariate Adaptive Regression Splines (MARS) and k-Nearest Neighbors (k-NN) are used to estimate the VOI, and the results compared. For a univariate Gaussian case, the VOI obtained from simulation-regression has been shown to be close to the analytical solution. Estimating VOI by simulation-regression is much less computationally expensive since the posterior distribution of reservoir properties given each possible dataset need not be modeled and the prospect values need not be evaluated for each such posterior distribution of reservoir properties. This method is flexible, since it does not require rigid model specification of posterior but rather fits conditional expectations non-parametrically from samples of values and data.
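    A toy version of the simulation-regression idea can be sketched as follows: simulate prospect values and data from a prior, fit E[value | data] non-parametrically (here with k-NN, one of the regressions named above), and compare the expected payoff of deciding with and without the data. The Gaussian prior, noise level, and develop-or-walk-away decision are illustrative assumptions.

    ```python
    # Toy VOI-by-simulation-regression sketch (all distributions assumed).
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(2)
    n = 20000
    v = rng.normal(0.0, 1.0, n)        # simulated prospect values (prior draws)
    y = v + rng.normal(0.0, 0.5, n)    # simulated time-lapse "data"

    # Fit the conditional expectation E[v | y] directly from the samples.
    knn = KNeighborsRegressor(n_neighbors=200).fit(y.reshape(-1, 1), v)
    v_hat = knn.predict(y.reshape(-1, 1))

    value_with_info = np.mean(np.maximum(v_hat, 0.0))  # decide after seeing y
    value_without = max(v.mean(), 0.0)                 # decide on the prior alone
    print(f"VOI ~ {value_with_info - value_without:.3f}")
    ```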

  19. Optimal estimation for global ground-level fine particulate matter concentrations

    NASA Astrophysics Data System (ADS)

    Donkelaar, Aaron; Martin, Randall V.; Spurr, Robert J. D.; Drury, Easan; Remer, Lorraine A.; Levy, Robert C.; Wang, Jun

    2013-06-01

    We develop an optimal estimation (OE) algorithm based on top-of-atmosphere reflectances observed by the MODIS satellite instrument to retrieve near-surface fine particulate matter (PM2.5). The GEOS-Chem chemical transport model is used to provide prior information for the Aerosol Optical Depth (AOD) retrieval and to relate total column AOD to PM2.5. We adjust the shape of the GEOS-Chem relative vertical extinction profiles by comparison with lidar retrievals from the CALIOP satellite instrument. Surface reflectance relationships used in the OE algorithm are indexed by land type. Error quantities needed for this OE algorithm are inferred by comparison with AOD observations taken by a worldwide network of sun photometers (AERONET) and extended globally based upon aerosol speciation and cross correlation for simulated values, and upon land type for observational values. Significant agreement in PM2.5 is found over North America for 2005 (slope = 0.89; r = 0.82; 1-σ error = 1 µg/m3 + 27%), with improved coverage and correlation relative to previous work for the same region and time period, although certain subregions, such as the San Joaquin Valley of California, are better represented by previous estimates. Independently derived error estimates of the OE PM2.5 values at in situ locations of ±(2.5 µg/m3 + 31%) over North America and ±(3.5 µg/m3 + 30%) over Europe are corroborated by comparison with in situ observations, although global error estimates of ±(3.0 µg/m3 + 35%) may be underestimated. Global population-weighted PM2.5 at 50% relative humidity is estimated as 27.8 µg/m3 at 0.1° × 0.1° resolution.

  20. Assessing the likely value of gravity and drawdown measurements to constrain estimates of hydraulic conductivity and specific yield during unconfined aquifer testing

    USGS Publications Warehouse

    Blainey, Joan B.; Ferré, Ty P.A.; Cordova, Jeffrey T.

    2007-01-01

    Pumping of an unconfined aquifer can cause local desaturation detectable with high‐resolution gravimetry. A previous study showed that signal‐to‐noise ratios could be predicted for gravity measurements based on a hydrologic model. We show that although changes should be detectable with gravimeters, estimations of hydraulic conductivity and specific yield based on gravity data alone are likely to be unacceptably inaccurate and imprecise. In contrast, a transect of low‐quality drawdown data alone resulted in accurate estimates of hydraulic conductivity and inaccurate and imprecise estimates of specific yield. Combined use of drawdown and gravity data, or use of high‐quality drawdown data alone, resulted in unbiased and precise estimates of both parameters. This study is an example of the value of a staged assessment regarding the likely significance of a new measurement method or monitoring scenario before collecting field data.

  1. Method and System for Temporal Filtering in Video Compression Systems

    NASA Technical Reports Server (NTRS)

    Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim

    2011-01-01

    Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining a first motion vector from the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from the first pixel position in the first image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of a fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.

  2. Mixing effects on apparent reaction rates and isotope fractionation during denitrification in a heterogeneous aquifer

    USGS Publications Warehouse

    Green, Christopher T.; Böhlke, John Karl; Bekins, Barbara A.; Phillips, Steven P.

    2010-01-01

    Gradients in contaminant concentrations and isotopic compositions commonly are used to derive reaction parameters for natural attenuation in aquifers. Differences between field‐scale (apparent) estimated reaction rates and isotopic fractionations and local‐scale (intrinsic) effects are poorly understood for complex natural systems. For a heterogeneous alluvial fan aquifer, numerical models and field observations were used to study the effects of physical heterogeneity on reaction parameter estimates. Field measurements included major ions, age tracers, stable isotopes, and dissolved gases. Parameters were estimated for the O2 reduction rate, denitrification rate, O2 threshold for denitrification, and stable N isotope fractionation during denitrification. For multiple geostatistical realizations of the aquifer, inverse modeling was used to establish reactive transport simulations that were consistent with field observations and served as a basis for numerical experiments to compare sample‐based estimates of “apparent” parameters with “true” (intrinsic) values. For this aquifer, non‐Gaussian dispersion reduced the magnitudes of apparent reaction rates and isotope fractionations to a greater extent than Gaussian mixing alone. Apparent and true rate constants and fractionation parameters can differ by an order of magnitude or more, especially for samples subject to slow transport, long travel times, or rapid reactions. The effect of mixing on apparent N isotope fractionation potentially explains differences between previous laboratory and field estimates. Similarly, predicted effects on apparent O2 threshold values for denitrification are consistent with previous reports of higher values in aquifers than in the laboratory. These results show that hydrogeological complexity substantially influences the interpretation and prediction of reactive transport.

  3. (U-Th)/He ages of phosphates from Zagami and ALHA77005 Martian meteorites: Implications to shock temperatures

    NASA Astrophysics Data System (ADS)

    Min, Kyoungwon; Farah, Annette E.; Lee, Seung Ryeol; Lee, Jong Ik

    2017-01-01

    Shock conditions of Martian meteorites provide crucial information about ejection dynamics and original features of the Martian rocks. To better constrain equilibrium shock temperatures (Tequi-shock) of Martian meteorites, we investigated (U-Th)/He systematics of moderately-shocked (Zagami) and intensively shocked (ALHA77005) Martian meteorites. Multiple phosphate aggregates from Zagami and ALHA77005 yielded overall (U-Th)/He ages of 92.2 ± 4.4 Ma (2σ) and 8.4 ± 1.2 Ma, respectively. These ages correspond to fractional losses of 0.49 ± 0.03 (Zagami) and 0.97 ± 0.01 (ALHA77005), assuming that the ejection-related shock event at ∼3 Ma is solely responsible for diffusive helium loss since crystallization. For He diffusion modeling, the diffusion domain radius is estimated based on detailed examination of fracture patterns in phosphates using a scanning electron microscope. For Zagami, the diffusion domain radius is estimated to be ∼2-9 μm, which is generally consistent with calculations from isothermal heating experiments (1-4 μm). For ALHA77005, the diffusion domain radius of ∼4-20 μm is estimated. Using the newly constrained (U-Th)/He data, diffusion domain radii, and other previously estimated parameters, the conductive cooling models yield Tequi-shock estimates of 360-410 °C and 460-560 °C for Zagami and ALHA77005, respectively. According to the sensitivity test, the estimated Tequi-shock values are relatively robust to input parameters. The Tequi-shock estimates for Zagami are more robust than those for ALHA77005, primarily because Zagami yielded an intermediate fHe value (0.49) compared to ALHA77005 (0.97). For less intensively shocked Zagami, the He diffusion-based Tequi-shock estimates (this study) are significantly higher than expected from previously reported Tpost-shock values. For intensively shocked ALHA77005, the two independent approaches yielded generally consistent results. Using two other examples of previously studied Martian meteorites (ALHA84001 and Los Angeles), we compared Tequi-shock and Tpost-shock estimates. For intensively shocked meteorites (ALHA77005, Los Angeles), the He diffusion-based approach yields Tequi-shock values that are slightly higher than or consistent with estimations from Tpost-shock, and the discrepancy between the two methods increases as the intensity of shock increases. The reason for the discrepancy between the two methods, particularly for less-intensively shocked meteorites (Zagami, ALHA84001), remains to be resolved, but we prefer the He diffusion-based approach because its Tequi-shock estimates are relatively robust to input parameters.

  4. Estimation of nonmarket benefits of agricultural land retention in Eastern Canada

    Treesearch

    J. Michael Bowker; D.D. Didychuk

    1994-01-01

    We assess the nonmarket value for retention of farmland in the Moneton area of New Brunswick. We examine a number of factors explaining household external values for farmland preservation and expand on previous work by Beasley et al., Bergstrom et al., and Halstead. Our findings indicate that the marginal external benefit of preserving farmland in general in this...

  5. The Economic Value of Breastfeeding (With Results from Research Conducted in Ghana and the Ivory Coast). Cornell International Nutrition Monograph Series Number 6.

    ERIC Educational Resources Information Center

    Greiner, Ted; And Others

    This monograph focuses attention on economic considerations related to infant feeding practices in developing countries. By enlarging on previous methodologies, this paper proposes to improve the accuracy of past estimates of the economic value of human milk, or more specifically, the practice of breastfeeding. The theoretical model employed…

  6. A Secure Trust Establishment Scheme for Wireless Sensor Networks

    PubMed Central

    Ishmanov, Farruh; Kim, Sung Won; Nam, Seung Yeob

    2014-01-01

    Trust establishment is an important tool to improve cooperation and enhance security in wireless sensor networks. The core of trust establishment is trust estimation. If a trust estimation method is not robust against attack and misbehavior, the trust values produced will be meaningless, and system performance will be degraded. We present a novel trust estimation method that is robust against on-off attacks and persistent malicious behavior. Moreover, in order to aggregate recommendations securely, we propose using a modified one-step M-estimator scheme. The novelty of the proposed scheme arises from combining past misbehavior with current status in a comprehensive way. Specifically, we introduce an aggregated misbehavior component in trust estimation, which assists in detecting an on-off attack and persistent malicious behavior. In order to determine the current status of the node, we employ previous trust values and current measured misbehavior components. These components are combined to obtain a robust trust value. Theoretical analyses and evaluation results show that our scheme performs better than other trust schemes in terms of detecting an on-off attack and persistent misbehavior. PMID:24451471

  7. Group Additivity Determination for Oxygenates, Oxonium Ions, and Oxygen-Containing Carbenium Ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dellon, Lauren D.; Sung, Chun-Yi; Robichaud, David J.

    Bio-oil produced from biomass fast pyrolysis often requires catalytic upgrading to remove oxygen and acidic species over zeolite catalysts. The elementary reactions in the mechanism for this process involve carbenium and oxonium ions. In order to develop a detailed kinetic model for the catalytic upgrading of biomass, rate constants are required for these elementary reactions. The parameters in the Arrhenius equation can be related to thermodynamic properties through structure-reactivity relationships, such as the Evans-Polanyi relationship. For this relationship, enthalpies of formation of each species are required, which can be reasonably estimated using group additivity. However, the literature previously lacked group additivity values for oxygenates, oxonium ions, and oxygen-containing carbenium ions. In this work, 71 group additivity values for these types of groups were regressed, 65 of which had not been reported previously and six of which were newly estimated based on regression in the context of the 65 new groups. Heats of formation based on atomization enthalpy calculations for a set of reference molecules and isodesmic reactions for a small set of larger species for which experimental data was available were used to demonstrate the accuracy of the Gaussian-4 quantum mechanical method in estimating enthalpies of formation for species involving the moieties of interest. Isodesmic reactions for a total of 195 species were constructed from the reference molecules to calculate enthalpies of formation that were used to regress the group additivity values. The results showed an average deviation of 1.95 kcal/mol between the values calculated from Gaussian-4 and isodesmic reactions versus those calculated from the group additivity values that were newly regressed. Importantly, the new groups enhance the database for group additivity values, especially those involving oxonium ions.
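    The group additivity idea itself is a simple sum over group contributions, as the sketch below shows; the group names and values are generic Benson-style placeholders, not the 71 values regressed in this work.

    ```python
    # Sketch of group additivity: enthalpy of formation as a sum of group
    # contributions. Group values below are illustrative placeholders.
    GROUP_VALUES = {          # kcal/mol, illustrative only
        "C-(C)(H)3": -10.2,
        "C-(C)2(H)2": -4.9,
        "C-(C)(O)(H)2": -8.1,
        "O-(C)(H)": -37.9,
    }

    def enthalpy_of_formation(groups):
        """Sum group contributions; `groups` maps group name -> count."""
        return sum(GROUP_VALUES[g] * n for g, n in groups.items())

    # e.g. 1-propanol, CH3-CH2-CH2-OH, decomposed into four groups:
    propanol = {"C-(C)(H)3": 1, "C-(C)2(H)2": 1, "C-(C)(O)(H)2": 1, "O-(C)(H)": 1}
    print(f"dHf ~ {enthalpy_of_formation(propanol):.1f} kcal/mol")
    ```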

  8. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    PubMed

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

    Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.

  9. On the ab initio evaluation of Hubbard parameters. II. The κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal

    NASA Astrophysics Data System (ADS)

    Fortunelli, Alessandro; Painelli, Anna

    1997-05-01

    A previously proposed approach for the ab initio evaluation of Hubbard parameters is applied to BEDT-TTF dimers. The dimers are positioned according to four geometries taken as the first neighbors from the experimental data on the κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal. RHF-SCF, CAS-SCF and frozen-orbital calculations using the 6-31G** basis set are performed with different values of the total charge, allowing us to derive all the relevant parameters. It is found that the electronic structure of the BEDT-TTF planes is adequately described by the standard Extended Hubbard Model, with the off-diagonal electron-electron interaction terms (X and W) of negligible size. The derived parameters are in good agreement with available experimental data. Comparison with previous theoretical estimates shows that the t values compare well with those obtained from Extended Hückel Theory (whereas the minimal basis set estimates are completely unreliable). On the other hand, the Uaeff values exhibit an appreciable dependence on the chemical environment.

  10. In vivo estimates of NO and CO conductance for haemoglobin and for lung transfer in humans.

    PubMed

    Guénard, Hervé Jean-Pierre; Martinot, Jean-Benoit; Martin, Sebastien; Maury, Bertrand; Lalande, Sophie; Kays, Christian

    2016-07-01

    Membrane conductance (Dm) and capillary lung volume (Vc) derived from NO and CO lung transfer measurements in humans depend on the blood conductance (θ) values of both gases. Many θ values have been proposed in the literature. In the present study, measurements of CO and NO transfer while breathing 15% or 21% O2 allowed the estimation of θNO and the calculation of the optimal equation relating 1/θCO to pulmonary capillary oxygen pressure (PcapO2). In 10 healthy subjects, the mean calculated θNO value was similar to the θNO value previously reported in the literature (4.5mmHgmin(-1)) provided that one among three θCO equations from the literature was chosen. Setting 1/θCO=a·PcapO2+b, optimal values of a and b could be chosen using two methods: 1) by minimizing the difference between Dm/Vc ratios for any PcapO2, 2) by establishing a linear equation relating a and b. Using these methods, we are proposing the equation 1/θCO=0.0062·PcapO2+1.16, which is similar to two equations previously reported in the literature. With this set of θ values, DmCO reached the morphometric range. Copyright © 2016 Elsevier B.V. All rights reserved.
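    A sketch of how such θ values are used: with the Roughton-Forster relation 1/DL = 1/Dm + 1/(θ·Vc) written for both gases, the 1/θCO equation proposed above, and the commonly assumed ratio DmNO/DmCO ≈ 1.97 (an assumption in this sketch, not a result of the paper), Dm and Vc follow from a 2x2 linear solve. Input values are synthetic.

    ```python
    # Sketch: recover DmCO and Vc from combined NO/CO transfer measurements.
    import numpy as np

    dl_co, dl_no = 30.0, 140.0   # mL/min/mmHg, synthetic measurements
    pcap_o2 = 100.0              # mmHg
    theta_no = 4.5               # literature value cited above
    theta_co = 1.0 / (0.0062 * pcap_o2 + 1.16)   # from 1/theta = a*Pcap + b
    alpha = 1.97                 # assumed DmNO/DmCO ratio

    # Unknowns x = [1/DmCO, 1/Vc]:
    #   1/DLCO = x0 + x1/thetaCO
    #   1/DLNO = x0/alpha + x1/thetaNO
    A = np.array([[1.0, 1.0 / theta_co],
                  [1.0 / alpha, 1.0 / theta_no]])
    b = np.array([1.0 / dl_co, 1.0 / dl_no])
    inv_dm_co, inv_vc = np.linalg.solve(A, b)
    print(f"DmCO = {1/inv_dm_co:.1f} mL/min/mmHg, Vc = {1/inv_vc:.1f} mL")
    ```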

  11. An improved method for LCD displays colorimetric characterization

    NASA Astrophysics Data System (ADS)

    Li, Tong; Xie, Kai; Wang, Qiaojie; He, Nannan; Ye, Yushan

    2018-03-01

    Colorimetric characterization of a display makes it possible to control the monitor's color precisely. This paper describes an improvement of the method of Xiao et al. for estimating the gamma value of liquid-crystal displays (LCDs) without using a measurement device. The method relies on the observer's luminance matching, presenting eight half-tone patterns with luminance from 1/9 to 8/9 of the maximum value of each color channel. Since the previous method lacked some low-frequency information, we partially replaced the half-tone patterns. A large number of experiments show that the color difference is reduced from 3.726 to 2.835, and that our half-tone patterns better estimate the visual gamma value of LCDs.
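    The underlying gamma estimate can be sketched as follows: if the observer matches each half-tone pattern of fractional luminance k/9 to a digital level d_k, gamma follows from fitting k/9 = (d_k/255)^gamma in log-log space. The matching data below are synthetic and noiseless.

    ```python
    # Sketch of visual gamma estimation from luminance-matching data.
    import numpy as np

    k = np.arange(1, 9)                # patterns at 1/9 ... 8/9 luminance
    true_gamma = 2.2
    d_matched = 255 * (k / 9.0) ** (1 / true_gamma)  # synthetic observer matches

    # Least-squares fit through the origin: log(k/9) = gamma * log(d/255)
    x = np.log(d_matched / 255.0)
    y = np.log(k / 9.0)
    gamma = np.sum(x * y) / np.sum(x ** 2)
    print(f"estimated gamma = {gamma:.2f}")
    ```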

  12. RESEARCH: An Ecoregional Approach to the Economic Valuation of Land- and Water-Based Recreation in the United States

    PubMed

    Bhat; Bergstrom; Teasley; Bowker; Cordell

    1998-01-01

    This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characters. The individual travel cost method was employed to estimate recreation demand functions for activities such as motor boating and waterskiing, developed and primitive camping, coldwater fishing, sightseeing and pleasure driving, and big game hunting for each ecoregion. While our ecoregional approach differs conceptually from previous work, our results appear consistent with the previous travel cost method valuation studies. KEY WORDS: Recreation; Ecoregion; Travel cost method; Truncated Poisson model

  13. Robust Diagnosis Method Based on Parameter Estimation for an Interturn Short-Circuit Fault in Multipole PMSM under High-Speed Operation.

    PubMed

    Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo

    2015-11-20

    This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short circuit fault. Previous works in this area have suffered from the uncertainties of the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. The proposed method also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase, the values of G and Lq. For this reason, two open-loop observers and an optimization method based on a particle-swarm are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq besides exhibiting robustness against parameter uncertainties.

  14. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation so can be expensive in models with a large computational cost.

  15. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018

  16. Cancellation of factorials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zudilin, W W

    2001-08-31

    An arithmetical property allowing an improvement of some number-theoretic estimates is studied. Previous results were mostly qualitative. Application of quantitative results of the paper to the class of generalized hypergeometric G-functions extends the set of irrational numbers representable as values of these functions.

  17. Direct system parameter identification of mechanical structures with application to modal analysis

    NASA Technical Reports Server (NTRS)

    Leuridan, J. M.; Brown, D. L.; Allemang, R. J.

    1982-01-01

    In this paper a method is described to estimate mechanical structure characteristics in terms of mass, stiffness and damping matrices using measured force input and response data. The estimated matrices can be used to calculate a consistent set of damped natural frequencies and damping values, mode shapes and modal scale factors for the structure. The proposed technique is attractive as an experimental modal analysis method since the estimation of the matrices does not require previous estimation of frequency responses and since the method can be used, without any additional complications, for multiple force input structure testing.

  18. Reconsidering the use of rankings in the valuation of health states: a model for estimating cardinal values from ordinal data

    PubMed Central

    Salomon, Joshua A

    2003-01-01

    Background In survey studies on health-state valuations, ordinal ranking exercises often are used as precursors to other elicitation methods such as the time trade-off (TTO) or standard gamble, but the ranking data have not been used in deriving cardinal valuations. This study reconsiders the role of ordinal ranks in valuing health and introduces a new approach to estimate interval-scaled valuations based on aggregate ranking data. Methods Analyses were undertaken on data from a previously published general population survey study in the United Kingdom that included rankings and TTO values for hypothetical states described using the EQ-5D classification system. The EQ-5D includes five domains (mobility, self-care, usual activities, pain/discomfort and anxiety/depression) with three possible levels on each. Rank data were analysed using a random utility model, operationalized through conditional logit regression. In the statistical model, probabilities of observed rankings were related to the latent utilities of different health states, modeled as a linear function of EQ-5D domain scores, as in previously reported EQ-5D valuation functions. Predicted valuations based on the conditional logit model were compared to observed TTO values for the 42 states in the study and to predictions based on a model estimated directly from the TTO values. Models were evaluated using the intraclass correlation coefficient (ICC) between predictions and mean observations, and the root mean squared error of predictions at the individual level. Results Agreement between predicted valuations from the rank model and observed TTO values was very high, with an ICC of 0.97, only marginally lower than for predictions based on the model estimated directly from TTO values (ICC = 0.99). Individual-level errors were also comparable in the two models, with root mean squared errors of 0.503 and 0.496 for the rank-based and TTO-based predictions, respectively. Conclusions Modeling health-state valuations based on ordinal ranks can provide results that are similar to those obtained from more widely analyzed valuation techniques such as the TTO. The information content in aggregate ranking data is not currently exploited to full advantage. The possibility of estimating cardinal valuations from ordinal ranks could also simplify future data collection dramatically and facilitate wider empirical study of health-state valuations in diverse settings and population groups. PMID:14687419
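    The core of the rank model above is the "exploded" conditional logit likelihood: a full ranking is treated as a sequence of first choices from a shrinking choice set. Below is a minimal sketch with hypothetical latent utilities for four health states.

    ```python
    # Sketch of the exploded (rank-ordered) logit log-likelihood.
    import numpy as np

    def ranking_log_likelihood(u, ranking):
        """u: latent utilities per state; ranking: state indices, best first."""
        ll, remaining = 0.0, list(range(len(u)))
        for state in ranking:
            # Probability that `state` is chosen first among those remaining.
            ll += u[state] - np.log(np.sum(np.exp(u[remaining])))
            remaining.remove(state)
        return ll

    u = np.array([1.2, 0.4, -0.3, -1.1])   # e.g. a linear score in EQ-5D domains
    print(ranking_log_likelihood(u, [0, 1, 2, 3]))
    ```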

  19. Costs of food waste along the value chain: evidence from South Africa.

    PubMed

    Nahman, Anton; de Lange, Willem

    2013-11-01

    In a previous paper (Nahman et al., 2012), the authors estimated the costs of household food waste in South Africa, based on the market value of the wasted food (edible portion only), as well as the costs of disposal to landfill. In this paper, we extend the analysis by assessing the costs of edible food waste throughout the entire food value chain, from agricultural production through to consumption at the household level. First, food waste at each stage of the value chain was quantified in physical units (tonnes) for various food commodity groups. Then, weighted average representative prices (per tonne) were estimated for each commodity group at each stage of the value chain. Finally, prices were multiplied by quantities, and the resulting values were aggregated across the value chain for all commodity groups. In this way, the total cost of food waste across the food value chain in South Africa was estimated at R61.5 billion per annum (approximately US$7.7 billion); equivalent to 2.1% of South Africa's annual gross domestic product. The bulk of this cost arises from the processing and distribution stages of the fruit and vegetable value chain, as well as the agricultural production and distribution stages of the meat value chain. These results therefore provide an indication of where interventions aimed at reducing food waste should be targeted. Copyright © 2013 Elsevier Ltd. All rights reserved.
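    The aggregation itself is a price-times-quantity sum over commodity groups and value-chain stages, as in the sketch below; the tonnages and prices are hypothetical placeholders, not the study's estimates.

    ```python
    # Sketch of the cost aggregation: tonnes wasted at each stage times a
    # representative price per tonne at that stage, summed over the chain.
    waste_tonnes = {   # (commodity, stage) -> tonnes per year (hypothetical)
        ("fruit_veg", "processing"): 1_200_000,
        ("fruit_veg", "distribution"): 900_000,
        ("meat", "production"): 150_000,
        ("meat", "distribution"): 80_000,
    }
    price_per_tonne = {  # rand per tonne at that stage (hypothetical)
        ("fruit_veg", "processing"): 4_000,
        ("fruit_veg", "distribution"): 6_500,
        ("meat", "production"): 18_000,
        ("meat", "distribution"): 27_000,
    }
    total_cost = sum(q * price_per_tonne[k] for k, q in waste_tonnes.items())
    print(f"total cost of food waste ~ R{total_cost:,.0f} per annum")
    ```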

  20. Bayesian WLS/GLS regression for regional skewness analysis for regions with large crest stage gage networks

    USGS Publications Warehouse

    Veilleux, Andrea G.; Stedinger, Jery R.; Eash, David A.

    2012-01-01

    This paper summarizes methodological advances in regional log-space skewness analyses that support flood-frequency analysis with the log Pearson Type III (LP3) distribution. A Bayesian Weighted Least Squares/Generalized Least Squares (B-WLS/B-GLS) methodology that relates observed skewness coefficient estimators to basin characteristics in conjunction with diagnostic statistics represents an extension of the previously developed B-GLS methodology. B-WLS/B-GLS has been shown to be effective in two California studies. B-WLS/B-GLS uses B-WLS to generate stable estimators of model parameters and B-GLS to estimate the precision of those B-WLS regression parameters, as well as the precision of the model. The study described here employs this methodology to develop a regional skewness model for the State of Iowa. To provide cost effective peak-flow data for smaller drainage basins in Iowa, the U.S. Geological Survey operates a large network of crest stage gages (CSGs) that only record flow values above an identified recording threshold (thus producing a censored data record). CSGs are different from continuous-record gages, which record almost all flow values and have been used in previous B-GLS and B-WLS/B-GLS regional skewness studies. The complexity of analyzing a large CSG network is addressed by using the B-WLS/B-GLS framework along with the Expected Moments Algorithm (EMA). Because EMA allows for the censoring of low outliers, as well as the use of estimated interval discharges for missing, censored, and historic data, it complicates the calculations of effective record length (and effective concurrent record length) used to describe the precision of sample estimators because the peak discharges are no longer solely represented by single values. Thus new record length calculations were developed. The regional skewness analysis for the State of Iowa illustrates the value of the new B-WLS/B-GLS methodology with these new extensions.

  1. Comparison of thermal and microwave paleointensity estimates in specimens that violate Thellier's laws

    NASA Astrophysics Data System (ADS)

    Grappone, J. M., Jr.; Biggin, A. J.; Barrett, T. J.; Hill, M. J.

    2017-12-01

    Deep in the Earth, thermodynamic behavior drives the geodynamo and creates the Earth's magnetic field. Determining how the strength of the field, its paleointensity (PI), varies with time, is vital to our understanding of Earth's evolution. Thellier-style paleointensity experiments assume the presence of non-interacting, single domain (SD) magnetic particles, which follow Thellier's laws. Most natural rocks however, contain larger, multi-domain (MD) or interacting single domain (ISD) particles that often violate these laws and cause experiments to fail. Even for samples that pass reliability criteria designed to minimize the impact of MD or ISD grains, different PI techniques can give systematically different estimates, implying violation of Thellier's laws. Our goal is to identify any disparities in PI results that may be explainable by protocol-specific MD and ISD behavior and determine optimum methods to maximize accuracy. Volcanic samples from the Hawai'ian SOH1 borehole previously produced method-dependent PI estimates. Previous studies showed consistently lower PI values when using a microwave (MW) system and the perpendicular method than using the original thermal Thellier-Thellier (OT) technique. However, the data were ambiguous regarding the cause of the discrepancy. The diverging estimates appeared to be either the result of using OT instead of the perpendicular method or the result of using MW protocols instead of thermal protocols. Comparison experiments were conducted using the thermal perpendicular method and microwave OT technique to bridge the gap. Preliminary data generally show that the perpendicular method gives lower estimates than OT for comparable Hlab values. MW estimates are also generally lower than thermal estimates using the same protocol.

  2. Super-spinning compact objects and models of high-frequency quasi-periodic oscillations observed in Galactic microquasars

    NASA Astrophysics Data System (ADS)

    Kotrlová, Andrea; Török, Gabriel; Šrámková, Eva; Stuchlík, Zdeněk

    2014-12-01

    We have previously applied several models of high-frequency quasi-periodic oscillations (HF QPOs) to estimate the spin of the central Kerr black hole in the three Galactic microquasars, GRS 1915+105, GRO J1655-40, and XTE J1550-564. Here we explore the alternative possibility that the central compact body is a super-spinning object (or a naked singularity) with the external space-time described by Kerr geometry with a dimensionless spin parameter a ≡ cJ/GM² > 1. We calculate the relevant spin intervals for a subset of HF QPO models considered in the previous study. Our analysis indicates that for all but one of the considered models there exists at least one interval of a > 1 that is compatible with constraints given by the ranges of the central compact object mass independently estimated for the three sources. For most of the models, the inferred values of a are several times higher than the extreme Kerr black hole value a = 1. These values may be too high since the spin of superspinars is often assumed to rapidly decrease due to accretion when a ≫ 1. In this context, we conclude that only the epicyclic and the Keplerian resonance model provides estimates that are compatible with the expectation of just a small deviation from a = 1.

  3. The option value of innovative treatments for non-small cell lung cancer and renal cell carcinoma.

    PubMed

    Thornton Snider, Julia; Batt, Katharine; Wu, Yanyu; Tebeka, Mahlet Gizaw; Seabury, Seth

    2017-10-01

    To develop a model of the option value a therapy provides by enabling patients to live to see subsequent innovations and to apply the model to the case of nivolumab in renal cell carcinoma (RCC) and non-small cell lung cancer (NSCLC). A model of the option value of nivolumab in RCC and NSCLC was developed and estimated. Data from the Surveillance, Epidemiology, and End Results (SEER) cancer registry and published clinical trial results were used to estimate survival curves for metastatic cancer patients with RCC, squamous NSCLC, or nonsquamous NSCLC. To estimate the conventional value of nivolumab, survival with the pre-nivolumab standard of care was compared with survival with nivolumab assuming no future innovation. To estimate the option value of nivolumab, long-term survival trends in RCC and squamous and nonsquamous NSCLC were measured in SEER to forecast mortality improvements that nivolumab patients may live to see. Compared with the previous standard of care, nivolumab extended life expectancy by 6.3 months in RCC, 7.5 months in squamous NSCLC, and 4.5 months in nonsquamous NSCLC, according to conventional methods. Accounting for expected future mortality trends, nivolumab patients are likely to gain an additional 1.2 months in RCC, 0.4 months in squamous NSCLC, and 0.5 months in nonsquamous NSCLC. These option values correspond to 18%, 5%, and 10% of the conventional value of nivolumab, respectively. Option value is important when valuing therapies like nivolumab that extend life in a rapidly evolving area of care.

  4. Calculation of photoionization cross section near auto-ionizing lines and magnesium photoionization cross section near threshold

    NASA Technical Reports Server (NTRS)

    Moore, E. N.; Altick, P. L.

    1972-01-01

    The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.

  5. The Effect of Fentanyl on Bispectral Index (BIS) Values and Recall

    DTIC Science & Technology

    2002-12-01

    BIS Values CHAPTER ONE: INTRODUCTION Anesthesia has three main components known as the anesthesia triad: hypnosis (loss of consciousness), adequate...monitor, the primary way to estimate level of hypnosis was through changes in vital signs and the anesthesia provider’s previous experiences. Many...different EEG patterns. Another reason that EEG is difficult to use for assessing hypnosis is that most anesthesia providers use multiple classes of

  6. Direct evidence that prostate tumors show high sensitivity to fractionation (low alpha/beta ratio), similar to late-responding normal tissue.

    PubMed

    Brenner, David J; Martinez, Alvaro A; Edmundson, Gregory K; Mitchell, Christina; Thames, Howard D; Armour, Elwood P

    2002-01-01

    We take a direct approach to the question of whether prostate tumors have an atypically high sensitivity to fractionation (low alpha/beta ratio), more typical of the surrounding late-responding normal tissue. Earlier estimates of alpha/beta for prostate cancer have relied on comparing results from external beam radiotherapy (EBRT) and brachytherapy, an approach with significant pitfalls due to the many differences between the treatments. To circumvent this, we analyze recent data from a single EBRT + high-dose-rate (HDR) brachytherapy protocol, in which the brachytherapy was given in either 2 or 3 implants, and at various doses. For the analysis, standard models of tumor cure based on Poisson statistics were used in conjunction with the linear-quadratic formalism. Biochemical control at 3 years was the clinical endpoint. Patients were matched between the 3-implant and 2-implant HDR groups by clinical stage, pretreatment prostate-specific antigen (PSA), Gleason score, length of follow-up, and age. The estimated value of alpha/beta from the current analysis, 1.2 Gy (95% CI: 0.03, 4.1 Gy), is consistent with previous estimates for prostate tumor control. This alpha/beta value is considerably less than typical values for tumors (> or = 8 Gy) and more comparable to values in surrounding late-responding normal tissues. This analysis provides strong supporting evidence that alpha/beta values for prostate tumor control are atypically low, as indicated by previous analyses and radiobiological considerations. If true, hypofractionation or HDR regimens for prostate radiotherapy (with appropriate doses) should produce tumor control and late sequelae that are at least as good as or even better than currently achieved, with the added possibility that early sequelae may be reduced.
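
    The practical force of a low alpha/beta ratio is easiest to see through the standard linear-quadratic isoeffect conversion to 2-Gy-equivalent dose (EQD2). A small sketch under that textbook formalism; the dose numbers are illustrative, not taken from the study:

      def eqd2(total_dose, dose_per_fraction, alpha_beta):
          """Equi-effective dose in 2-Gy fractions under the LQ model."""
          return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

      # An illustrative HDR boost of 2 x 9.5 Gy is worth far more 2-Gy
      # equivalents to a tumor with alpha/beta = 1.2 Gy than to a typical
      # tumor with alpha/beta = 10 Gy -- the rationale for hypofractionation.
      print(eqd2(19.0, 9.5, 1.2))    # ~63.5 Gy
      print(eqd2(19.0, 9.5, 10.0))   # ~30.9 Gy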

  7. Field estimates of body drag coefficient on the basis of dives in passerine birds.

    PubMed

    Hedenström, A; Liechti, F

    2001-03-01

    During forward flight, a bird's body generates drag that tends to decelerate it. By flapping its wings, or by converting potential energy into work if gliding, the bird produces both lift and thrust to balance the pull of gravity and drag. In flight mechanics, a dimensionless number, the body drag coefficient (C(D,par)), describes the magnitude of the drag caused by the body. The drag coefficient depends on the shape (or streamlining) and surface texture of the body and on the Reynolds number. It is an important variable when using flight mechanical models to estimate the potential migratory flight range and characteristic flight speeds of birds. Previous wind tunnel measurements on dead, frozen bird bodies indicated that C(D,par) is 0.4 for small birds, while large birds should have lower values of approximately 0.2. More recent studies of a few birds flying in a wind tunnel suggested that previous values probably overestimated C(D,par). We measured maximum dive speeds of passerine birds during the spring migration across the western Mediterranean. When the birds reach their top speed, the pull of gravity should balance the drag of the body (and wings), giving us an opportunity to estimate C(D,par). Our results indicate that C(D,par) decreases with increasing Reynolds number within the range 0.17-0.77, with a mean C(D,par) of 0.37 for small passerines. A somewhat lower mean value could not be excluded because diving birds may control their speed below the theoretical maximum. Our measurements therefore support the notion that 0.4 (the 'old' default value) is a realistic value of C(D,par) for small passerines.
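
    At terminal dive speed the force balance reduces to weight = parasite drag, m g = (1/2) rho v² S_b C(D,par), which can be inverted for the drag coefficient. A minimal sketch, assuming Pennycuick's allometric body frontal area S_b = 0.00813 m^0.666 and illustrative numbers (not values from the study):

      def body_frontal_area(mass_kg):
          """Allometric body frontal area (Pennycuick): S_b = 0.00813 m^0.666."""
          return 0.00813 * mass_kg**0.666

      def drag_coefficient(mass_kg, dive_speed, rho=1.225):
          """C(D,par) from a vertical terminal dive: weight balances drag."""
          g = 9.81
          return 2 * mass_kg * g / (rho * dive_speed**2 * body_frontal_area(mass_kg))

      # e.g. a 20-g passerine diving at 35 m/s gives C(D,par) of about 0.44
      print(round(drag_coefficient(0.02, 35.0), 2))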

  8. Retrospective Assessment of Cost Savings From Prevention

    PubMed Central

    Grosse, Scott D.; Berry, Robert J.; Tilford, J. Mick; Kucik, James E.; Waitzman, Norman J.

    2016-01-01

    Introduction: Although fortification of food with folic acid has been calculated to be cost saving in the U.S., updated estimates are needed. This analysis calculates new estimates from the societal perspective of net cost savings per year associated with mandatory folic acid fortification of enriched cereal grain products in the U.S. that was implemented during 1997-1998. Methods: Estimates of annual numbers of live-born spina bifida cases in 1995-1996 relative to 1999-2011 based on birth defects surveillance data were combined during 2015 with published estimates of the present value of lifetime direct costs updated in 2014 U.S. dollars for a live-born infant with spina bifida to estimate avoided direct costs and net cost savings. Results: The fortification mandate is estimated to have reduced the annual number of U.S. live-born spina bifida cases by 767, with a lower-bound estimate of 614. The present value of mean direct lifetime cost per infant with spina bifida is estimated to be $791,900, or $577,000 excluding caregiving costs. Using a best estimate of numbers of avoided live-born spina bifida cases, fortification is estimated to reduce the present value of total direct costs for each year's birth cohort by $603 million more than the cost of fortification. A lower-bound estimate of cost savings using conservative assumptions, including the upper-bound estimate of fortification cost, is $299 million. Conclusions: The estimates of cost savings are larger than previously reported, even using conservative assumptions. The analysis can also inform assessments of folic acid fortification in other countries. PMID:26790341
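
    The headline figures can be reproduced with one line of arithmetic; the sketch below uses only numbers stated in the abstract (the implied fortification cost is a rough back-calculation, since the published figures are rounded):

      cases_avoided = 767
      cost_per_case = 791_900     # present value of direct lifetime cost, 2014 USD
      gross_savings = cases_avoided * cost_per_case
      print(f"avoided direct costs: ${gross_savings / 1e6:.0f}M")                  # ~$607M
      # net savings of $603M then implies an annual fortification cost of roughly:
      print(f"implied fortification cost: ${(gross_savings - 603e6) / 1e6:.0f}M")  # ~$4M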

  9. Coastal Zone Ecosystem Services: from science to values and decision making; a case study.

    PubMed

    Luisetti, T; Turner, R K; Jickells, T; Andrews, J; Elliott, M; Schaafsma, M; Beaumont, N; Malcolm, S; Burdon, D; Adams, C; Watts, W

    2014-09-15

    This research is concerned with the following environmental research questions: socio-ecological system complexity, especially when valuing ecosystem services; ecosystems stock and services flow sustainability and valuation; the incorporation of scale issues when valuing ecosystem services; and the integration of knowledge from diverse disciplines for governance and decision making. In this case study, we focused on ecosystem services that can be jointly supplied but independently valued in economic terms: healthy climate (via carbon sequestration and storage), food (via fisheries production in nursery grounds), and nature recreation (nature watching and enjoyment). We also explored the issue of ecosystem stock and services flow, and we provide recommendations on how to value stock and flows of ecosystem services via accounting and economic values respectively. We considered broadly comparable estuarine systems located on the English North Sea coast: the Blackwater estuary and the Humber estuary. In the past, these two estuaries have undergone major land-claim. Managed realignment is a policy through which previously claimed intertidal habitats are recreated allowing the enhancement of the ecosystem services provided by saltmarshes. In this context, we investigated ecosystem service values, through biophysical estimates and welfare value estimates. Using an optimistic (extended conservation of coastal ecosystems) and a pessimistic (loss of coastal ecosystems because of, for example, European policy reversal) scenario, we find that context dependency, and hence value transfer possibilities, vary among ecosystem services and benefits. As a result, careful consideration in the use and application of value transfer, both in biophysical estimates and welfare value estimates, is advocated to supply reliable information for policy making. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  10. Inferring heat recirculation and albedo for exoplanetary atmospheres: Comparing optical phase curves and secondary eclipse data

    NASA Astrophysics Data System (ADS)

    von Paris, P.; Gratier, P.; Bordé, P.; Selsis, F.

    2016-03-01

    Context. Basic atmospheric properties, such as albedo and heat redistribution between day- and nightsides, have been inferred for a number of planets using observations of secondary eclipses and thermal phase curves. Optical phase curves have not yet been used to constrain these atmospheric properties consistently. Aims: We model previously published phase curves of CoRoT-1b, TrES-2b, and HAT-P-7b, and infer albedos and recirculation efficiencies. These are then compared to previous estimates based on secondary eclipse data. Methods: We use a physically consistent model to construct optical phase curves. This model takes Lambertian reflection, thermal emission, ellipsoidal variations, and Doppler boosting, into account. Results: CoRoT-1b shows a non-negligible scattering albedo (0.11 < AS < 0.3 at 95% confidence) as well as small day-night temperature contrasts, which are indicative of moderate to high re-distribution of energy between dayside and nightside. These values are contrary to previous secondary eclipse and phase curve analyses. In the case of HAT-P-7b, model results suggest a relatively high scattering albedo (AS ≈ 0.3). This confirms previous phase curve analysis; however, it is in slight contradiction to values inferred from secondary eclipse data. For TrES-2b, both approaches yield very similar estimates of albedo and heat recirculation. Discrepancies between recirculation and albedo values as inferred from secondary eclipse and optical phase curve analyses might be interpreted as a hint that optical and IR observations probe different atmospheric layers, hence temperatures.

  11. Prevalence of body dysmorphic disorder on a psychiatric inpatient ward and the value of a screening question.

    PubMed

    Veale, David; Akyüz, Elvan U; Hodsoll, John

    2015-12-15

    The aim of this study was to estimate the prevalence of body dysmorphic disorder (BDD) on an inpatient ward in the UK with a larger sample than previously studied, and to investigate the value of a simple screening question during an assessment interview. Four hundred and thirty-two consecutive admissions to an adult psychiatric ward were screened for BDD over a period of 13 months. Those who screened positive had a structured diagnostic interview for BDD. The prevalence of BDD was estimated to be 5.8% (C.I. 3.6-8.1%). Our screening question had a relatively low specificity (76.6%) for detecting BDD. The strengths of this study were a larger sample size and a narrower confidence interval than previous studies. The study adds to previous observations that BDD is poorly identified in psychiatric inpatients. BDD was identified predominantly in those presenting with depression, substance misuse or an anxiety disorder. The screening question could be improved by excluding those with weight or shape concerns. Missing the diagnosis is likely to lead to inappropriate treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. Valuing preferences over stormwater management outcomes including improved hydrologic function

    NASA Astrophysics Data System (ADS)

    Londoño Cadavid, Catalina; Ando, Amy W.

    2013-07-01

    Stormwater runoff causes environmental problems such as flooding, soil erosion, and water pollution. Conventional stormwater management has focused primarily on flood reduction, while a new generation of decentralized stormwater solutions yields ancillary benefits such as healthier aquatic habitat, improved surface water quality, and increased water table recharge. Previous research has estimated values for flood reduction from stormwater management, but no estimates exist for the willingness to pay (WTP) for some of the other environmental benefits of alternative approaches to stormwater control. This paper uses a choice experiment survey of households in Champaign-Urbana, Illinois, to estimate the values of several attributes of stormwater management outcomes. We analyzed data from 131 surveyed households in randomly selected neighborhoods. We find that people value reduced basement flooding more than reductions in yard or street flooding, but WTP for basement flood reduction in the area only exists if individuals are currently experiencing significant flooding themselves. Citizens value both improved water quality and improved hydrologic function and aquatic habitat from runoff reduction. Thus, widespread investment in low impact development stormwater solutions could have very large total benefits, and stormwater managers should be wary of policies and infrastructure plans that reduce flooding at the expense of water quality and aquatic habitat.

  13. Gravity-darkening exponents in semi-detached binary systems from their photometric observations. II.

    NASA Astrophysics Data System (ADS)

    Djurašević, G.; Rovithis-Livaniou, H.; Rovithis, P.; Georgiades, N.; Erkapić, S.; Pavlović, R.

    2006-01-01

    This second part of our study concerning gravity-darkening presents the results for 8 semi-detached close binary systems. From the light-curve analysis of these systems the exponent of the gravity-darkening (GDE) for the Roche lobe filling components has been empirically derived. The method used for the light-curve analysis is based on Roche geometry and enables simultaneous estimation of the systems' parameters and the gravity-darkening exponents. Our analysis is restricted to the black-body approximation, which can influence the parameter estimation to some degree. The results of our analysis are: 1) For four of the systems, namely TX UMa, β Per, AW Cam and TW Cas, there is very good agreement between empirically estimated and theoretically predicted values for purely convective envelopes. 2) For the AI Dra system, the estimated value of the gravity-darkening exponent is greater, and for UX Her, TW And and XZ Pup lower, than the corresponding theoretical predictions, but for all of these systems the obtained values of the gravity-darkening exponent are quite close to the theoretically expected values. 3) Our analysis generally showed that, with the correction of the previously estimated mass ratios of the components within some of the analysed systems, the theoretical predictions of the gravity-darkening exponents for stars with convective envelopes are highly reliable. The anomalous values of the GDE found in some earlier studies of these systems can be considered a consequence of the inappropriate method used to estimate the GDE. 4) The empirical estimations of GDE given in Paper I and in the present study indicate that in light-curve analysis one can apply the recent theoretical predictions of GDE with high confidence for stars with both convective and radiative envelopes.

  14. Estimation of seismic quality factor: Artificial neural networks and current approaches

    NASA Astrophysics Data System (ADS)

    Yıldırım, Eray; Saatçılar, Ruhi; Ergintav, Semih

    2017-01-01

    The aims of this study are to estimate soil attenuation using alternatives to traditional methods, to compare the results of these methods, and to examine soil properties using the estimated results. The performances of four methods, amplitude decay, spectral ratio, Wiener filter, and artificial neural network (ANN), are examined on field data and on synthetic data with and without noise. High-resolution seismic reflection field data from Yeniköy (Arnavutköy, İstanbul) were used as field data, and 424 estimations of Q values were made for each method (1,696 in total). Statistical tests on synthetic and field data showed that the Q-value estimates of the ANN, Wiener filter, and spectral ratio methods were quite close to one another, whereas the amplitude decay method showed a higher estimation error. According to previous geological and geophysical studies in this area, the soil is water-saturated, quite weak, and consists of clay and sandy units; because of current and past landslides in the study area and its vicinity, researchers have reported heterogeneity in the soil. Under the same physical conditions, Q values calculated from the field data can be expected to lie between 7.9 and 13.6. ANN models with various structures, training algorithms, inputs, and numbers of neurons were investigated. A total of 480 ANN models were generated: 60 models for noise-free synthetic data, 360 models for synthetic data with different noise contents, and 60 models applied to the data collected in the field. The models were tested to determine the most appropriate structure and training algorithm. In the final ANN, the input vectors consisted of the differences in width, energy, and distance of seismic traces, and the output was the Q value. The success rate of the ANN method on both noise-free and noisy synthetic data was higher than that of the other three methods, and statistical tests on Q values estimated from the field data likewise favored the ANN method. The Q value can be estimated practically and quickly by processing traces with the recommended ANN model. Consequently, the ANN method can be used for estimating Q values from seismic data.
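
    For reference, the classical spectral-ratio method that the ANN is benchmarked against reduces to a straight-line fit. A minimal sketch, assuming a frequency-independent Q and two spectra recorded a travel time dt apart (synthetic numbers, not the Yeniköy data):

      import numpy as np

      def q_spectral_ratio(freqs, spec_near, spec_far, dt_travel):
          """Spectral-ratio Q estimate: ln(A_near/A_far) = pi*f*dt/Q + const,
          so Q = pi * dt / slope of the log spectral ratio versus frequency."""
          slope, _ = np.polyfit(freqs, np.log(spec_near / spec_far), 1)
          return np.pi * dt_travel / slope

      # synthetic self-test with a known Q = 10
      f = np.linspace(20.0, 120.0, 50)               # usable band (Hz)
      a0 = np.exp(-(f - 70.0)**2 / 4000.0)           # arbitrary source spectrum
      a1 = a0 * np.exp(-np.pi * f * 0.05 / 10.0)     # attenuated, dt = 0.05 s
      print(q_spectral_ratio(f, a0, a1, 0.05))       # ~10.0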

  15. Temperature estimation from hydroxyl airglow emission in the Venus night side mesosphere

    NASA Astrophysics Data System (ADS)

    Migliorini, A.; Snels, M.; Gérard, J.-C.; Soret, L.; Piccioni, G.; Drossart, P.

    2018-01-01

    The temperature of the night side of Venus at about 95 km has been determined by using spectral features of the hydroxyl airglow emission around 3 μm, recorded from July 2006 to July 2008 by VIRTIS onboard Venus Express. The retrieved temperatures vary from 145.5 to about 198.1 K, with an average value of 176.3 ± 14.3 K, and are in good agreement with previous ground-based and space observations. The variability with respect to latitude and local time has been studied, showing a temperature minimum at equatorial latitudes, while temperature values increase toward mid latitudes with a local maximum at about 35°N. The present work provides an independent contribution to temperature estimation in the transition region between the Venus upper mesosphere and the lower thermosphere, by using the OH emission as a thermometer, following the technique previously applied to the high-resolution O2(a1Δg) airglow emissions observed from the ground.

  16. Estimating tissue-specific discrimination factors and turnover rates of stable isotopes of nitrogen and carbon in the smallnose fanskate Sympterygia bonapartii (Rajidae).

    PubMed

    Galván, D E; Jañez, J; Irigoyen, A J

    2016-08-01

    This study aimed to estimate trophic discrimination factors (TDFs) and metabolic turnover rates of nitrogen and carbon stable isotopes in blood and muscle of the smallnose fanskate Sympterygia bonapartii by feeding six adult individuals, maintained in captivity, with a constant diet for 365 days. TDFs were estimated as the difference between δ13C or δ15N values of the food and the tissues of S. bonapartii after they had reached equilibrium with their diet. The duration of the experiment was enough to reach the equilibrium condition in blood for both elements (estimated time to reach 95% of turnover in blood: 150 days for carbon and 290 days for nitrogen), whilst turnover rates could not be estimated for muscle because of variation among samples. Estimates of Δ13C and Δ15N values in blood and muscle using all individuals were Δ13C(blood) = 1.7‰, Δ13C(muscle) = 1.3‰, Δ15N(blood) = 2.5‰ and Δ15N(muscle) = 1.5‰, but there was evidence of differences of c. 0.4‰ in the Δ13C values between sexes. The present values for TDFs and turnover rates constitute the first evidence for dietary switching in batoids based on long-term controlled feeding experiments. Overall, the results showed that S. bonapartii has relatively low turnover rates, and isotopic measurements would not track seasonal movements adequately. The estimated Δ13C values in S. bonapartii blood and muscle were similar to previous estimations for elasmobranchs and to generally accepted values in bony fishes (Δ13C = 1.5‰). For Δ15N, the results were similar to published reports for blood but smaller than reports for muscle and notably smaller than the typical values used to estimate trophic position (Δ15N c. 3.4‰). Thus, trophic position estimations for elasmobranchs based on typical Δ15N values could lead to underestimates of actual trophic positions. Finally, the evidence of differences in TDFs between sexes reveals a need for more targeted research. © 2016 The Fisheries Society of the British Isles.

  17. Lipophilicity assessment of basic drugs (log P(o/w) determination) by a chromatographic method.

    PubMed

    Pallicer, Juan M; Sales, Joaquim; Rosés, Martí; Ràfols, Clara; Bosch, Elisabeth

    2011-09-16

    A previously reported chromatographic method to determine the 1-octanol/water partition coefficient (log P(o/w)) of organic compounds is used to estimate the hydrophobicity of bases, mainly commercial drugs of diverse chemical nature with pK(a) values higher than 9. For that reason, mobile phases buffered at high pH to avoid ionization of the solutes, together with three different columns with appropriate alkaline-resistant stationary phases (Phenomenex Gemini NX, Waters XTerra RP-18 and Waters XTerra MS C(18)), have been used. Non-ionizable substances studied in previous works were also included in the set of compounds to evaluate the consistency of the method. The results showed that all the columns provide good estimations of log P(o/w) for most of the compounds included in this study. The Gemini NX column was selected to calculate log P(o/w) values of the set of studied drugs, and very good correlations between the determined log P(o/w) values and those considered as reference were obtained, proving the ability of the procedure to assess the lipophilicity of bioactive compounds with very different structures and functionalities. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Fine-tuning satellite-based rainfall estimates

    NASA Astrophysics Data System (ADS)

    Harsa, Hastuadi; Buono, Agus; Hidayat, Rahmat; Achyar, Jaumil; Noviati, Sri; Kurniawan, Roni; Praja, Alfan S.

    2018-05-01

    Rainfall datasets are available from various sources, including satellite estimates and ground observation. Ground observation locations are sparsely scattered, so the use of satellite estimates is advantageous: satellites can provide data in places where ground observations are absent. In general, however, satellite estimates contain bias, since they are products of algorithms that transform sensor responses into rainfall values; another source of error is the limited number of ground observations used by the algorithms as the reference in determining rainfall values. This paper describes the application of a bias correction method that modifies the satellite-based dataset by adding a number of ground observation locations that had not previously been used by the algorithm. The bias correction was performed by applying a Quantile Mapping procedure between ground observation data and satellite estimates. Since Quantile Mapping requires the mean and standard deviation of both the reference data and the data being corrected, an Inverse Distance Weighting scheme was first applied to the mean and standard deviation of the observation data to provide a spatial composition of these originally scattered statistics; it was therefore possible to provide reference statistics at the same locations as the satellite estimates. The results show that the new dataset represents the rainfall values recorded by the ground observations statistically better than the previous dataset.
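
    A minimal sketch of the described pipeline, assuming the parametric (mean/standard-deviation based) form of quantile mapping implied by the abstract; all names are illustrative, not the authors' code:

      import numpy as np

      def idw(xy_stations, stat_values, xy_target, power=2.0):
          """Inverse-distance-weighted interpolation of a station statistic
          (e.g. gauge rainfall mean or std) to a satellite grid point."""
          d = np.linalg.norm(xy_stations - xy_target, axis=1)
          if np.any(d == 0):
              return stat_values[np.argmin(d)]
          w = d**-power
          return np.sum(w * stat_values) / np.sum(w)

      def quantile_map(sat_value, sat_mean, sat_std, obs_mean, obs_std):
          """Gaussian quantile mapping: align the satellite distribution's
          mean/std with the IDW-interpolated gauge mean/std."""
          return obs_mean + (sat_value - sat_mean) * obs_std / sat_std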

  19. Estimation of the Contribution of CYP2C8 and CYP3A4 in Repaglinide Metabolism by Human Liver Microsomes Under Various Buffer Conditions.

    PubMed

    Kudo, Toshiyuki; Goda, Hitomi; Yokosuka, Yuki; Tanaka, Ryo; Komatsu, Seina; Ito, Kiyomi

    2017-09-01

    We have previously reported that the microsomal activities of CYP2C8 and CYP3A4 largely depend on the buffer condition used in in vitro metabolic studies, with different patterns observed between the 2 isozymes. In the present study, therefore, the possibility of buffer condition dependence of the fraction metabolized by CYP2C8 (fm2C8) for repaglinide, a dual substrate of CYP2C8 and CYP3A4, was estimated using human liver microsomes under various buffer conditions. Montelukast and ketoconazole showed a potent and concentration-dependent inhibition of CYP2C8-mediated paclitaxel 6α-hydroxylation and CYP3A4-mediated triazolam α-hydroxylation, respectively, without dependence on the buffer condition. Repaglinide depletion was inhibited by both inhibitors, but the degree of inhibition depended on buffer conditions. Based on these results, the contribution of CYP2C8 in repaglinide metabolism was estimated to be larger than that of CYP3A4 under each buffer condition, and the fm2C8 value of 0.760, estimated in 50 mM phosphate buffer, was the closest to the value (0.801) estimated in our previous modeling analysis based on its concentration increase in a clinical drug interaction study. Researchers should be aware of the possibility of buffer condition affecting the estimated contribution of enzyme(s) in drug metabolism processes involving multiple enzymes. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  20. Estimating soil matric potential in Owens Valley, California

    USGS Publications Warehouse

    Sorenson, Stephen K.; Miller, Reuben F.; Welch, Michael R.; Groeneveld, David P.; Branson, Farrel A.

    1989-01-01

    Much of the floor of Owens Valley, California, is covered with alkaline scrub and alkaline meadow plant communities, whose existence depends partly on precipitation and partly on water infiltrated into the rooting zone from the shallow water table. The extent to which these plant communities are capable of adapting to and surviving fluctuations in the water table depends on physiological adaptations of the plants and on the water-content/matric-potential characteristics of the soils. Two methods were used to estimate soil matric potential at test sites in Owens Valley. The first, the filter-paper method, uses the water content of filter papers equilibrated to the water content of soil samples taken with a hand auger. The previously published calibration relations used to estimate soil matric potential from the water content of the filter papers were modified on the basis of current laboratory data. The other method of estimating soil matric potential was a modeling approach based on data from this and previous investigations. These data indicate that the base-10 logarithm of soil matric potential is a linear function of gravimetric soil water content for a particular soil; the slope and intercept of this function vary with the texture and saturation capacity of the soil. Estimates of soil water characteristic curves were made at two sites by averaging the gravimetric soil water content and soil matric potential values from multiple samples at 0.1-m depth intervals, derived using the hand auger and filter-paper method, and entering these values in the soil water model. The characteristic curves were then used to estimate soil matric potential from estimates of volumetric soil water content derived from neutron-probe readings. Evaluation of the modeling technique at two study sites indicated that estimates of soil matric potential within 0.5 pF units of the value derived by the filter-paper method could be obtained 90 to 95 percent of the time in soils where water content was less than field capacity. The greatest errors occurred at depths where there was a distinct transition between soils of different textures.
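
    The soil model described above is a one-variable linear fit in log space, so it can be sketched in a few lines (hypothetical helper names; the calibration pairs would come from the filter-paper measurements):

      import numpy as np

      def fit_pf_model(water_content, pf):
          """Fit pF = log10(matric potential) = a + b * (gravimetric water
          content) for one soil; slope and intercept vary with texture."""
          b, a = np.polyfit(water_content, pf, 1)
          return a, b

      def estimate_pf(water_content, a, b):
          return a + b * water_content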

  1. `Been There Done That': Disentangling Option Value Effects from User Heterogeneity When Valuing Natural Resources with a Use Component

    NASA Astrophysics Data System (ADS)

    Lyssenko, Nikita; Martínez-Espiñeira, Roberto

    2012-11-01

    Endogeneity bias arises in contingent valuation studies when the error term in the willingness to pay (WTP) equation is correlated with explanatory variables because observable and unobservable characteristics of the respondents affect both their WTP and the value of those variables. We correct for the endogeneity of variables that capture previous experience with the resource valued, humpback whales, and with the geographic area of study. We consider several endogenous behavioral variables. Therefore, we apply a multivariate Probit approach to jointly model them with WTP. In this case, correcting for endogeneity increases econometric efficiency and substantially corrects the bias affecting the estimated coefficients of the experience variables, by isolating the decreasing effect on option value caused by having already experienced the resource. Stark differences are unveiled between the marginal effects on WTP of previous experience of the resource in an alternative location versus experience in the location studied, Newfoundland and Labrador (Canada).

  2. 'Been there done that': disentangling option value effects from user heterogeneity when valuing natural resources with a use component.

    PubMed

    Lyssenko, Nikita; Martínez-Espiñeira, Roberto

    2012-11-01

    Endogeneity bias arises in contingent valuation studies when the error term in the willingness to pay (WTP) equation is correlated with explanatory variables because observable and unobservable characteristics of the respondents affect both their WTP and the value of those variables. We correct for the endogeneity of variables that capture previous experience with the resource valued, humpback whales, and with the geographic area of study. We consider several endogenous behavioral variables. Therefore, we apply a multivariate Probit approach to jointly model them with WTP. In this case, correcting for endogeneity increases econometric efficiency and substantially corrects the bias affecting the estimated coefficients of the experience variables, by isolating the decreasing effect on option value caused by having already experienced the resource. Stark differences are unveiled between the marginal effects on WTP of previous experience of the resource in an alternative location versus experience in the location studied, Newfoundland and Labrador (Canada).

  3. The oil and gas resource potential of the Arctic National Wildlife Refuge 1002 area, Alaska

    USGS Publications Warehouse

    ,

    1999-01-01

    In anticipation of the need for scientific support for policy decisions and in light of the decade-old perspective of a previous assessment, the USGS has completed a reassessment of the petroleum potential of the ANWR 1002 area. This was a comprehensive study by a team of USGS scientists in collaboration on technical issues (but not the assessment) with colleagues in other agencies and universities. The study incorporated all available public data and included new field and analytic work as well as the reevaluation of all previous work. Using a methodology similar to that used in previous USGS assessments in the ANWR and the NPRA, this study estimates that the total quantity of technically recoverable oil in the 1002 area is 7.7 BBO (mean value), which is distributed among 10 plays. Using a conservative estimate of 512 million barrels as a minimum commercially developable field size, about 2.6 BBO of oil distributed in about three fields is expected to be economically recoverable in the undeformed part of the 1002 area. Using a similar estimated minimum field size, which may not be conservative considering the increased distance from infrastructure, the deformed area would be expected to have about 600 MMBO in one field. The amounts of in-place oil estimated for the 1002 area are larger than previous USGS estimates. The increase results in large part from improved resolution of reprocessed seismic data and geologic analogs provided by recent nearby oil discoveries.

  4. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  5. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  6. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  7. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  8. Recovering Parameters of Johnson's SB Distribution

    Treesearch

    Bernard R. Parresol

    2003-01-01

    A new parameter recovery model for Johnson's SB distribution is developed. This latest alternative approach permits recovery of the range and both shape parameters. Previous models recovered only the two shape parameters. Also, a simple procedure for estimating the distribution minimum from sample values is presented. The new methodology...

  9. The Boundaries of Genocide: Quantifying the Uncertainty of the Death Toll During the Pol Pot Regime (1975-1979)

    PubMed Central

    Heuveline, Patrick

    2015-01-01

    Estimates of excess deaths under Pol Pot's rule of Cambodia (1975-79) range from under one million to over three million. The more plausible among those, methodologically, still vary from one to two million deaths, but this range of independent point estimates has no particular statistical meaning. Stochastically reconstructing population dynamics in Cambodia from extant historical and demographic data yields interpretable distributions of the death toll and other demographic indicators. The resulting 95-percent simulation interval (1.2 to 2.8 million excess deaths) demonstrates substantial uncertainty with regard to the exact scale of mortality, yet still excludes nearly half of the previous death-toll estimates. The 1.5 to 2.25 million interval contains 69 percent of the simulations for the actual number of excess deaths, more than the wider (one to two million) range of previous plausible estimates. The median value of 1.9 million excess deaths represents 21 percent of the population at risk. PMID:26218856

  10. Estimating a WTP-based value of a QALY: the 'chained' approach.

    PubMed

    Robinson, Angela; Gyrd-Hansen, Dorte; Bacon, Philomena; Baker, Rachel; Pennington, Mark; Donaldson, Cam

    2013-09-01

    A major issue in health economic evaluation is that of the value to place on a quality adjusted life year (QALY), commonly used as a measure of health care effectiveness across Europe. This critical policy issue is reflected in the growing interest across Europe in the development of more sound methods to elicit such a value. EuroVaQ was a collaboration of researchers from 9 European countries, the main aim being to develop more robust methods to determine the monetary value of a QALY based on surveys of the general public. The 'chained' approach of deriving a societal willingness-to-pay (WTP) based monetary value of a QALY used the following basic procedure. First, utility values were elicited for health states using the standard gamble (SG) and time trade off (TTO) methods. Second, a monetary value to avoid some risk/duration of that health state was elicited and the implied WTP per QALY estimated. Within EuroVaQ we developed an adaptation to the chained approach that attempts to overcome problems documented previously (in particular the tendency to arrive at exceedingly high WTP per QALY values). The survey was administered via Internet panels in each participating country and almost 22,000 responses were achieved. Estimates of the value of a QALY varied across questions and were, if anything, on the low side, with the (trimmed) 'all country' mean WTP per QALY ranging from $18,247 to $34,097. Untrimmed means were considerably higher and medians considerably lower in each case. We conclude that the adaptation to the chained approach described here is a potentially useful technique for estimating WTP per QALY. A number of methodological challenges do still exist, however, and there is scope for further refinement. Copyright © 2013 Elsevier Ltd. All rights reserved.
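
    The chaining step itself is simple arithmetic once the two elicited quantities are in hand: the SG/TTO utility loss fixes the QALYs at stake, and the stated WTP is divided by them. A hedged sketch with illustrative numbers (not EuroVaQ data):

      def wtp_per_qaly(wtp, utility_loss, duration_years, probability=1.0):
          """'Chained' estimate: WTP to avoid a health state divided by the
          QALY loss implied by its SG/TTO utility and its duration/risk."""
          qalys_at_stake = utility_loss * duration_years * probability
          return wtp / qalys_at_stake

      # e.g. paying 2,000 to avoid a year in a state valued at 0.9 (loss 0.1)
      print(wtp_per_qaly(2000.0, 0.1, 1.0))   # 20,000 per QALY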

  11. On piecewise interpolation techniques for estimating solar radiation missing values in Kedah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu

    2014-12-04

    This paper discusses the use of a piecewise interpolation method based on cubic Ball and Bézier curve representations to estimate missing values of solar radiation in Kedah. An hourly solar radiation dataset was collected at Alor Setar Meteorology Station and obtained from the Malaysian Meteorology Department. The piecewise cubic Ball and Bézier functions that interpolate the data points are defined on each hourly interval of solar radiation measurement and are obtained by prescribing first-order derivatives at the start and end of each interval. We compare the performance of our proposed method with existing methods using Root Mean Squared Error (RMSE) and Coefficient of Determination (CoD) based on simulated missing-value datasets. The results show that our method outperformed the previous methods.
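
    A cubic Bézier segment with prescribed endpoint values and first derivatives is equivalent to a Hermite segment, with inner control ordinates b1 = y0 + d0*h/3 and b2 = y1 - d1*h/3. A minimal sketch of one such interval (illustrative numbers, not the Alor Setar data):

      def bezier_segment(t, t0, t1, y0, y1, d0, d1):
          """Cubic Bezier on [t0, t1] interpolating (t0, y0) and (t1, y1)
          with prescribed first derivatives d0, d1 at the interval ends."""
          h = t1 - t0
          b0, b1, b2, b3 = y0, y0 + d0 * h / 3.0, y1 - d1 * h / 3.0, y1
          u = (t - t0) / h
          return ((1 - u)**3 * b0 + 3*u*(1 - u)**2 * b1
                  + 3*u**2*(1 - u) * b2 + u**3 * b3)

      # fill a missing half-hour value inside one hourly interval (W/m^2)
      print(bezier_segment(10.5, 10.0, 11.0, y0=420.0, y1=510.0, d0=80.0, d1=100.0))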

  12. Asteroid mass estimation with Markov-chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Siltala, Lauri; Granvik, Mikael

    2017-10-01

    Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem at minimum where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid by fitting their trajectories to their observed positions. The fitting has typically been carried out with linearized methods such as the least-squares method. These methods need to make certain assumptions regarding the shape of the probability distributions of the model parameters. This is problematic as these assumptions have not been validated. We have developed a new Markov-chain Monte Carlo method for mass estimation which does not require an assumption regarding the shape of the parameter distribution. Recently, we have implemented several upgrades to our MCMC method including improved schemes for handling observational errors and outlier data alongside the option to consider multiple perturbers and/or test asteroids simultaneously. These upgrades promise significantly improved results: based on two separate results for (19) Fortuna with different test asteroids we previously hypothesized that simultaneous use of both test asteroids would lead to an improved result similar to the average literature value for (19) Fortuna with substantially reduced uncertainties. Our upgraded algorithm indeed finds a result essentially equal to the literature value for this asteroid, confirming our previous hypothesis. Here we show these new results for (19) Fortuna and other example cases, and compare our results to previous estimates. Finally, we discuss our plans to improve our algorithm further, particularly in connection with Gaia.
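
    The core of any such sampler is the accept/reject step; the real problem is at least 13-dimensional (mass plus two sets of orbital elements) with a likelihood built from observed-minus-computed astrometric residuals. A generic random-walk Metropolis sketch, not the authors' implementation:

      import numpy as np

      def metropolis(log_post, x0, prop_std, n_steps, seed=0):
          """Random-walk Metropolis sampler over parameters x (e.g. perturber
          mass and orbital elements); log_post evaluates the log-posterior,
          typically via an ephemeris fit to the observed positions."""
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, dtype=float)
          lp = log_post(x)
          chain = []
          for _ in range(n_steps):
              cand = x + rng.normal(0.0, prop_std, size=x.shape)
              lp_cand = log_post(cand)
              if np.log(rng.uniform()) < lp_cand - lp:   # accept w.p. min(1, ratio)
                  x, lp = cand, lp_cand
              chain.append(x.copy())
          return np.asarray(chain)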

  13. Emission of atmospherically significant halocarbons by naturally occurring and farmed tropical macroalgae

    NASA Astrophysics Data System (ADS)

    Leedham, E. C.; Hughes, C.; Keng, F. S. L.; Phang, S.-M.; Malin, G.; Sturges, W. T.

    2013-06-01

    Current estimates of global halocarbon emissions highlight the tropical coastal environment as an important source of very short-lived (VSL) biogenic halocarbons to the troposphere and stratosphere, due to a combination of assumed high primary productivity in tropical coastal waters and the prevalence of deep convective transport, potentially capable of rapidly lifting surface emissions to the upper troposphere/lower stratosphere. However, despite this perceived importance, direct measurements of tropical coastal biogenic halocarbon emissions, notably from macroalgae (seaweeds), have not been made. In light of this, we provide the first dedicated study of halocarbon production by a range of 15 common tropical macroalgal species and compare these results to those from previous studies of polar and temperate macroalgae. Variation between species was substantial; CHBr3 production rates, measured at the end of a 24 h incubation, varied from 1.4 to 1129 pmol g FW-1 h-1 (FW = fresh weight of sample). We used our laboratory-determined emission rates to estimate emissions of CHBr3 and CH2Br2 (the two dominant VSL precursors of stratospheric bromine) from the coastlines of Malaysia and elsewhere in South East Asia (SEA). We compare these values to previous top-down model estimates of emissions from these regions and, using several emission scenarios, calculate an annual CHBr3 emission of 40 (range 6-224) Mmol Br yr-1, a value that is lower than previous estimates. The contribution of tropical aquaculture to current emission budgets is also considered. Whilst the current aquaculture contribution to halocarbon emissions in this region is small, the potential exists for substantial increases in aquaculture to make a significant contribution to regional halocarbon budgets.

  14. Retrospective Assessment of Cost Savings From Prevention: Folic Acid Fortification and Spina Bifida in the U.S.

    PubMed

    Grosse, Scott D; Berry, Robert J; Mick Tilford, J; Kucik, James E; Waitzman, Norman J

    2016-05-01

    Although fortification of food with folic acid has been calculated to be cost saving in the U.S., updated estimates are needed. This analysis calculates new estimates from the societal perspective of net cost savings per year associated with mandatory folic acid fortification of enriched cereal grain products in the U.S. that was implemented during 1997-1998. Estimates of annual numbers of live-born spina bifida cases in 1995-1996 relative to 1999-2011 based on birth defects surveillance data were combined during 2015 with published estimates of the present value of lifetime direct costs updated in 2014 U.S. dollars for a live-born infant with spina bifida to estimate avoided direct costs and net cost savings. The fortification mandate is estimated to have reduced the annual number of U.S. live-born spina bifida cases by 767, with a lower-bound estimate of 614. The present value of mean direct lifetime cost per infant with spina bifida is estimated to be $791,900, or $577,000 excluding caregiving costs. Using a best estimate of numbers of avoided live-born spina bifida cases, fortification is estimated to reduce the present value of total direct costs for each year's birth cohort by $603 million more than the cost of fortification. A lower-bound estimate of cost savings using conservative assumptions, including the upper-bound estimate of fortification cost, is $299 million. The estimates of cost savings are larger than previously reported, even using conservative assumptions. The analysis can also inform assessments of folic acid fortification in other countries. Published by Elsevier Inc.

  15. Inverse modeling of Asian (222)Rn flux using surface air (222)Rn concentration.

    PubMed

    Hirao, Shigekazu; Yamazawa, Hiromi; Moriizumi, Jun

    2010-11-01

    When used with an atmospheric transport model, the (222)Rn flux distribution estimated in our previous study using soil transport theory caused underestimation of atmospheric (222)Rn concentrations as compared with measurements in East Asia. In this study, we applied a Bayesian synthesis inverse method to produce revised estimates of the annual (222)Rn flux density in Asia by using atmospheric (222)Rn concentrations measured at seven sites in East Asia. The Bayesian synthesis inverse method requires a prior estimate of the flux distribution and its uncertainties. The atmospheric transport model MM5/HIRAT and our previous estimate of the (222)Rn flux distribution as the prior value were used to generate new flux estimates for the eastern half of the Eurasian continent dividing into 10 regions. The (222)Rn flux densities estimated using the Bayesian inversion technique were generally higher than the prior flux densities. The area-weighted average (222)Rn flux density for Asia was estimated to be 33.0 mBq m(-2) s(-1), which is substantially higher than the prior value (16.7 mBq m(-2) s(-1)). The estimated (222)Rn flux densities decrease with increasing latitude as follows: Southeast Asia (36.7 mBq m(-2) s(-1)); East Asia (28.6 mBq m(-2) s(-1)) including China, Korean Peninsula and Japan; and Siberia (14.1 mBq m(-2) s(-1)). Increase of the newly estimated fluxes in Southeast Asia, China, Japan, and the southern part of Eastern Siberia from the prior ones contributed most significantly to improved agreement of the model-calculated concentrations with the atmospheric measurements. The sensitivity analysis of prior flux errors and effects of locally exhaled (222)Rn showed that the estimated fluxes in Northern and Central China, Korea, Japan, and the southern part of Eastern Siberia were robust, but that in Central Asia had a large uncertainty.
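
    A Bayesian synthesis inversion of this kind has a closed-form solution when the transport model is linearized: the posterior fluxes minimize a cost weighted by the prior and observation error covariances. A minimal sketch of that standard update (generic linear algebra, not the authors' MM5/HIRAT setup):

      import numpy as np

      def bayesian_synthesis(y, G, x_prior, P, R):
          """Posterior regional fluxes for y = G x + e, with prior x_prior
          (covariance P) and observation error covariance R."""
          Pi, Ri = np.linalg.inv(P), np.linalg.inv(R)
          A = G.T @ Ri @ G + Pi                      # posterior precision
          x_hat = np.linalg.solve(A, G.T @ Ri @ y + Pi @ x_prior)
          return x_hat, np.linalg.inv(A)             # mean and covariance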

  16. Estimating hydraulic parameters of a heterogeneous aquitard using long-term multi-extensometer and groundwater level data

    NASA Astrophysics Data System (ADS)

    Zhuang, Chao; Zhou, Zhifang; Illman, Walter A.; Guo, Qiaona; Wang, Jinguo

    2017-09-01

    The classical aquitard-drainage model COMPAC has been modified to simulate the compaction process of a heterogeneous aquitard consisting of multiple sub-units (Multi-COMPAC). By coupling Multi-COMPAC with the parameter estimation code PEST++, the vertical hydraulic conductivity (Kv) and the elastic (Sske) and inelastic (Sskp) skeletal specific-storage values of each sub-unit can be estimated using observed long-term multi-extensometer and groundwater level data. The approach was first tested through a synthetic case with known parameters. Results of the synthetic case revealed that it was possible to accurately estimate the three parameters for each sub-unit. Next, the methodology was applied to a field site located in Changzhou city, China. Based on the detailed stratigraphic information and extensometer data, the aquitard of interest was subdivided into three sub-units. The parameters Kv, Sske and Sskp of each sub-unit were estimated simultaneously and then compared with laboratory results and with bulk values and geologic data from previous studies, demonstrating the reliability of the parameter estimates. Estimated Sskp values were on the order of 10^-4 m^-1, while Kv ranged over 10^-10 to 10^-8 m/s, suggesting moderately high heterogeneity of the aquitard. However, the elastic deformation of the third sub-unit, consisting of soft plastic silty clay, is masked by delayed drainage, and the inverse procedure leads to large uncertainty in the Sske estimate for this sub-unit.

  17. Revised Thickness of the Lunar Crust from GRAIL Data: Implications for Lunar Bulk Composition

    NASA Technical Reports Server (NTRS)

    Taylor, G. Jeffrey; Wieczorek, Mark A.; Neumann, Gregory A.; Nimmo, Francis; Kiefer, Walter S.; Melosh, H. Jay; Phillips, Roger J.; Solomon, Sean C.; Andrews-Hanna, Jeffrey C.; Asmar, Sami W.; hide

    2013-01-01

    High-resolution gravity data from GRAIL have yielded new estimates of the bulk density and thickness of the lunar crust. The bulk density of the highlands crust is 2550 kg m-3. From a comparison with crustal composition measured remotely, this density implies a mean porosity of 12%. With this bulk density and constraints from the Apollo seismic experiment, the average global crustal thickness is found to lie between 34 and 43 km, a value 10 to 20 km less than several previous estimates. Crustal thickness is a central parameter in estimating bulk lunar composition. Estimates of the concentrations of refractory elements in the Moon from heat flow, remote sensing and sample data, and geophysical data fall into two categories: those with refractory element abundances enriched by 50% or more relative to Earth, and those with abundances the same as Earth. Settling this issue has implications for processes operating during lunar formation. The crustal thickness resulting from analysis of GRAIL data is less than several previous estimates. We show here that a refractory-enriched Moon is not required.
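
    The quoted porosity follows directly from comparing the GRAIL bulk density with a grain density inferred from composition; a grain density near 2900 kg/m^3 (an assumption here, chosen to reproduce the stated 12%) gives:

      rho_bulk, rho_grain = 2550.0, 2900.0     # kg/m^3; grain density assumed
      porosity = 1.0 - rho_bulk / rho_grain
      print(f"implied mean crustal porosity: {porosity:.0%}")   # ~12%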

  18. Outgassed water on Mars - Constraints from melt inclusions in SNC meteorites

    NASA Technical Reports Server (NTRS)

    Mcsween, Harry Y., Jr.; Harvey, Ralph P.

    1993-01-01

    The SNC (shergottite-nakhlite-chassignite) meteorites, thought to be igneous rocks from Mars, contain melt inclusions trapped at depth in early-formed crystals. Determination of the pre-eruptive water contents of SNC parental magmas from calculations of the solidification histories of these amphibole-bearing inclusions indicates that Martian magmas commonly contained 1.4 percent water by weight. When combined with an estimate of the volume of igneous materials on Mars, this information suggests that the total amount of water outgassed since 3.9 billion years ago corresponds to global depths on the order of 200 meters. This value is significantly higher than previous geochemical estimates but lower than estimates based on erosion by floods. These results imply a wetter Mars interior than has been previously thought and support suggestions of significant outgassing before formation of a stable crust or heterogeneous accretion of a veneer of cometary matter.

  19. Data Anonymization that Leads to the Most Accurate Estimates of Statistical Characteristics: Fuzzy-Motivated Approach

    PubMed Central

    Xiang, G.; Ferson, S.; Ginzburg, L.; Longpré, L.; Mayorga, E.; Kosheleva, O.

    2013-01-01

    To preserve privacy, the original data points (with exact values) are replaced by boxes containing each (inaccessible) data point. This privacy-motivated uncertainty leads to uncertainty in the statistical characteristics computed based on this data. In a previous paper, we described how to minimize this uncertainty under the assumption that we use the same standard statistical estimates for the desired characteristics. In this paper, we show that we can further decrease the resulting uncertainty if we allow fuzzy-motivated weighted estimates, and we explain how to optimally select the corresponding weights. PMID:25187183

  20. Dose-volume histogram prediction using density estimation.

    PubMed

    Skarpman Munter, Johanna; Sjölund, Jens

    2015-09-07

    Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction, we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
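
    A minimal sketch of the train/predict pipeline as described, using a Gaussian kernel density estimate for the joint distribution and the signed boundary distance as the single feature (all names illustrative; this is not the authors' code):

      import numpy as np
      from scipy.stats import gaussian_kde

      def predict_dvh(train_feature, train_dose, new_features, dose_grid):
          """Learn p(feature, dose) from previous plans, form p(dose|feature),
          average over the new patient's voxel features, and integrate the
          resulting dose density into a cumulative DVH."""
          joint = gaussian_kde(np.vstack([train_feature, train_dose]))
          marginal = gaussian_kde(train_feature)
          p_dose = np.zeros_like(dose_grid)
          for x in new_features:                        # marginalization step
              pts = np.vstack([np.full_like(dose_grid, x), dose_grid])
              p_dose += joint(pts) / marginal([x])[0]
          dd = dose_grid[1] - dose_grid[0]
          p_dose /= p_dose.sum() * dd                   # normalize the density
          cdf = np.cumsum(p_dose) * dd
          return 1.0 - cdf                              # volume fraction >= dose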

  1. Generic Sensor Modeling Using Pulse Method

    NASA Technical Reports Server (NTRS)

    Helder, Dennis L.; Choi, Taeyoung

    2005-01-01

    Recent development of high spatial resolution satellites such as IKONOS, Quickbird and Orbview enables observation of the Earth's surface with sub-meter resolution. Compared to the 30-meter resolution of Landsat 5 TM, the amount of information in the output image is dramatically increased. In this era of high spatial resolution, the estimation of the spatial quality of images is gaining attention. Historically, the Modulation Transfer Function (MTF) concept has been used to estimate an imaging system's spatial quality. Sometimes classified by target shapes, various methods were developed in laboratory environments utilizing sinusoidal inputs, periodic bar patterns and narrow slits. On-orbit sensor MTF estimation was performed on 30-meter GSD Landsat 4 Thematic Mapper (TM) data using a bridge target as a pulse input. Because of a high-resolution sensor's small Ground Sampling Distance (GSD), reasonably sized man-made edge, pulse, and impulse targets can be deployed on a uniform grassy area with accurate control of ground targets using tarps and convex mirrors. All the previous work cited calculated MTF without testing the MTF estimator's performance. In a previous report, a numerical generic sensor model was developed to simulate and improve the performance of on-orbit MTF estimation techniques. Results from that sensor modeling report that have been incorporated into standard MTF estimation work include Fermi edge detection and the newly developed 4th-order modified Savitzky-Golay (MSG) interpolation technique. Noise sensitivity was studied by performing simulations on known noise sources and a sensor model. Extensive investigation was done to characterize multi-resolution ground noise. Finally, angle simulation was tested by using synthetic pulse targets with angles from 2 to 15 degrees, several brightness levels, and different noise levels from both ground targets and the imaging system. As a continuation of that research using the developed sensor model, this report is dedicated to characterizing MTF estimation via the pulse input method, using Fermi edge detection and the 4th-order MSG interpolation method. The relationship between pulse width and the MTF value at Nyquist was studied, including error detection and correction schemes. Pulse target angle sensitivity was studied by using synthetic targets angled from 2 to 12 degrees. From the ground and system noise simulations, a minimum SNR value is suggested for a stable MTF value at Nyquist for the pulse method. A target width error detection and adjustment technique based on a smooth transition of the MTF profile is presented, which is specifically applicable only to the pulse method with 3-pixel-wide targets.
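
    A toy illustration of the pulse-method principle (not the report's Fermi edge detection or MSG interpolation pipeline): divide the Fourier magnitude of the measured pulse response by that of the ideal discrete rectangular input, skipping the input's spectral nulls. The 3-pixel pulse width and the Gaussian blur below are assumptions.

```python
import numpy as np

def pulse_mtf(profile, width_px):
    """Pulse-method sketch: ratio of the Fourier magnitude of the measured
    pulse response to that of an ideal discrete rectangular pulse."""
    n = len(profile)
    f = np.fft.rfftfreq(n)                       # cycles per pixel, 0..0.5
    measured = np.abs(np.fft.rfft(profile))
    den = np.sin(np.pi * f)
    safe_den = np.where(den > 1e-12, den, 1.0)
    ideal = np.where(den > 1e-12,
                     np.abs(np.sin(np.pi * f * width_px)) / safe_den,
                     float(width_px))            # Dirichlet kernel magnitude
    valid = ideal > 0.05 * width_px              # avoid the input's nulls
    mtf = np.where(valid, measured / ideal, np.nan)
    return f, mtf / mtf[0]

# Synthetic 3-pixel-wide pulse target blurred by an assumed Gaussian system PSF.
x = np.arange(-32, 32)
pulse = (np.abs(x) <= 1).astype(float)           # width = 3 pixels
psf = np.exp(-x**2 / (2 * 0.5**2))
psf /= psf.sum()
blurred = np.convolve(pulse, psf, mode="same")

f, mtf = pulse_mtf(blurred, width_px=3)
print(f"MTF at Nyquist ~ {mtf[-1]:.2f}")         # f[-1] = 0.5 cycles/pixel
```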

  2. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which was not observed in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459

  3. Phylogenetic relationships of the dwarf boas and a comparison of Bayesian and bootstrap measures of phylogenetic support.

    PubMed

    Wilcox, Thomas P; Zwickl, Derrick J; Heath, Tracy A; Hillis, David M

    2002-11-01

    Four New World genera of dwarf boas (Exiliboa, Trachyboa, Tropidophis, and Ungaliophis) have been placed by many systematists in a single group (traditionally called Tropidophiidae). However, the monophyly of this group has been questioned in several studies. Moreover, the overall relationships among basal snake lineages, including the placement of the dwarf boas, are poorly understood. We obtained mtDNA sequence data for 12S, 16S, and intervening tRNA-val genes from 23 species of snakes representing most major snake lineages, including all four genera of New World dwarf boas. We then examined the phylogenetic position of these species by estimating the phylogeny of the basal snakes. Our phylogenetic analysis suggests that New World dwarf boas are not monophyletic. Instead, we find Exiliboa and Ungaliophis to be most closely related to sand boas (Erycinae), boas (Boinae), and advanced snakes (Caenophidea), whereas Tropidophis and Trachyboa form an independent clade that separated relatively early in snake radiation. Our estimate of snake phylogeny differs significantly in other ways from some previous estimates of snake phylogeny. For instance, pythons do not cluster with boas and sand boas, but instead show a strong relationship with Loxocemus and Xenopeltis. Additionally, uropeltids cluster strongly with Cylindrophis, and together are embedded in what has previously been considered the macrostomatan radiation. These relationships are supported by both bootstrapping (parametric and nonparametric approaches) and Bayesian analysis, although Bayesian support values are consistently higher than those obtained from nonparametric bootstrapping. Simulations show that Bayesian support values represent much better estimates of phylogenetic accuracy than do nonparametric bootstrap support values, at least under the conditions of our study. Copyright 2002 Elsevier Science (USA)

  4. Human neutrophil kinetics: modeling of stable isotope labeling data supports short blood neutrophil half-lives.

    PubMed

    Lahoz-Beneytez, Julio; Elemans, Marjet; Zhang, Yan; Ahmed, Raya; Salam, Arafa; Block, Michael; Niederalt, Christoph; Asquith, Becca; Macallan, Derek

    2016-06-30

    Human neutrophils have traditionally been thought to have a short half-life in blood; estimates vary from 4 to 18 hours. This dogma was recently challenged by stable isotope labeling studies with heavy water, which yielded estimates in excess of 3 days. To investigate this disparity, we generated new stable isotope labeling data in healthy adult subjects using both heavy water (n = 4) and deuterium-labeled glucose (n = 9), a compound with more rapid labeling kinetics. To interpret results, we developed a novel mechanistic model and applied it to previously published (n = 5) and newly generated data. We initially constrained the ratio of the blood neutrophil pool to the marrow precursor pool (ratio = 0.26; from published values). Analysis of heavy water data sets yielded turnover rates consistent with a short blood half-life, but parameters, particularly marrow transit time, were poorly defined. Analysis of glucose-labeling data yielded more precise estimates of half-life (0.79 ± 0.25 days; 19 hours) and marrow transit time (5.80 ± 0.42 days). Substitution of this marrow transit time in the heavy water analysis gave a better-defined blood half-life of 0.77 ± 0.14 days (18.5 hours), close to glucose-derived values. Allowing the ratio of blood neutrophils to mitotic neutrophil precursors (R) to vary yielded a best-fit value of 0.19. Reanalysis of the previously published model and data also revealed the origin of their long estimates for neutrophil half-life: an implicit assumption that R is very large, which is physiologically untenable. We conclude that stable isotope labeling in healthy humans is consistent with a blood neutrophil half-life of less than 1 day. © 2016 by The American Society of Hematology.

  5. Oxygen isotope fractionation between bird bone phosphate and drinking water

    NASA Astrophysics Data System (ADS)

    Amiot, Romain; Angst, Delphine; Legendre, Serge; Buffetaut, Eric; Fourel, François; Adolfssen, Jan; André, Aurore; Bojar, Ana Voica; Canoville, Aurore; Barral, Abel; Goedert, Jean; Halas, Stanislaw; Kusuhashi, Nao; Pestchevitskaya, Ekaterina; Rey, Kevin; Royer, Aurélien; Saraiva, Antônio Álamo Feitosa; Savary-Sismondini, Bérengère; Siméon, Jean-Luc; Touzeau, Alexandra; Zhou, Zhonghe; Lécuyer, Christophe

    2017-06-01

    Oxygen isotope compositions of bone phosphate (δ18Op) were measured in broiler chickens reared in 21 farms worldwide characterized by contrasted latitudes and local climates. These sedentary birds were raised during an approximately 3 to 4-month period, and local precipitation was the ultimate source of their drinking water. This sampling strategy allowed the relationship to be determined between the bone phosphate δ18Op values (from 9.8 to 22.5‰ V-SMOW) and the local rainfall δ18Ow values estimated from nearby IAEA/WMO stations (from -16.0 to -1.0‰ V-SMOW). Linear least-squares fitting of the data provided the following isotopic fractionation equation: δ18Ow = 1.119 (±0.040) δ18Op - 24.222 (±0.644); R² = 0.98. The δ18Op-δ18Ow couples of five extant mallard ducks, a common buzzard, a European herring gull, a common ostrich, and a greater rhea fall within the predicted range of the equation, indicating that the relationship established for extant chickens can also be applied to birds of various ecologies and body masses. Applied to published oxygen isotope compositions of Miocene and Pliocene penguins from Peru, this new equation computes estimates of local seawater composition similar to those previously calculated. Applied to the basal bird Confuciusornis from the Early Cretaceous of Northeastern China, our equation gives a slightly higher δ18Ow value compared to the previously estimated one, possibly as a result of lower body temperature. These data indicate that caution should be exercised when the relationship estimated for modern birds is applied to their basal counterparts that likely had a metabolism intermediate between that of their theropod dinosaur ancestors and that of advanced ornithurines.
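
    Because the abstract states the calibration explicitly, applying it is a one-line computation; the sample input value below is hypothetical.

```python
def rainfall_d18Ow(d18Op):
    """Convert bird bone phosphate delta-18-O (permil, V-SMOW) to drinking-water
    delta-18-O using the paper's least-squares fit (R^2 = 0.98)."""
    return 1.119 * d18Op - 24.222

# Example: a bone value in the middle of the reported 9.8-22.5 permil range.
print(f"{rainfall_d18Ow(16.0):.1f} permil")  # about -6.3 permil
```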

  6. Mercury's Solar Wind Interaction as Characterized by Magnetospheric Plasma Mantle Observations With MESSENGER

    NASA Astrophysics Data System (ADS)

    Jasinski, Jamie M.; Slavin, James A.; Raines, Jim M.; DiBraccio, Gina A.

    2017-12-01

    We analyze 94 traversals of Mercury's southern magnetospheric plasma mantle using data from the MESSENGER spacecraft. The mean and median proton number densities in the mantle are 1.5 and 1.3 cm⁻³, respectively. For sodium number density these values are 0.004 and 0.002 cm⁻³. Moderately higher densities are observed on the magnetospheric dusk side. The mantle supplies up to 1.5 × 10⁸ cm⁻² s⁻¹ and 0.8 × 10⁸ cm⁻² s⁻¹ of proton and sodium flux to the plasma sheet, respectively. We estimate the cross-magnetospheric electric potential from each observation and find a mean of 19 kV (standard deviation of 16 kV) and a median of 13 kV. This is an important result as it is lower than previous estimates and shows that Mercury's magnetosphere is at times not as highly driven by the solar wind as previously thought. Our values are comparable to the estimates for the ice giant planets, Uranus and Neptune, but lower than Earth's. The estimated potentials do have a very large range of values (1-74 kV), showing that Mercury's magnetosphere is highly dynamic. A correlation of the potential is found to the interplanetary magnetic field (IMF) magnitude, supporting evidence that dayside magnetic reconnection can occur at all shear angles at Mercury. But we also see that Mercury has an Earth-like magnetospheric response, favoring -BZ IMF orientation. We find evidence that -BX orientations in the IMF favor the southern cusp and southern mantle. This is in agreement with telescopic observations of exospheric emission, but in disagreement with modeling.

  7. Mercury's solar wind interaction as characterized by magnetospheric plasma mantle observations with MESSENGER

    NASA Astrophysics Data System (ADS)

    Jasinski, J. M.; Slavin, J. A.; Raines, J. M.; DiBraccio, G. A.

    2017-12-01

    We analyze 94 traversals of Mercury's magnetospheric plasma mantle using data from the MESSENGER spacecraft. The mean and median proton number densities in the mantle are 1.5 and 1.3 cm⁻³, respectively. For sodium number density these values are 0.004 and 0.002 cm⁻³. Moderately higher densities are observed on the magnetospheric dusk side. The mantle supplies up to 1.5 × 10⁸ cm⁻² s⁻¹ and 0.8 × 10⁸ cm⁻² s⁻¹ of proton and sodium flux to the plasma sheet, respectively. We estimate the cross-magnetospheric electric potential from each observation and find a mean of 19 kV (standard deviation of 16 kV) and a median of 13 kV. This is an important result as it is lower than previous estimates and shows that Mercury's magnetosphere is at times not as highly driven by the solar wind as previously thought. Our values are comparable to the estimates for the ice giant planets, Uranus and Neptune, but lower than Earth's. The estimated potentials do have a very large range of values (1-74 kV), showing that Mercury's magnetosphere is highly dynamic. A correlation of the potential is found to the interplanetary magnetic field (IMF) magnitude, supporting evidence that dayside magnetic reconnection can occur at all shear angles at Mercury. But we also see that Mercury has an Earth-like magnetospheric response, favoring -BZ IMF orientation. We find evidence that -BX orientations in the IMF favor the southern cusp and southern mantle. This is in agreement with telescopic observations of exospheric emission, but in disagreement with modeling.

  8. Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma

    NASA Astrophysics Data System (ADS)

    Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira

    2013-02-01

    A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach is based on a combination of a gray-world-assumption-based illuminant color estimation method and a method using color gamuts. The former method, which is one we had previously proposed, improved on the original method that hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimations are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property, which is that the average color of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas in the color space. The approach we propose in this paper combines our previous method and one using high chroma and low chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High chroma gamuts are used for adding appropriate colors to the original image and low chroma gamuts are used for narrowing down illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area in the color space, the illuminant colors are accurately estimated, with smaller estimation error average than that generated in the conventional method.
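
    For context, a minimal sketch of the original gray-world estimator that the abstract builds on (the synthetic image and illuminant below are hypothetical); the authors' refinement replaces the plain average with opponent-color selection and gamut constraints.

```python
import numpy as np

def gray_world_illuminant(img):
    """Original gray-world estimate: assume the scene average is achromatic, so
    the per-channel means give the illuminant color up to a scale factor."""
    means = img.reshape(-1, 3).mean(axis=0)
    return means / means.sum()

def white_balance(img, illum):
    """Von Kries-style correction: divide out the estimated illuminant."""
    gain = illum.mean() / illum
    return np.clip(img * gain, 0.0, 1.0)

# Synthetic test: a random reflectance scene under a reddish illuminant.
rng = np.random.default_rng(1)
scene = rng.uniform(0.2, 0.8, (64, 64, 3))
illum_true = np.array([1.0, 0.8, 0.6])
observed = scene * illum_true / illum_true.max()

est = gray_world_illuminant(observed)
print(est, illum_true / illum_true.sum())   # estimate vs. true chromaticity
corrected = white_balance(observed, est)
```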

  9. The mean intensity of radiation at 2 microns in the solar neighborhood

    NASA Technical Reports Server (NTRS)

    Jura, M.

    1979-01-01

    Consideration is given to the value of the mean intensity at 2 microns in the solar neighborhood, and it is found that it is likely to be a factor of four greater than previously estimated on theoretical grounds. It is noted, however, that the estimate does agree with a reasonable extrapolation of the results of the survey of the Galactic plane by the Japanese group. It is concluded that the mean intensity in the solar neighborhood therefore probably peaks somewhat longward of 1 micron, and that this result is important for understanding the temperature of interstellar dust and the intensity of the far infrared background. This means specifically that dark clouds probably emit significantly more far infrared radiation than previously predicted.

  10. Color-magnitude diagrams for six metal-rich, low-latitude globular clusters

    NASA Technical Reports Server (NTRS)

    Armandroff, Taft E.

    1988-01-01

    Colors and magnitudes for stars on CCD frames for six metal-rich, low-latitude, previously unstudied globular clusters and one well-studied, metal-rich cluster (47 Tuc) have been derived and color-magnitude diagrams have been constructed. The photometry for stars in 47 Tuc is in good agreement with previous studies, while the V magnitudes of the horizontal-branch stars in the six program clusters do not agree with estimates based on secondary methods. The distances to these clusters are different from prior estimates. Reddening values are derived for each program cluster. The horizontal branches of the program clusters all appear to lie entirely redwards of the red edge of the instability strip, as is normal for their metallicities.

  11. Universal properties of knotted polymer rings.

    PubMed

    Baiesi, M; Orlandini, E

    2012-09-01

    By performing Monte Carlo sampling of N-step self-avoiding polygons embedded on different Bravais lattices we explore the robustness of universality in the entropic, metric, and geometrical properties of knotted polymer rings. In particular, by simulating polygons with N up to 10⁵ we furnish a sharp estimate of the asymptotic values of the knot probability ratios and show their independence of the lattice type. This universal feature was previously suggested, although with different estimates of the asymptotic values. In addition, we show that the scaling behavior of the mean-squared radius of gyration of polygons depends on their knot type only through its correction to scaling. Finally, as a measure of the geometrical self-entanglement of the self-avoiding polygons we consider the standard deviation of the writhe distribution and estimate its power-law behavior in the large-N limit. The estimates of the power exponent depend neither on the lattice nor on the knot type, strongly supporting an extension of the universality property to some features of the geometrical entanglement.

  12. Estimation of the Mean Axon Diameter and Intra-axonal Space Volume Fraction of the Human Corpus Callosum: Diffusion q-space Imaging with Low q-values.

    PubMed

    Suzuki, Yuriko; Hori, Masaaki; Kamiya, Kouhei; Fukunaga, Issei; Aoki, Shigeki; VAN Cauteren, Marc

    2016-01-01

    Q-space imaging (QSI) is a diffusion-weighted imaging (DWI) technique that enables investigation of tissue microstructure. However, for sufficient displacement resolution to measure the microstructure, QSI requires high q-values that are usually difficult to achieve with a clinical scanner. The recently introduced "low q-value method" fits the echo attenuation to only low q-values to extract the root mean square displacement. We investigated the clinical feasibility of the low q-value method for estimating the microstructure of the human corpus callosum using a 3.0-tesla clinical scanner within a clinically feasible scan time. We performed a simulation to explore the acceptable range of maximum q-values for the low q-value method. We simulated echo attenuations caused by restricted diffusion in the intra-axonal space (IAS) and hindered diffusion in the extra-axonal space (EAS) assuming 100,000 cylinders with various diameters, and we estimated mean axon diameter, IAS volume fraction, and EAS diffusivity by fitting echo attenuations with different maximum q-values. Furthermore, we scanned the corpus callosum of 7 healthy volunteers and estimated the mean axon diameter and IAS volume fraction. Good agreement between estimated and defined values in the simulation study with maximum q-values of 700 and 800 cm⁻¹ suggested that the maximum q-value used in the in vivo experiment, 737 cm⁻¹, was reasonable. In the in vivo experiment, the mean axon diameter was larger in the body of the corpus callosum and smaller in the genu and splenium; this anterior-to-posterior trend is consistent with previously reported histology, although our mean axon diameters seem larger. On the other hand, we found an opposite anterior-to-posterior trend, with a high IAS volume fraction in the genu and splenium and a lower fraction in the body, which is similar to the fiber density reported in the histology study. The low q-value method may provide insights into tissue microstructure using a 3T clinical scanner within a clinically feasible scan time.

  13. [Dual process in large number estimation under uncertainty].

    PubMed

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies of number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process in large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to a problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such a deliberative System 2 process on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  14. Toward a consistent model for strain accrual and release for the New Madrid Seismic Zone, central United States

    USGS Publications Warehouse

    Hough, S.E.; Page, M.

    2011-01-01

    At the heart of the conundrum of seismogenesis in the New Madrid Seismic Zone is the apparently substantial discrepancy between low strain rate and high recent seismic moment release. In this study we revisit the magnitudes of the four principal 1811–1812 earthquakes using intensity values determined from individual assessments from four experts. Using these values and the grid search method of Bakun and Wentworth (1997), we estimate magnitudes around 7.0 for all four events, values that are significantly lower than previously published magnitude estimates based on macroseismic intensities. We further show that the strain rate predicted from postglacial rebound is sufficient to produce a sequence with the moment release of one Mmax 6.8 event every 500 years, a rate that is much lower than previous estimates of late Holocene moment release. However, Mw 6.8 is at the low end of the uncertainty range inferred from analysis of intensities for the largest 1811–1812 event. We show that Mw 6.8 is also a reasonable value for the largest main shock given a plausible rupture scenario. One can also construct a range of consistent models that permit a somewhat higher Mmax, with a longer average recurrence rate. It is thus possible to reconcile predicted strain and seismic moment release rates with alternative models: one in which 1811–1812 sequences occur every 500 years, with the largest events being Mmax ∼6.8, or one in which sequences occur, on average, less frequently, with Mmax ∼7.0. Both models predict that the late Holocene rate of activity will continue for the next few to 10 thousand years.

  15. Space shuttle engineering and operations support. Orbiter to spacelab electrical power interface. Avionics system engineering

    NASA Technical Reports Server (NTRS)

    Emmons, T. E.

    1976-01-01

    The results are presented of an investigation of the factors which affect the determination of Spacelab (S/L) minimum interface main dc voltage and available power from the orbiter. The dedicated fuel cell mode of powering the S/L is examined along with the minimum S/L interface voltage and available power using the predicted fuel cell power plant performance curves. The values obtained are slightly lower than current estimates and represent a more marginal operating condition than previously estimated.

  16. Determination of HART I Blade Structural Properties by Laboratory Testing

    NASA Technical Reports Server (NTRS)

    Jung, Sung N.; Lau, Benton H.

    2012-01-01

    The structural properties of Higher-harmonic Aeroacoustic Rotor Test (HART I) blades were measured using the original set of blades tested in the German-Dutch Wind Tunnel (DNW) in 1994. The measurements include bending and torsion stiffness, geometric offsets, and mass and inertia properties of the blade. The measured properties were compared to the estimated values obtained initially from the blade manufacturer. The previously estimated blade properties showed consistently higher stiffness, up to 30 percent for the flap bending in the blade inboard root section.

  17. Entrance radiation doses during paediatric cardiac catheterisations performed for diagnosis or the treatment of congenital heart disease.

    PubMed

    Papadopoulou, D; Yakoumakis, Em; Sandilos, P; Thanopoulos, V; Makri, Tr; Gialousis, G; Houndas, D; Yakoumakis, N; Georgiou, Ev

    2005-01-01

    The purpose of this study was to estimate the radiation exposure of children during cardiac catheterisations for the diagnosis or treatment of congenital heart disease. Radiation doses were estimated for 45 children aged from 1 d to 13 y. Thermoluminescent dosemeters (TLDs) were used to estimate the posterior entrance dose (DP), the lateral entrance dose (DLAT), the thyroid dose and the gonad dose. A dose-area product (DAP) meter was also attached externally to the tube of the angiographic system and gave a direct value in mGy cm² for each procedure. Posterior and lateral entrance dose values during cardiac catheterisations ranged from 1 to 197 mGy and from 1.1 to 250.3 mGy, respectively. Radiation exposure to the thyroid and the gonads ranged from 0.3 to 8.4 mGy and from 0.1 to 0.7 mGy, respectively. Finally, the DAP meter values ranged between 360 and 33,200 mGy cm². Radiation doses measured in this study are comparable with those reported in previous studies. Moreover, a strong correlation was found between the DAP values and the entrance radiation doses measured with TLDs.

  18. Tmax Determined Using a Bayesian Estimation Deconvolution Algorithm Applied to Bolus Tracking Perfusion Imaging: A Digital Phantom Validation Study.

    PubMed

    Uwano, Ikuko; Sasaki, Makoto; Kudo, Kohsuke; Boutelier, Timothé; Kameda, Hiroyuki; Mori, Futoshi; Yamashita, Fumio

    2017-01-10

    The Bayesian estimation algorithm improves the precision of bolus tracking perfusion imaging. However, this algorithm cannot directly calculate Tmax, the time scale widely used to identify ischemic penumbra, because Tmax is a non-physiological, artificial index that reflects the tracer arrival delay (TD) and other parameters. We calculated Tmax from the TD and mean transit time (MTT) obtained by the Bayesian algorithm and determined its accuracy in comparison with Tmax obtained by singular value decomposition (SVD) algorithms. The TD and MTT maps were generated by the Bayesian algorithm applied to digital phantoms with time-concentration curves that reflected a range of values for various perfusion metrics using a global arterial input function. Tmax was calculated from the TD and MTT using constants obtained by a linear least-squares fit to Tmax obtained from the two SVD algorithms that showed the best benchmarks in a previous study. Correlations between the Tmax values obtained by the Bayesian and SVD methods were examined. The Bayesian algorithm yielded accurate TD and MTT values relative to the true values of the digital phantom. Tmax calculated from the TD and MTT values with the least-squares fit constants showed excellent correlation (Pearson's correlation coefficient = 0.99) and agreement (intraclass correlation coefficient = 0.99) with Tmax obtained from SVD algorithms. Quantitative analyses of Tmax values calculated from Bayesian-estimation algorithm-derived TD and MTT from a digital phantom correlated and agreed well with Tmax values determined using SVD algorithms.
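
    A sketch of the final step under the stated linear least-squares construction; the exact functional form and constants in the paper may differ, so the affine model and all numbers here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-voxel perfusion metrics standing in for the Bayesian outputs.
TD = rng.uniform(0.0, 6.0, 100)     # tracer arrival delay (s)
MTT = rng.uniform(2.0, 12.0, 100)   # mean transit time (s)

# Synthetic "reference" Tmax, e.g. from an SVD algorithm, used to fit constants.
tmax_svd = 0.9 * TD + 0.45 * MTT + rng.normal(0.0, 0.2, 100)

# Linear least-squares fit Tmax ~ a*TD + b*MTT + c (the affine form is assumed).
A = np.column_stack([TD, MTT, np.ones_like(TD)])
(a, b, c), *_ = np.linalg.lstsq(A, tmax_svd, rcond=None)

tmax_bayes = a * TD + b * MTT + c   # Tmax derived from Bayesian TD and MTT
print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}")
```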

  19. Computing the Deflection of the Vertical for Improving Aerial Surveys: A Comparison between EGM2008 and ITALGEO05 Estimates.

    PubMed

    Barzaghi, Riccardo; Carrion, Daniela; Pepe, Massimiliano; Prezioso, Giuseppina

    2016-07-26

    Recent studies on the influence of the anomalous gravity field in GNSS/INS applications have shown that neglecting the impact of the deflection of vertical in aerial surveys induces horizontal and vertical errors in the measurement of an object that is part of the observed scene; these errors can vary from a few tens of centimetres to over one meter. The works reported in the literature refer to vertical deflection values based on global geopotential model estimates. In this paper we compared this approach with the one based on local gravity data and collocation methods. In particular, denoted by ξ and η, the two mutually-perpendicular components of the deflection of the vertical vector (in the north and east directions, respectively), their values were computed by collocation in the framework of the Remove-Compute-Restore technique, applied to the gravity database used for estimating the ITALGEO05 geoid. Following this approach, these values have been computed at different altitudes that are relevant in aerial surveys. The (ξ, η) values were then also estimated using the high degree EGM2008 global geopotential model and compared with those obtained in the previous computation. The analysis of the differences between the two estimates has shown that the (ξ, η) global geopotential model estimate can be reliably used in aerial navigation applications that require the use of sensors connected to a GNSS/INS system only above a given height (e.g., 3000 m in this paper) that must be defined by simulations.

  20. Computing the Deflection of the Vertical for Improving Aerial Surveys: A Comparison between EGM2008 and ITALGEO05 Estimates

    PubMed Central

    Barzaghi, Riccardo; Carrion, Daniela; Pepe, Massimiliano; Prezioso, Giuseppina

    2016-01-01

    Recent studies on the influence of the anomalous gravity field in GNSS/INS applications have shown that neglecting the impact of the deflection of vertical in aerial surveys induces horizontal and vertical errors in the measurement of an object that is part of the observed scene; these errors can vary from a few tens of centimetres to over one meter. The works reported in the literature refer to vertical deflection values based on global geopotential model estimates. In this paper we compared this approach with the one based on local gravity data and collocation methods. In particular, denoted by ξ and η, the two mutually-perpendicular components of the deflection of the vertical vector (in the north and east directions, respectively), their values were computed by collocation in the framework of the Remove-Compute-Restore technique, applied to the gravity database used for estimating the ITALGEO05 geoid. Following this approach, these values have been computed at different altitudes that are relevant in aerial surveys. The (ξ, η) values were then also estimated using the high degree EGM2008 global geopotential model and compared with those obtained in the previous computation. The analysis of the differences between the two estimates has shown that the (ξ, η) global geopotential model estimate can be reliably used in aerial navigation applications that require the use of sensors connected to a GNSS/INS system only above a given height (e.g., 3000 m in this paper) that must be defined by simulations. PMID:27472333

  1. Estimation of Canopy Clumping Index From MISR and MODIS Sensors Using the Normalized Difference Hotspot and Darkspot (NDHD) Method: The Influence of BRDF Models and Solar Zenith Angle

    NASA Astrophysics Data System (ADS)

    Wei, S.; Fang, H.

    2016-12-01

    The clumping index (CI) describes the spatial distribution pattern of foliage, and is a critical parameter used to characterize terrestrial ecosystems and model land-surface processes. Global and regional scale CI maps have been generated from the POLDER, MODIS, and MISR sensors in previous studies, based on an empirical relationship with the normalized difference between hotspot and darkspot (NDHD) index. However, the hotspot and darkspot values, and hence the CI values, can be considerably different for different bidirectional reflectance distribution function (BRDF) models and solar zenith angles (SZA). In this study, we evaluated the effects of different configurations of BRDF models and SZA values on CI estimation using the NDHD method. CI maps estimated from MISR and MODIS were compared with reference data at the VALERI sites. Results show that for moderately to least clumped vegetation (CI > 0.5), CIs retrieved with the observational SZA agree well with field values, while SZA = 0° results in underestimates and SZA = 60° results in overestimates. For highly clumped (CI < 0.5) and sparsely vegetated areas (FCOVER < 25%), the Ross-Li model with 60° SZA is recommended for CI estimation. The suitable NDHD configuration was further used to estimate a 15-year time series of CI from MODIS BRDF data. The time series CI shows a reasonable seasonal trajectory, and varies consistently with the MODIS leaf area index (LAI). This study enables better usage of the NDHD method for CI estimation, and can be a useful reference for research on CI validation.

  2. The Impact of Alzheimer's Disease on the Chinese Economy.

    PubMed

    Keogh-Brown, Marcus R; Jensen, Henning Tarp; Arrighi, H Michael; Smith, Richard D

    2016-02-01

    Recent increases in life expectancy may greatly expand future Alzheimer's Disease (AD) burdens. China's demographic profile, aging workforce and predicted increasing burden of AD-related care make its economy vulnerable to AD impacts. Previous economic estimates of AD predominantly focus on health system burdens and omit wider whole-economy effects, potentially underestimating the full economic benefit of effective treatment. AD-related prevalence, morbidity and mortality for 2011-2050 were simulated and were, together with associated caregiver time and costs, imposed on a dynamic Computable General Equilibrium model of the Chinese economy. Both economic and non-economic outcomes were analyzed. Simulated Chinese AD prevalence quadrupled during 2011-50 from 6-28 million. The cumulative discounted value of eliminating AD equates to China's 2012 GDP (US$8 trillion), and the annual predicted real value approaches US AD cost-of-illness (COI) estimates, exceeding US$1 trillion by 2050 (2011-prices). Lost labor contributes 62% of macroeconomic impacts. Only 10% derives from informal care, challenging previous COI-estimates of 56%. Health and macroeconomic models predict an unfolding 2011-2050 Chinese AD epidemic with serious macroeconomic consequences. Significant investment in research and development (medical and non-medical) is warranted and international researchers and national authorities should therefore target development of effective AD treatment and prevention strategies.

  3. The Impact of Alzheimer's Disease on the Chinese Economy

    PubMed Central

    Keogh-Brown, Marcus R.; Jensen, Henning Tarp; Arrighi, H. Michael; Smith, Richard D.

    2015-01-01

    Background Recent increases in life expectancy may greatly expand future Alzheimer's Disease (AD) burdens. China's demographic profile, aging workforce and predicted increasing burden of AD-related care make its economy vulnerable to AD impacts. Previous economic estimates of AD predominantly focus on health system burdens and omit wider whole-economy effects, potentially underestimating the full economic benefit of effective treatment. Methods AD-related prevalence, morbidity and mortality for 2011–2050 were simulated and were, together with associated caregiver time and costs, imposed on a dynamic Computable General Equilibrium model of the Chinese economy. Both economic and non-economic outcomes were analyzed. Findings Simulated Chinese AD prevalence quadrupled during 2011–50 from 6–28 million. The cumulative discounted value of eliminating AD equates to China's 2012 GDP (US$8 trillion), and the annual predicted real value approaches US AD cost-of-illness (COI) estimates, exceeding US$1 trillion by 2050 (2011-prices). Lost labor contributes 62% of macroeconomic impacts. Only 10% derives from informal care, challenging previous COI-estimates of 56%. Interpretation Health and macroeconomic models predict an unfolding 2011–2050 Chinese AD epidemic with serious macroeconomic consequences. Significant investment in research and development (medical and non-medical) is warranted and international researchers and national authorities should therefore target development of effective AD treatment and prevention strategies. PMID:26981556

  4. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.

    PubMed

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics.

  5. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES

    PubMed Central

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    2016-01-01

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics. PMID:26806986

  6. Hamaker constants of iron oxide nanoparticles.

    PubMed

    Faure, Bertrand; Salazar-Alvarez, German; Bergström, Lennart

    2011-07-19

    The Hamaker constants for iron oxide nanoparticles in various media have been calculated using Lifshitz theory. Expressions for the dielectric responses of three iron oxide phases (magnetite, maghemite, and hematite) were derived from recently published optical data. The nonretarded Hamaker constants for the iron oxide nanoparticles interacting across water, A(1w1) = 33 - 39 zJ, correlate relatively well with previous reports, whereas the calculated values in nonpolar solvents (hexane and toluene), A(131) = 9 - 29 zJ, are much lower than the previous estimates, particularly for magnetite. The magnitude of van der Waals interactions varies significantly between the studied phases (magnetite < maghemite < hematite), which highlights the importance of a thorough characterization of the particles. The contribution of magnetic dispersion interactions for particle sizes in the superparamagnetic regime was found to be negligible. Previous conjectures related to colloidal stability and self-assembly have been revisited on the basis of the new Lifshitz values of the Hamaker constants.

  7. State and parameter estimation of the heat shock response system using Kalman and particle filters.

    PubMed

    Liu, Xin; Niranjan, Mahesan

    2012-06-01

    Traditional models of systems biology describe dynamic biological phenomena as solutions to ordinary differential equations, which, when parameters in them are set to correct values, faithfully mimic observations. Often parameter values are tweaked by hand until desired results are achieved, or computed from biochemical experiments carried out in vitro. Of interest in this article, is the use of probabilistic modelling tools with which parameters and unobserved variables, modelled as hidden states, can be estimated from limited noisy observations of parts of a dynamical system. Here we focus on sequential filtering methods and take a detailed look at the capabilities of three members of this family: (i) extended Kalman filter (EKF), (ii) unscented Kalman filter (UKF) and (iii) the particle filter, in estimating parameters and unobserved states of cellular response to sudden temperature elevation of the bacterium Escherichia coli. While previous literature has studied this system with the EKF, we show that parameter estimation is only possible with this method when the initial guesses are sufficiently close to the true values. The same turns out to be true for the UKF. In this thorough empirical exploration, we show that the non-parametric method of particle filtering is able to reliably estimate parameters and states, converging from initial distributions relatively far away from the underlying true values. Software implementation of the three filters on this problem can be freely downloaded from http://users.ecs.soton.ac.uk/mn/HeatShock
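
    A minimal bootstrap particle filter on a toy one-dimensional decay model, standing in for the heat shock ODE system (which is not reproduced here); the unknown parameter k is augmented into the state and jittered to avoid sample impoverishment. All dynamics and noise levels below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the dynamical system: exponential decay with unknown rate k.
k_true, dt, T, obs_sd = 0.5, 0.1, 100, 0.3
x = np.empty(T)
x[0] = 5.0
for t in range(1, T):
    x[t] = x[t - 1] - k_true * x[t - 1] * dt
y = x + rng.normal(0.0, obs_sd, T)

# Bootstrap particle filter: augment the unknown parameter k into the state.
N = 2000
xs = rng.normal(5.0, 1.0, N)              # state particles
ks = rng.uniform(0.05, 2.0, N)            # parameter particles, far from k_true
for t in range(1, T):
    xs = xs - ks * xs * dt + rng.normal(0.0, 0.02, N)          # propagate states
    ks = np.clip(ks + rng.normal(0.0, 0.005, N), 1e-3, None)   # jitter parameters
    w = np.exp(-0.5 * ((y[t] - xs) / obs_sd) ** 2) + 1e-300    # likelihood weights
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                           # multinomial resampling
    xs, ks = xs[idx], ks[idx]

print(f"posterior mean k = {ks.mean():.3f} (true value {k_true})")
```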

  8. Differential processing of the two subunits of human choriogonadotropin (hCG) by granulosa cells. I. Preparation and characterization of selectively labeled hCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landefeld, T.D.; Byrne, M.D.; Campbell, K.L.

    1981-12-01

    The alpha- and beta-subunits of hCG were radioiodinated and recombined with unlabeled complementary subunits. The resultant recombined hormones, selectively labeled in either the alpha- or beta-subunit, were separated from unrecombined subunit by sodium dodecyl sulfate-polyacrylamide gel electrophoresis, extracted with Triton X-100, and characterized by binding analysis. The estimates of maximum binding (active fraction) of the two resultant selectively labeled, recombined hCG preparations, determined with excess receptor, were 0.41 and 0.59. These values are similar to those obtained when hCG is labeled as an intact molecule. The specific activities of the recombined preparations were estimated by four different methods, and the resulting values were used in combination with the active fraction estimates to determine the concentrations of active free and bound hormone. Binding analyses were run using varying concentrations of both labeled and unlabeled hormone. Estimates of the equilibrium dissociation binding constant (Kd) and receptor capacity were calculated in three different ways. The mean estimates of capacity (52.6 and 52.7 fmol/mg tissue) and Kd (66.6 and 65.7 pM) for the two preparations were indistinguishable. Additionally, these values were similar to values reported previously for hCG radioiodinated as an intact molecule. The availability of well characterized, selectively labeled hCG preparations provides new tools for studying the mechanism of action and the target cell processing of the subunits of this hormone.

  9. SCS-CN parameter determination using rainfall-runoff data in heterogeneous watersheds. The two-CN system approach

    NASA Astrophysics Data System (ADS)

    Soulis, K. X.; Valiantzas, J. D.

    2011-10-01

    The Soil Conservation Service Curve Number (SCS-CN) approach is widely used as a simple method for predicting direct runoff volume for a given rainfall event. CN values can be selected from tables, but it is more accurate to estimate the CN value from measured rainfall-runoff data (assumed available) in a watershed. Previous researchers indicated that CN values calculated from measured rainfall-runoff data vary systematically with rainfall depth, and they suggested determining a single asymptotic CN value observed for very high rainfall depths to characterize a watershed's runoff response. In this paper, the novel hypothesis that the observed correlation between the calculated CN value and the rainfall depth in a watershed reflects the effect of the inevitable presence of soil-cover complex spatial variability along watersheds is tested. Based on this hypothesis, the simplified concept of a two-CN heterogeneous system is introduced to model the observed CN-rainfall variation by reducing the CN spatial variability into two classes. The behavior of the CN-rainfall function produced by the proposed two-CN system concept is approached theoretically and analyzed systematically, and it is found to be similar to the variation observed in natural watersheds. Synthetic data tests, natural watershed examples, and a detailed study of two natural experimental watersheds with known spatial heterogeneity characteristics were used to evaluate the method. The results indicate that the determination of CN values from rainfall-runoff data using the proposed two-CN system approach provides reasonable accuracy and outperforms the previous method based on the determination of a single asymptotic CN value. Although the suggested method increases the number of unknown parameters to three (instead of one), a clear physical reasoning for them is presented.
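
    A sketch of the two-CN idea using the standard SCS-CN runoff equation (depths in mm; the CN values and area fraction below are hypothetical): the watershed is split into two CN classes and event runoff is their area-weighted sum, which reproduces the rainfall-dependent composite-CN behavior described above.

```python
def scs_runoff(P, CN, lam=0.2):
    """Standard SCS-CN direct runoff depth (mm): S = 25400/CN - 254,
    Q = (P - lam*S)**2 / (P + (1 - lam)*S) when rainfall P exceeds the
    initial abstraction lam*S, else zero."""
    S = 25400.0 / CN - 254.0
    Ia = lam * S
    return (P - Ia) ** 2 / (P + S - Ia) if P > Ia else 0.0

def two_cn_runoff(P, CN1, CN2, a1):
    """Two-CN heterogeneous system: a fraction a1 of the watershed responds
    with CN1 and the remainder with CN2; runoff is the area-weighted sum."""
    return a1 * scs_runoff(P, CN1) + (1.0 - a1) * scs_runoff(P, CN2)

# Hypothetical watershed: 30% at CN = 90, 70% at CN = 60. The composite CN
# implied by the total runoff drifts with rainfall depth, as observed in data.
for P in (20.0, 50.0, 100.0, 150.0):   # rainfall depths (mm)
    print(f"P = {P:5.0f} mm -> Q = {two_cn_runoff(P, 90, 60, 0.3):6.1f} mm")
```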

  10. Does Sentinel multi sensor data offer synergy in Improving Accuracy of Aboveground Biomass Estimate of Dense Tropical Forest? - Utility of Decision Tree Based Machine Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Ghosh, S. M.; Behera, M. D.

    2017-12-01

    Forest aboveground biomass (AGB) is an important factor in the preparation of global policy decisions to tackle the impact of climate change. Several previous studies have concluded that remote sensing methods are more suitable for estimating forest biomass on a regional scale. Among the available remote sensing data and methods, Synthetic Aperture Radar (SAR) data in combination with decision tree based machine learning algorithms have shown better promise in estimating higher biomass values. Few studies, however, have estimated the biomass of dense Indian tropical forests with high biomass density. In this study, aboveground biomass was estimated for two major tree species, Sal (Shorea robusta) and Teak (Tectona grandis), of Katerniaghat Wildlife Sanctuary, a tropical forest situated in northern India. Biomass was estimated by combining C-band SAR data from the Sentinel-1A satellite, vegetation indices produced using Sentinel-2A data, and ground inventory plots. Along with SAR backscatter values, SAR texture images were also used as input, as earlier studies had found that image texture correlates with vegetation biomass. Decision tree based nonlinear machine learning algorithms were used in place of parametric regression models to establish relationships between field-measured values and remotely sensed parameters. A random forest model with a combination of vegetation indices and SAR backscatter as predictor variables shows the best result for the Sal forest, with a coefficient of determination of 0.71 and an RMSE of 105.027 t/ha. For the Teak forest, the best result is obtained with the same predictor combination but using a stochastic gradient boosted model, with a coefficient of determination of 0.6 and an RMSE of 79.45 t/ha. These results are mostly better than those of other studies done for similar forests. This study shows that Sentinel series satellite data have exceptional capabilities for estimating dense forest AGB, and machine learning algorithms are a better means to do so than parametric regression models.
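
    A sketch of the modeling step with synthetic plot data (all predictor columns and the biomass relation are made up); it shows the random forest regression and cross-validated R²/RMSE reporting pattern described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Synthetic plot-level predictors: SAR backscatter (dB), a texture metric,
# and an optical vegetation index -- stand-ins for Sentinel-1/-2 inputs.
n = 200
X = np.column_stack([
    rng.uniform(-15, -5, n),     # VV backscatter
    rng.uniform(-22, -10, n),    # VH backscatter
    rng.uniform(0.0, 1.0, n),    # GLCM-style texture measure
    rng.uniform(0.4, 0.9, n),    # NDVI
])
agb = 400 + 25 * X[:, 1] + 150 * X[:, 3] + rng.normal(0, 40, n)  # t/ha, synthetic

rf = RandomForestRegressor(n_estimators=200, random_state=0)
pred = cross_val_predict(rf, X, agb, cv=5)
rmse = mean_squared_error(agb, pred) ** 0.5
print(f"R2 = {r2_score(agb, pred):.2f}, RMSE = {rmse:.1f} t/ha")
```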

  11. Estimates of Flow Duration, Mean Flow, and Peak-Discharge Frequency Values for Kansas Stream Locations

    USGS Publications Warehouse

    Perry, Charles A.; Wolock, David M.; Artman, Joshua C.

    2004-01-01

    Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled flow stream locations. The contributing-drainage areas of 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States that had flow uncontrolled by Federal reservoirs and used in the regression analyses ranged from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction range of 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled flow streams on the 1999 Kansas Surface Water Register. Flow durations, mean flow, and peak-discharge frequency values determined at available gaging stations were used to interpolate the regression-estimated flows for the stream locations where available. Streamflow statistics for locations that had uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.
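
    A sketch of the log-space regression idea on synthetic gaged watersheds (all characteristics and the underlying relation are made up, and the Tobit censoring step is omitted): basin characteristics are log-transformed, fit by least squares, and back-transformed for prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "gaged" watersheds: drainage area (sq mi), mean annual precipitation
# (in), mean basin permeability (in/hr), mean basin slope (dimensionless).
n = 149
area = rng.uniform(2, 12000, n)
precip = rng.uniform(15, 45, n)
perm = rng.uniform(0.1, 5.0, n)
slope = rng.uniform(0.001, 0.05, n)
q50 = 0.05 * area**0.95 * (precip / 30) ** 2.5 * rng.lognormal(0.0, 0.3, n)

# Log-transform everything and fit a linear model, mirroring the report's use
# of logarithmic transformations to linearize the flow/characteristic relations.
X = np.column_stack([np.log10(area), np.log10(precip),
                     np.log10(perm), np.log10(slope), np.ones(n)])
coef, *_ = np.linalg.lstsq(X, np.log10(q50), rcond=None)

def predict_q50(area, precip, perm, slope):
    """Back-transform the log-space regression to a flow estimate."""
    x = np.array([np.log10(area), np.log10(precip),
                  np.log10(perm), np.log10(slope), 1.0])
    return 10.0 ** (x @ coef)

print(f"{predict_q50(500, 30, 1.0, 0.01):.1f}")
```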

  12. Variance to mean ratio, R(t), for poisson processes on phylogenetic trees.

    PubMed

    Goldman, N

    1994-09-01

    The ratio of expected variance to mean, R(t), of numbers of DNA base substitutions for contemporary sequences related by a "star" phylogeny is widely seen as a measure of the adherence of the sequences' evolution to a Poisson process with a molecular clock, as predicted by the "neutral theory" of molecular evolution under certain conditions. A number of estimators of R(t) have been proposed, all predicted to have mean 1 and distributions based on the χ². Various genes have previously been analyzed and found to have values of R(t) far in excess of 1, calling into question important aspects of the neutral theory. In this paper, I use Monte Carlo simulation to show that the previously suggested means and distributions of estimators of R(t) are highly inaccurate. The analysis is applied to star phylogenies and to general phylogenetic trees, and well-known gene sequences are reanalyzed. For star phylogenies the results show that Kimura's estimators ("The Neutral Theory of Molecular Evolution," Cambridge Univ. Press, Cambridge, 1983) are unsatisfactory for statistical testing of R(t), but confirm the accuracy of Bulmer's correction factor (Genetics 123: 615-619, 1989). For all three nonstar phylogenies studied, attained values of all three estimators of R(t), although larger than 1, are within their true confidence limits under simple Poisson process models. This shows that lineage effects can be responsible for high estimates of R(t), restoring some limited confidence in the molecular clock and showing that the distinction between lineage and molecular clock effects is vital. (ABSTRACT TRUNCATED AT 250 WORDS)
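
    A small Monte Carlo sketch in the spirit of the paper (the lineage count and substitution rate are hypothetical): simulate Poisson counts on a star phylogeny, form R(t), and compare its simulated quantiles against the χ²-based approximation whose accuracy the paper questions.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Star phylogeny with m lineages: substitution counts are i.i.d. Poisson with
# common mean rate*t under a strict molecular clock.
m, mean_subs, n_rep = 20, 8.0, 100_000
counts = rng.poisson(mean_subs, size=(n_rep, m))

# Index of dispersion R(t) = sample variance / sample mean across lineages.
r = counts.var(axis=1, ddof=1) / counts.mean(axis=1)

# Compare the simulated 95th percentile with the chi-squared approximation
# R(t) ~ chi2(m-1)/(m-1) that is often assumed for this estimator.
print(f"simulated 95th percentile: {np.percentile(r, 95):.3f}")
print(f"chi2 approximation:        {chi2.ppf(0.95, m - 1) / (m - 1):.3f}")
```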

  13. Attenuation and source properties at the Coso Geothermal area, California

    USGS Publications Warehouse

    Hough, S.E.; Lees, J.M.; Monastero, F.

    1999-01-01

    We use a multiple-empirical Green's function method to determine source properties of small (M -0.4 to 1.3) earthquakes and P- and S-wave attenuation at the Coso Geothermal Field, California. Source properties of a previously identified set of clustered events from the Coso geothermal region are first analyzed using an empirical Green's function (EGF) method. Stress-drop values of at least 0.5-1 MPa are inferred for all of the events; in many cases, the corner frequency is outside the usable bandwidth, and the stress drop can only be constrained as being higher than 3 MPa. P- and S-wave stress-drop estimates are identical within the resolution limits of the data. These results are indistinguishable from numerous EGF studies of M 2-5 earthquakes, suggesting a similarity in rupture processes that extends to events that are both tiny and induced, providing further support for Byerlee's Law. Whole-path Q estimates for P and S waves are determined using the multiple-empirical Green's function (MEGF) method of Hough (1997), whereby spectra from clusters of colocated events at a given station are inverted for a single attenuation parameter, κ, with source parameters constrained from EGF analysis. The κ estimates, which we infer to be resolved to within 0.01 sec or better, exhibit almost as much scatter as a function of hypocentral distance as do values from previous single-spectrum studies, for which much higher uncertainties in individual κ estimates are expected. The variability in κ estimates determined here therefore suggests real lateral variability in Q structure. Although the ray-path coverage is too sparse to yield a complete three-dimensional attenuation tomographic image, we invert the inferred κ values for three-dimensional structure using a damped least-squares method, and the results do reveal significant lateral variability in Q structure. The inferred attenuation variability corresponds to the heat-flow variations within the geothermal region. A central low-Q region corresponds well with the central high-heat-flow region; additional detailed structure is also suggested.

  14. Status of linear boundary-layer stability and the e to the nth method, with emphasis on swept-wing applications

    NASA Technical Reports Server (NTRS)

    Hefner, J. N.; Bushnell, D. M.

    1980-01-01

    The state of the art in the application of linear stability theory and the e to the nth power method for transition prediction and laminar flow control design is summarized, with analyses of previously published low-disturbance, swept-wing data presented. For any set of transition data with similar stream disturbance levels and spectra, the e to the nth power method for estimating the beginning of transition works reasonably well; however, the value of n can vary significantly, depending upon variations in the disturbance field or receptivity. Where disturbance levels are high, the values of n are appreciably below the usual average value of 9 to 10 obtained for relatively low disturbance levels. It is recommended that the design of laminar flow control systems be based on conservative estimates of n and that, in considering the values of n obtained from different analytical approaches or investigations, the designer explore the various assumptions which entered into the analyses.

  15. Hemograms for and nutritional condition of migrant bald eagles tested for exposure to lead.

    PubMed

    Miller, M J; Wayland, M E; Bortolotti, G R

    2001-07-01

    Plasma proteins, hematocrit, and differential blood counts were examined and nutritional condition was estimated for bald eagles (Haliaeetus leucocephalus) trapped (n = 66) during autumn migration, 1994-95, at Galloway Bay (Saskatchewan, Canada), for the purpose of estimating the prevalence of exposure to lead. Sex and age differences in hematocrit and plasma proteins were not observed; however, female eagles exhibited larger median absolute heterophil counts than males. Hematologic values were similar to those previously reported from eagles in captivity. Departures from the hematological values expected for a healthy population of eagles were not observed in birds with elevated levels of blood lead (≥ 0.200 microg/ml). Similarly, nutritional condition was not related to blood-lead concentrations. Therefore, it appears that lead exposure in this population was below the threshold required to produce toxicological alterations in the hematological values and index of nutritional condition that we measured.

  16. Photometric observations of nine Transneptunian objects and Centaurs

    NASA Astrophysics Data System (ADS)

    Hromakina, T.; Perna, D.; Belskaya, I.; Dotto, E.; Rossi, A.; Bisi, F.

    2018-02-01

    We present the results of photometric observations of six Transneptunian objects and three Centaurs, estimates of their rotational periods, and the corresponding amplitudes. For six of them we also present lower limits on density values. All observations were made using the 3.6-m TNG telescope (La Palma, Spain). For four objects - (148975) 2001 XA255, (281371) 2008 FC76, (315898) 2008 QD4, and 2008 CT190 - the estimation of short-term variability was made for the first time. We confirm rotation period values for two objects, (55636) 2002 TX300 and (202421) 2005 UQ513, and improve the precision of previously reported rotational period values for three others - (120178) 2003 OP32, (145452) 2005 RN43, (444030) 2004 NT33 - by using both our and literature data. We also argue that small distant bodies, similar to asteroids in the Main belt, tend to have double-peaked rotational periods caused by elongated shapes rather than by surface albedo variations.

  17. Estimation of Vegetation Aerodynamic Roughness of Natural Regions Using Frontal Area Density Determined from Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Crago, Richard

    1994-01-01

    Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.

  18. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions.

    PubMed

    Krajbich, Ian; Rangel, Antonio

    2011-08-16

    How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
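
    A schematic of the kind of attention-weighted accumulator race this model family uses is sketched below. It is not the authors' exact specification or fitted parameters: it is a toy three-option race in which the attended item's value enters the drift undiscounted, unattended values are discounted by theta, and a choice is made when one accumulator leads the best of the rest by a fixed margin.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_trinary_race(values, d=0.0002, theta=0.3, sigma=0.02,
                              margin=1.0, fix_len=300, max_steps=20_000):
        """Toy attention-weighted race among three options.

        Returns (choice index, steps taken). All parameters are illustrative.
        """
        vals = np.asarray(values, dtype=float)
        E = np.zeros(3)
        attended = rng.integers(3)
        for t in range(max_steps):
            if t % fix_len == 0:                   # new random fixation
                attended = rng.integers(3)
            w = np.full(3, theta)
            w[attended] = 1.0                      # attended item is not discounted
            E += d * w * vals + rng.normal(0.0, sigma, 3)
            lead = int(np.argmax(E))
            if E[lead] - np.delete(E, lead).max() >= margin:
                return lead, t                     # relative-value stopping rule
        return int(np.argmax(E)), max_steps

    choice, rt = simulate_trinary_race([4.0, 3.0, 1.0])
    print(f"chose option {choice} after {rt} steps")
    ```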

  19. Estimating economic gains for landowners due to time-dependent changes in biotechnology

    Treesearch

    John E. Wagner; Thomas P. Holmes

    1998-01-01

    This paper presents a model for examining the economic value of biotechnological research given time-dependent changes in biotechnology. Previous papers examined this issue assuming a time-neutral change in biotechnology. However, when analyzing the genetic improvements of increasing a tree's resistance to a pathogen, this assumption is untenable. The authors...

  20. Public Participation in Insect Research through the Use of Pheromones

    ERIC Educational Resources Information Center

    Harvey, Deborah; Hedenström, Erik; Finch, Paul

    2017-01-01

    In a project to determine the UK distribution of a conservation-status beetle "Elater ferrugineus", 300 volunteers were recruited and supplied with traps containing a female pheromone that is an effective attractant for adult males. The occurrence and distribution of the insect were extended from previously estimated values and shown to…

  1. Estimating Critical Values for Strength of Alignment among Curriculum, Assessments, and Instruction

    ERIC Educational Resources Information Center

    Fulmer, Gavin W.

    2010-01-01

    School accountability decisions based on standardized tests hinge on the degree of alignment of the test with a state's standards. Yet no established criteria were available for judging strength of alignment. Previous studies of alignment among tests, standards, and teachers' instruction have yielded mixed results that are difficult to interpret…

  2. Estimating Critical Values for Strength of Alignment among Curriculum, Assessments, and Instruction

    ERIC Educational Resources Information Center

    Fulmer, Gavin W.

    2011-01-01

    School accountability decisions based on standardized tests hinge on the degree of alignment of the test with the state's standards documents. Yet, there exist no established criteria for judging strength of alignment. Previous measures of alignment among tests, standards, and teachers' instruction have yielded mixed results that are difficult to…

  3. Wireless Intrusion Detection

    DTIC Science & Technology

    2007-03-01

    …difference estimates of xc temporal derivatives, or by using a polynomial fit to the previous values of xc…

  4. The economic value of remote sensing of earth resources from space: An ERTS overview and the value of continuity of service. Volume 10: Industry

    NASA Technical Reports Server (NTRS)

    Lietzke, K. R.

    1974-01-01

    The economic benefits of an ERS system in the area of industrial resources are discussed. Contributions of ERTS imagery to the improvement of shipping routes, detection of previously unknown and potentially active faults in construction areas, and monitoring of industrial pollution are described. Due to the lack of economic research concerning ERS applications in this resource area, the benefit estimates reported are regarded as tentative and preliminary.

  5. Hydrology and numerical simulation of groundwater movement and heat transport in Snake Valley and surrounding areas, Juab, Millard, and Beaver Counties, Utah, and White Pine and Lincoln Counties, Nevada

    USGS Publications Warehouse

    Masbruch, Melissa D.; Gardner, Philip M.; Brooks, Lynette E.

    2014-01-01

    Snake Valley and surrounding areas, along the Utah-Nevada state border, are part of the Great Basin carbonate and alluvial aquifer system. The groundwater system in the study area consists of water in unconsolidated deposits in basins and water in consolidated rock underlying the basins and in the adjacent mountain blocks. Most recharge occurs from precipitation on the mountain blocks and most discharge occurs from the lower altitude basin-fill deposits mainly as evapotranspiration, springflow, and well withdrawals. The Snake Valley area regional groundwater system was simulated using a three-dimensional model incorporating both groundwater flow and heat transport. The model was constructed with MODFLOW-2000, a version of the U.S. Geological Survey's groundwater flow model, and MT3DMS, a transport model that simulates advection, dispersion, and chemical reactions of solutes or heat in groundwater systems. Observations of groundwater discharge by evapotranspiration, springflow, mountain stream base flow, and well withdrawals; groundwater-level altitudes; and groundwater temperatures were used to calibrate the model. Parameter values estimated by regression analyses were reasonable and within the range of expected values. This study represents one of the first regional modeling efforts to include calibration to groundwater temperature data. The inclusion of temperature observations reduced parameter uncertainty, in some cases quite significantly, over using just water-level altitude and discharge observations. Of the 39 parameters used to simulate horizontal hydraulic conductivity, uncertainty on 11 of these parameters was reduced to one order of magnitude or less. Other significant reductions in parameter uncertainty occurred in parameters representing the vertical anisotropy ratio, drain and river conductance, recharge rates, and well withdrawal rates. The model provides a good representation of the groundwater system. Simulated water-level altitudes range over almost 2,000 meters (m); 98 percent of the simulated values of water-level altitudes in wells are within 30 m of observed water-level altitudes, and 58 percent of them are within 12 m. Nineteen of 20 simulated discharges are within 30 percent of observed discharge. Eighty-one percent of the simulated values of groundwater temperatures in wells are within 2 degrees Celsius (°C) of the observed values, and 55 percent of them are within 0.75 °C. The numerical model represents a more robust quantification of groundwater budget components than previous studies because the model integrates all components of the groundwater budget. The model also incorporates new data including (1) a detailed hydrogeologic framework, and (2) more observations, including several new water-level altitudes throughout the study area, several new measurements of spring discharge within Snake Valley which had not previously been monitored, and groundwater temperature data. Uncertainty in the estimates of subsurface flow is less than in previous studies because the model balanced recharge and discharge across the entire simulated area, not just in each hydrographic area, and because of the large dataset of observations (water-level altitudes, discharge, and temperatures) used to calibrate the model and the resulting transmissivity distribution. Groundwater recharge from precipitation and unconsumed irrigation in Snake Valley is 160,000 acre-feet per year (acre-ft/yr), which is within the range of previous estimates.
Subsurface inflow from southern Spring Valley to southern Snake Valley is 13,000 acre-ft/yr and is within the range of previous estimates; subsurface inflow from Spring Valley to Snake Valley north of the Snake Range, however, is only 2,200 acre-ft/yr, which is much less than has been previously estimated. Groundwater discharge from groundwater evapotranspiration and springs is 100,000 acre-ft/yr, and discharge to mountain streams is 3,300 acre-ft/yr; these are within the range of previous estimates. Current well withdrawals are 28,000 acre-ft/yr. Subsurface outflow from Snake Valley moves into Pine Valley (2,000 acre-ft/yr), Wah Wah Valley (23 acre-ft/yr), Tule Valley (33,000 acre-ft/yr), Fish Springs Flat (790 acre-ft/yr), and outside of the study area towards Great Salt Lake Desert (8,400 acre-ft/yr); these outflows, totaling about 44,000 acre-ft/yr, are within the range of previous estimates.The subsurface flow amounts indicate the degree of connectivity between hydrographic areas within the study area. The simulated transmissivity and locations of natural discharge, however, provide a better estimate of the effect of groundwater withdrawals on groundwater resources than does the amount and direction of subsurface flow between hydrographic areas. The distribution of simulated transmissivity throughout the study area includes many areas of high transmissivity within and between hydrographic areas. Increased well withdrawals within these high transmissivity areas will likely affect a large part of the study area, resulting in declining groundwater levels, as well as leading to a decrease in natural discharge to springs and evapotranspiration.

  6. Estimation of the IC to CG Ratio Using JEM-GLIMS and Ground-based Lightning Network Data

    NASA Astrophysics Data System (ADS)

    Bandholnopparat, K.; Sato, M.; Takahashi, Y.; Adachi, T.; Ushio, T.

    2017-12-01

    The ratio between intracloud (IC) discharges and cloud-to-ground (CG) discharges, denoted by Z, is an important parameter for studies of the climatological differences of thunderstorm structures and for the quantitative evaluation of lightning contributions to the global electric circuit. However, the latitudinal, regional, and seasonal dependences of the Z-value are not fully clarified. The purposes of this study are (i) to develop new methods to identify IC and CG discharges using optical data obtained by the Global Lightning and Sprite Measurements on the Japanese Experiment Module (JEM-GLIMS) from space together with ground-based lightning data, and (ii) to estimate the Z-value and its latitudinal, regional, and seasonal dependences. As a first step, we compared the JEM-GLIMS data to the ground-based lightning data obtained by JLDN, NLDN, WWLLN, and GEON in order to distinguish the lightning discharge type detected by JEM-GLIMS. As a next step, we calculated intensity ratios between the blue and red PH channels, that is, PH2(337 nm)/PH3(762 nm), PH5(316 nm)/PH3, PH6(392 nm)/PH3, PH2/PH4(599-900 nm), PH5/PH4, and PH6/PH4 for each lightning event. From these analyses, it is found that 447 and 454 of 8,355 lightning events were identified as CG and IC discharges, respectively. It is also found that the PH intensity ratio of IC discharges is clearly higher than that of CG discharges. In addition, the difference in the PH2/PH3, PH2/PH4, and PH6/PH4 ratios between IC and CG cases is relatively large, which means these three ratios are a useful proxy for classifying the discharge types of the other 7,454 lightning events. Finally, the estimated Z-value varies from 0.18 to 0.84 from the equator to higher latitudes. The decrease of the Z-value from the equator to higher latitudes is confirmed in both the northern and southern hemispheres. Although this latitudinal dependence of the Z-value is similar to previous studies, e.g., Boccippio et al. (2001), the estimated absolute Z-value is smaller than in previous studies. The reason for the smaller absolute Z-value may be that JEM-GLIMS used a high threshold for event triggering and missed many lightning events having lower optical energies. At the presentation, we will show the regional and seasonal dependences of the Z-value in detail.

  7. A new function for estimating local rainfall thresholds for landslide triggering

    NASA Astrophysics Data System (ADS)

    Cepeda, J.; Nadim, F.; Høeg, K.; Elverhøi, A.

    2009-04-01

    The widely used power law for establishing rainfall thresholds for triggering of landslides was first proposed by N. Caine in 1980. The most updated global thresholds presented by F. Guzzetti and co-workers in 2008 were derived using Caine's power law and a rigorous and comprehensive collection of global data. Caine's function is defined as I = α×D^β, where I and D are the mean intensity and total duration of rainfall, and α and β are parameters estimated for a lower boundary curve to most or all the positive observations (i.e., landslide-triggering rainfall events). This function does not account for the effect of antecedent precipitation as a conditioning factor for slope instability, an approach that may be adequate for global or regional thresholds that include landslides in surface geologies with a wide range of subsurface drainage conditions and pore-pressure responses to sustained rainfall. However, at a local scale and in geological settings dominated by a narrow range of drainage conditions and pore-pressure response behaviours, the inclusion of antecedent precipitation in the definition of thresholds becomes necessary in order to ensure their optimum performance, especially when they are used as part of early warning systems (i.e., false alarms and missed events must be kept to a minimum). Some authors have incorporated the effect of antecedent rainfall in a discrete manner by first comparing the accumulated precipitation during a specified number of days against a reference value and then using a Caine's function threshold only when that reference value is exceeded. The approach of other authors has been to calculate threshold values as linear combinations of several triggering and antecedent parameters. The present study aims to propose a new threshold function based on a generalisation of Caine's power law. The proposed function has the form I = (α1×An^α2)×D^β, where I and D are as defined previously. The expression in parentheses is equivalent to Caine's α parameter. α1, α2 and β are parameters estimated for the threshold. An is the n-day cumulative rainfall. The suggested procedure to estimate the threshold is as follows: (1) Given N storms, assign one of the following flags to each storm: nL (non-triggering storms), yL (triggering storms), uL (uncertain-triggering storms). Successful predictions correspond to nL and yL storms occurring below and above the threshold, respectively. Storms flagged as uL are assigned either an nL or yL flag using a randomization procedure. (2) Establish a set of values of ni (e.g., 1, 4, 7, 10, 15 days, etc.) to test for accumulated precipitation. (3) For each storm and each ni value, obtain the antecedent accumulated precipitation in ni days, Ani. (4) Generate a 3D grid of values of α1, α2 and β. (5) For a certain value of ni, generate confusion matrices for the N storms at each grid point and estimate an evaluation metrics parameter EMP (e.g., accuracy, specificity, etc.). (6) Repeat the previous step for all the ni values. (7) From the 3D grid corresponding to each ni value, search for the optimum grid point EMPopti (global minimum or maximum of the parameter). (8) Search for the optimum value of ni in the space ni vs EMPopti. (9) The threshold is defined by the value of ni obtained in the previous step and the corresponding values of α1, α2 and β.
The procedure is illustrated using rainfall data and landslide observations from the San Salvador volcano, where a rainfall-triggered debris flow destroyed a neighbourhood in the capital city of El Salvador on 19 September 1982, killing at least 300 people.
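
    A compact sketch of the calibration loop (steps 4 through 7 for a single value of n, using accuracy as the EMP) might look as follows; the storm tuples and grid ranges are hypothetical.

    ```python
    import itertools
    import numpy as np

    def above_threshold(I, D, An, a1, a2, beta):
        """True if a storm plots on or above I = (a1 * An**a2) * D**beta."""
        return I >= (a1 * An ** a2) * D ** beta

    def calibrate(storms, a1_grid, a2_grid, beta_grid):
        """Exhaustive grid search maximizing accuracy over (alpha1, alpha2, beta).

        storms: iterable of (I, D, An, triggered) with triggered in {0, 1}.
        """
        best, best_acc = None, -1.0
        for a1, a2, b in itertools.product(a1_grid, a2_grid, beta_grid):
            hits = [above_threshold(I, D, An, a1, a2, b) == bool(y)
                    for I, D, An, y in storms]
            acc = float(np.mean(hits))
            if acc > best_acc:
                best, best_acc = (a1, a2, b), acc
        return best, best_acc

    # hypothetical storms: (intensity mm/h, duration h, 7-day antecedent mm, triggered?)
    storms = [(20, 6, 80, 1), (5, 12, 30, 0), (15, 3, 120, 1), (8, 10, 10, 0)]
    params, acc = calibrate(storms,
                            np.logspace(0, 2, 20),       # alpha1
                            np.linspace(-1.0, 0.0, 10),  # alpha2
                            np.linspace(-1.0, -0.1, 10)) # beta
    print("best (alpha1, alpha2, beta):", params, "accuracy:", acc)
    ```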

  8. Inferring rate and state friction parameters from a rupture model of the 1995 Hyogo-ken Nanbu (Kobe) Japan earthquake

    USGS Publications Warehouse

    Guatteri, Mariagiovanna; Spudich, P.; Beroza, G.C.

    2001-01-01

    We consider the applicability of laboratory-derived rate- and state-variable friction laws to the dynamic rupture of the 1995 Kobe earthquake. We analyze the shear stress and slip evolution of Ide and Takeo's [1997] dislocation model, fitting the inferred stress change time histories by calculating the dynamic load and the instantaneous friction at a series of points within the rupture area. For points exhibiting a fast-weakening behavior, the Dieterich-Ruina friction law, with values of dc = 0.01-0.05 m for the critical slip, fits the stress change time series well. This range of dc is 10-20 times smaller than the slip distance over which the stress is released, Dc, which previous studies have equated with the slip-weakening distance. The limited resolution and low-pass character of the strong motion inversion degrades the resolution of the frictional parameters and suggests that the actual dc is less than this value. Stress time series at points characterized by a slow-weakening behavior are well fitted by the Dieterich-Ruina friction law with values of dc ≈ 0.01-0.05 m. The apparent fracture energy Gc can be estimated from waveform inversions more stably than the other friction parameters. We obtain Gc = 1.5×10^6 J m^-2 for the 1995 Kobe earthquake, in agreement with estimates for previous earthquakes. From this estimate and a plausible upper bound for the local rock strength we infer a lower bound for Dc of about 0.008 m. Copyright 2001 by the American Geophysical Union.
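
    For reference, the rate- and state-variable friction relations referred to above are conventionally written (in the Dieterich-Ruina ageing form) as:

    ```latex
    % Rate- and state-variable friction, Dieterich--Ruina ageing form:
    % \mu_0 and V_0 are reference values, d_c is the critical slip distance.
    \mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{d_c},
    \qquad
    \frac{\mathrm{d}\theta}{\mathrm{d}t} = 1 - \frac{V\theta}{d_c}
    ```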

  9. Long Chain Saturated and Unsaturated Carboxylic Acids: Filling a Large Gap of Knowledge in Their Enthalpies of Formation.

    PubMed

    Rogers, Donald W; Zavitsas, Andreas A

    2017-01-06

    Despite their abundance in nature and their importance in biology, medicine, nutrition, and in industry, gas phase enthalpies of formation of many long chain saturated and unsaturated fatty acids and of dicarboxylic acids are either unavailable or have been estimated with large uncertainties. Available experimental values for stearic acid show a spread of 68 kJ mol-1. This work fills the knowledge gap by obtaining reliable values by quantum theoretical calculations using G4 model chemistry. Compounds with up to 20 carbon atoms are treated. The theoretical results are in excellent agreement with well established experimental values when such values exist, and they provide a large number of previously unavailable values.

  10. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    PubMed

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
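
    The extrapolation step lends itself to a short illustration. The sketch below is not the authors' exact procedure (which works through theoretical clone library sizes); it simply fits a saturating curve to richness estimates obtained at nested subsample sizes and reports the asymptote as the size-unbiased value. All numbers are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def saturating(n, s_max, k):
        """Michaelis-Menten-type saturation of estimated richness with library size."""
        return s_max * n / (k + n)

    # hypothetical richness estimates from nested subsets of a 13,001-clone library
    library_sizes = np.array([1_000, 2_000, 4_000, 8_000, 13_001], dtype=float)
    richness_est = np.array([4_200, 6_900, 10_100, 13_000, 15_000], dtype=float)

    (s_max, k), _ = curve_fit(saturating, library_sizes, richness_est,
                              p0=(20_000, 5_000))
    print(f"extrapolated ('size-unbiased') richness: {s_max:.0f}")
    ```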

  11. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation rests on stationarity assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled using copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is used to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probabilities of exceedance calculated from copula functions and from marginal distributions. This study provides, for the first time, a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could benefit cost-benefit based non-stationary bivariate design flood estimation across the world.
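
    The risk calculation can be made concrete with a small sketch. The copula family is not stated in the abstract, so the Gumbel copula below is an assumption; it computes the joint 'OR' exceedance probability of two dependent flood variables, with the dependence parameter allowed to drift over time to mimic non-stationarity.

    ```python
    import numpy as np

    def gumbel_copula(u, v, theta):
        """Gumbel copula C(u, v); theta >= 1 controls upper-tail dependence."""
        return np.exp(-(((-np.log(u)) ** theta
                         + (-np.log(v)) ** theta) ** (1.0 / theta)))

    def or_risk(u, v, theta):
        """P(U > u or V > v) = 1 - C(u, v): joint 'OR' exceedance risk."""
        return 1.0 - gumbel_copula(u, v, theta)

    u = v = 0.99                  # non-exceedance probabilities of the design event
    for year, theta in [(0, 1.5), (25, 2.0), (50, 2.5)]:   # drifting dependence
        print(f"t = {year:2d} yr, theta = {theta}: "
              f"OR risk = {or_risk(u, v, theta):.4f}")
    ```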

  12. Generalized weighted likelihood density estimators with application to finite mixture of exponential family distributions

    PubMed Central

    Zhan, Tingting; Chervoneva, Inna; Iglewicz, Boris

    2010-01-01

    The family of weighted likelihood estimators largely overlaps with minimum divergence estimators. They are robust to data contamination compared to the MLE. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function, and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of a three-component normal mixture model with large overlaps and heavy contamination. A real data example is also provided. PMID:20835375

  13. Inferring thermodynamic stability relationship of polymorphs from melting data.

    PubMed

    Yu, L

    1995-08-01

    This study investigates the possibility of inferring the thermodynamic stability relationship of polymorphs from their melting data. Thermodynamic formulas are derived for calculating the Gibbs free energy difference (delta G) between two polymorphs and its temperature slope mainly from the temperatures and heats of melting. This information is then used to estimate delta G, and thus relative stability, at other temperatures by extrapolation. Both linear and nonlinear extrapolations are considered. Extrapolating delta G to zero gives an estimation of the transition (or virtual transition) temperature, from which the presence of monotropy or enantiotropy is inferred. This procedure is analogous to the use of solubility data measured near the ambient temperature to estimate a transition point at higher temperature. For several systems examined, the two methods are in good agreement. The qualitative rule introduced this way for inferring the presence of monotropy or enantiotropy is approximately the same as the Heat of Fusion Rule introduced previously on a statistical mechanical basis. This method is applied to 96 pairs of polymorphs from the literature. In most cases, the result agrees with the previous determination. The deviation of the calculated transition temperatures from their previous values (n = 18) is 2% on average and 7% at maximum.

  14. Uncertainty estimation of water levels for the Mitch flood event in Tegucigalpa

    NASA Astrophysics Data System (ADS)

    Fuentes Andino, D. C.; Halldin, S.; Lundin, L.; Xu, C.

    2012-12-01

    Hurricane Mitch in 1998 left a devastating flood in Tegucigalpa, the capital city of Honduras. Simulation of elevated water surfaces provides a good way to understand the hydraulic mechanism of large flood events. In this study the one-dimensional HEC-RAS model for steady flow conditions, together with the two-dimensional Lisflood-fp model, was used to estimate the water level for the Mitch event in the river reaches at Tegucigalpa. Parameter uncertainty of the models was investigated using the generalized likelihood uncertainty estimation (GLUE) framework. Because of the extremely large magnitude of the Mitch flood, no hydrometric measurements were taken during the event. However, post-event indirect measurements of discharge and observed water levels were obtained in previous work by JICA and USGS. To overcome the lack of direct hydrometric measurements, uncertainty in the discharge was estimated. Both models constrained the channel roughness value well, though the floodplain value showed more dispersion. Analysis of the interactions in the data showed a tradeoff between discharge at the outlet and floodplain roughness for the 1D model. The estimated discharge range at the outlet of the study area encompassed the value indirectly estimated by JICA; however, the indirect method used by the USGS overestimated the value. If behavioral parameter sets can reproduce water surface levels well for past events such as Mitch, more reliable predictions for future events can be expected. The results acquired in this research provide guidelines for modeling past floods when no direct data were measured during the event, and for predicting future large events taking uncertainty into account. The obtained range of the uncertain flood extent will be useful for decision makers.
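
    The GLUE idea itself is compact enough to sketch. Below, a toy stand-in replaces the hydraulic models, uncertain discharge is sampled alongside the two roughness parameters, and parameter sets whose simulated levels fall within a tolerance of the observed high-water mark are retained as behavioral. All numbers and the stand-in model are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def toy_model(channel_n, floodplain_n, discharge):
        """Stand-in for a HEC-RAS / Lisflood-fp run returning a water level (m).

        A real application would execute the hydraulic model here.
        """
        return 10.0 + 2.0 * channel_n * discharge ** 0.4 + 5.0 * floodplain_n

    observed_level = 14.2               # post-event high-water mark (hypothetical)

    behavioral = []
    for _ in range(5_000):
        n_ch = rng.uniform(0.02, 0.10)  # channel Manning's n
        n_fp = rng.uniform(0.03, 0.20)  # floodplain Manning's n
        q = rng.uniform(800.0, 1_800.0) # uncertain peak discharge, m^3/s
        err = abs(toy_model(n_ch, n_fp, q) - observed_level)
        if err < 0.25:                  # behavioral acceptance threshold
            behavioral.append((n_ch, n_fp, q))

    if behavioral:
        qs = np.array([b[2] for b in behavioral])
        print(f"{len(behavioral)} behavioral sets; discharge 5-95%: "
              f"{np.percentile(qs, 5):.0f}-{np.percentile(qs, 95):.0f} m^3/s")
    ```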

  15. A retrospective evaluation method for in vitro mammalian genotoxicity tests using cytotoxicity index transformation formulae.

    PubMed

    Fujita, Yurika; Kasamatsu, Toshio; Ikeda, Naohiro; Nishiyama, Naohiro; Honda, Hiroshi

    2016-01-15

    Although in vitro chromosomal aberration tests and micronucleus tests have been widely used for genotoxicity evaluation, false-positive results have been reported under strong cytotoxic conditions. To reduce false-positive results, the new Organization for Economic Co-operation and Development (OECD) test guideline (TG) recommends the use of a new cytotoxicity index, relative increase in cell count or relative population doubling (RICC/RPD), instead of the traditionally used index, relative cell count (RCC). Although the use of the RICC/RPD may result in different outcomes and require re-evaluation of tested substances, it is impractical to re-evaluate all existing data. Therefore, we established a method to estimate test results from existing RCC data. First, we developed formulae to estimate RICC/RPD from RCC without cell counts by considering cell doubling time and experiment time. Next, the accuracy of the cytotoxicity index transformation formulae was verified by comparing estimated RICC/RPD and measured RICC/RPD for 3 major chemicals associated with false-positive genotoxicity test results: ethyl acrylate, eugenol and p-nitrophenol. Moreover, 25 compounds with false-positive in vitro chromosomal aberration (CA) test results were re-evaluated to establish a retrospective evaluation method based on derived estimated RICC/RPD values. The estimated RICC/RPD values were in good agreement with the measured RICC/RPD values for every concentration and chemical, and the estimated RICC suggested the possibility that 12 chemicals (48%) with previously judged false-positive results in fact had negative results. Our method enables transformation of RCC data into RICC/RPD values with a high degree of accuracy and will facilitate comprehensive retrospective evaluation of test results. Copyright © 2015 Elsevier B.V. All rights reserved.
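
    One way the transformation can be reconstructed from the definitions (assuming exponential growth of the control culture over the exposure time) is sketched below; the published formulae may differ in detail, so treat this as illustrative.

    ```python
    import math

    def estimate_ricc_rpd(rcc, doubling_time_h, exposure_h):
        """Estimate RICC and RPD (%) from RCC, assuming exponential control growth.

        rcc: treated/control final cell count, 0 < rcc <= 1.
        G is the fold-growth of the control culture over the experiment.
        """
        G = 2.0 ** (exposure_h / doubling_time_h)
        ricc = 100.0 * (rcc * G - 1.0) / (G - 1.0)      # relative increase in cell count
        rpd = 100.0 * math.log2(rcc * G) / math.log2(G) # relative population doubling
        return ricc, rpd

    # e.g. an RCC of 60% after 24 h exposure, with a 14 h control doubling time
    ricc, rpd = estimate_ricc_rpd(0.60, 14.0, 24.0)
    print(f"RICC = {ricc:.1f}%, RPD = {rpd:.1f}%")   # both stricter than RCC = 60%
    ```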

  16. Preliminary evaluation of magnitude and frequency of floods in selected small drainage basins in Ohio

    USGS Publications Warehouse

    Kolva, J.R.

    1985-01-01

    A previous study of flood magnitudes and frequencies in Ohio concluded that existing regionalized flood equations may not be adequate for estimating peak flows in small basins that are heavily forested, surface mined, or located in northwestern Ohio. In order to provide a large data base for improving estimation of flood peaks in these basins, 30 crest-stage gages were installed in 1977, in cooperation with the Ohio Department of Transportation, to provide a 10-year record of flood data. The study area consists of two distinct parts: northwestern Ohio, which contains 8 sites, and southern and eastern Ohio, which contains 22 sites in small forested or surface-mined drainage basins. Basin characteristics were determined for all 30 sites for 1978 conditions. Annual peaks were recorded or estimated for all 30 sites for water years 1978-82; an additional year of peak discharges was available at four sites. The 2-year (Q2) and 5-year (Q5) flood peaks were determined from these annual peaks. Q2 and Q5 values also were calculated using published regionalized regression equations for Ohio. The ratios of the observed to predicted 2-year (R2) and 5-year (R5) values were then calculated. This study found that observed flood peaks are lower than estimated peaks by a significant amount in surface-mined basins. The average R2 values are 0.51 for basins with more than 40 percent surface-mined land, and 0.68 for sites with any surface-mined land. The average R5 value is 0.55 for sites with more than 40 percent surface-mined land, and 0.61 for sites with any surface-mined land. Estimated flood peaks from forested basins agree with the observed values fairly well. R2 values average 0.87 for sites with 20 percent or more forested land but no surface-mined land, and R5 values average 0.96. If all sites with more than 20 percent forested land and some surface-mined land are considered, the R2 values average 0.86, and the R5 values average 0.82.

  17. Degree of Approximation by a General Cλ -Summability Method

    NASA Astrophysics Data System (ADS)

    Sonker, S.; Munjal, A.

    2018-03-01

    In the present study, two theorems explaining the degree of approximation of signals belonging to the class Lip(α, p, w) by a more general Cλ-method (summability method) have been formulated. Improved estimates have been obtained in terms of λ(n), where (λ(n))^(-α) ≤ n^(-α) for 0 < α ≤ 1, as compared to previous studies presented in terms of n. These estimates for infinite matrices are widely applicable in solid-state physics, which further motivates an investigation of perturbations of matrix-valued functions.

  18. High-pressure phase transitions - Examples of classical predictability

    NASA Astrophysics Data System (ADS)

    Celebonovic, Vladan

    1992-09-01

    The applicability of the Savic and Kasanin (1962-1967) classical theory of dense matter to laboratory experiments requiring estimates of high-pressure phase transitions was examined by determining phase transition pressures for a set of 19 chemical substances (including elements, hydrocarbons, metal oxides, and salts) for which experimental data were available. A comparison between experimental transition points and those predicted by the Savic-Kasanin theory showed that the theory can be used for estimating values of transition pressures. The results also support conclusions obtained in previous astronomical applications of the Savic-Kasanin theory.

  19. Wildfire risk and housing prices: a case study from Colorado Springs.

    Treesearch

    G.H. Donovan; P.A. Champ; D.T. Butry

    2007-01-01

    Unlike other natural hazards such as floods, hurricanes, and earthquakes, wildfire risk has not previously been examined using a hedonic property value model. In this article, we estimate a hedonic model based on parcel-level wildfire risk ratings from Colorado Springs. We found that providing homeowners with specific information about the wildfire risk rating of their...

  20. Multivariate Epi-splines and Evolving Function Identification Problems

    DTIC Science & Technology

    2015-04-15

    …such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the… previous study [30] dealt with compact intervals of IR. Splines are intimately tied to optimization problems through their variational theory pioneered… approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction…

  1. Nuclear DNA C‐values in 30 Species Double the Familial Representation in Pteridophytes

    PubMed Central

    OBERMAYER, RENATE; LEITCH, ILIA J.; HANSON, LYNDA; BENNETT, MICHAEL D.

    2002-01-01

    Nuclear DNA C‐values and genome size are important biodiversity characters with fundamental biological significance. Yet C‐value data for pteridophytes, a diverse group of vascular plants with approx. 9000 extant species, remain scarce. A recent survey by Bennett and Leitch (2001, Annals of Botany 87: 335–345) found that C‐values were reported for only 48 pteridophyte species. To improve phylogenetic representation in this group and to check previously reported estimates, C‐values for 30 taxa in 17 families were measured using flow cytometry for all but one species. This technique proved generally applicable, but the ease with which C‐value data were generated varied greatly between materials. Comparing the new data with those previously published revealed several large discrepancies. After discounting doubtful data, C‐values for 62 pteridophyte species remained acceptable for analysis. The present work has increased the number of such species’ C‐values by 93 %, and more than doubled the number of families represented (from 10 to 21). Analysis shows that pteridophyte C‐values vary approx. 450‐fold, from 0·16 pg in Selaginella kraussiana to 72·7 pg in Psilotum nudum var. gasa. Superimposing C‐value data onto a robust phylogeny of pteridophytes suggests some possible trends in C‐value evolution and highlights areas for future work. PMID:12197518

  2. U.S. Geological Survey 2002 petroleum resource assessment of the National Petroleum Reserve in Alaska (NPRA)

    USGS Publications Warehouse

    Bird, K.J.; Houseknecht, D.W.

    2002-01-01

    A new USGS assessment concludes that NPRA holds significantly greater petroleum resources than previously estimated. Technically recoverable, undiscovered oil beneath the Federal part of NPRA likely ranges between 5.9 and 13.2 billion barrels, with a mean (expected) value of 9.3 billion barrels. An estimated 1.3 to 5.6 billion barrels of those technically recoverable oil resources is economically recoverable at market prices of $22 to $30 per barrel. Technically recoverable, undiscovered nonassociated natural gas for the same area likely ranges between 39.1 and 83.2 trillion cubic feet, with a mean (expected) value of 59.7 trillion cubic feet. The economic viability of this gas will depend on the availability of a natural-gas pipeline for transport to market.

  3. Image-based non-contact monitoring of skin texture changed by piloerection for emotion estimation

    NASA Astrophysics Data System (ADS)

    Uchida, Mihiro; Akaho, Rina; Ogawa, Keiko; Tsumura, Norimichi

    2018-02-01

    In this paper, we identify effective feature values of skin textures captured by a non-contact camera in order to monitor piloerection on the skin for emotion estimation. Emotion estimation is increasingly required for service robots to interact with humans more naturally. Much research on emotion estimation exists, but additional methods are needed because any single measure may not give enough information for reliable estimation. In a previous study, it was necessary to fix a device on the subject's arm to detect piloerection, but contact monitoring can itself be stressful and can distract the subject from concentrating on the stimuli and developing strong emotion. We therefore focused on piloerection as a signal obtainable with non-contact methods. Piloerection is observed as goose bumps on the skin when the subject is emotionally moved, scared, and so on. This phenomenon is caused by contraction of the arrector pili muscles upon activation of the sympathetic nervous system, and it changes skin texture. Skin texture is important in the cosmetic industry for evaluating skin condition. Therefore, we expected that evaluating the condition of skin texture would be effective for emotion estimation. The evaluations were performed by extracting candidate feature values from skin textures captured with a high-resolution camera; effective feature values should correlate strongly with the degree of piloerection. In this paper, we found that the standard deviation of short-line inclination angles in the texture correlates well with the degree of piloerection.

  4. Cost of services provided by the National Breast and Cervical Cancer Early Detection Program.

    PubMed

    Ekwueme, Donatus U; Subramanian, Sujha; Trogdon, Justin G; Miller, Jacqueline W; Royalty, Janet E; Li, Chunyu; Guy, Gery P; Crouse, Wesley; Thompson, Hope; Gardner, James G

    2014-08-15

    The National Breast and Cervical Cancer Early Detection Program (NBCCEDP) is the largest cancer screening program for low-income women in the United States. This study updates previous estimates of the costs of delivering preventive cancer screening services in the NBCCEDP. We developed a standardized web-based cost-assessment tool to collect annual activity-based cost data on screening for breast and cervical cancer in the NBCCEDP. Data were collected from 63 of the 66 programs that received funding from the Centers for Disease Control and Prevention during the 2006/2007 fiscal year. We used these data to calculate costs of delivering preventive public health services in the program. We estimated the total cost of all NBCCEDP services to be $296 (standard deviation [SD], $123) per woman served (including the estimated value of in-kind donations, which constituted approximately 15% of this total estimated cost). The estimated cost of screening and diagnostic services was $145 (SD, $38) per woman served, which represented 57.7% of the total cost excluding the value of in-kind donations. Including the value of in-kind donations, the weighted mean cost of screening a woman for breast cancer was $110 with an office visit and $88 without, the weighted mean cost of a diagnostic procedure was $401, and the weighted mean cost per breast cancer detected was $35,480. For cervical cancer, the corresponding cost estimates were $61, $21, $415, and $18,995, respectively. These NBCCEDP cost estimates may help policy makers plan and budget for various potential changes to the program. © 2014 American Cancer Society.

  5. Real-ear-to-coupler difference predictions as a function of age for two coupling procedures.

    PubMed

    Bagatto, Marlene P; Scollie, Susan D; Seewald, Richard C; Moodie, K Shane; Hoover, Brenda M

    2002-09-01

    The predicted real-ear-to-coupler difference (RECD) values currently used in pediatric hearing instrument prescription methods are based on 12-month age range categories and were derived from measures using standard acoustic immittance probe tips. Consequently, the purpose of this study was to develop normative RECD predicted values for foam/acoustic immittance tips and custom earmolds across the age continuum. To this end, RECD data were collected on 392 infants and children (141 with acoustic immittance tips, 251 with earmolds) to develop normative regression equations for use in deriving continuous age predictions of RECDs for foam/acoustic immittance tips and earmolds. Owing to the substantial between-subject variability observed in the data, the predictive equations of RECDs by age (in months) resulted in only gross estimates of RECD values (i.e., within +/- 4.4 dB for 95% of acoustic immittance tip measures; within +/- 5.4 dB in 95% of measures with custom earmolds) across frequency. Thus, it is concluded that the estimates derived from this study should not be used to replace the more precise individual RECD measurements. Relative to previously available normative RECD values for infants and young children, however, the estimates derived through this study provide somewhat more accurate predicted values for use under those circumstances for which individual RECD measurements cannot be made.

  6. Implications of the new Centers for Disease Control and Prevention blood lead reference value.

    PubMed

    Burns, Mackenzie S; Gerstenberger, Shawn L

    2014-06-01

    The Centers for Disease Control and Prevention recently established a new reference value (≥ 5 μg/dL) as the standard for identifying children with elevated blood lead levels (EBLs). At present, 535,000 US children aged 1 to 5 years (2.6%) are estimated to have EBLs according to the new standard, versus 0.8% according to the previous standard (≥ 10 μg/dL). Because EBLs signify the threshold for public health intervention, this new definition increases demands on lead poisoning prevention efforts. Primary prevention has been proven to reduce lead poisoning cases and is also cost effective; however, federal budget cuts threaten the existence of such programs. Protection for the highest-risk children necessitates a reinstatement of federal funding to previous levels.

  7. Stable nitrogen isotope ratios and accumulation of various HOCs in northern Baltic aquatic food chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broman, D.; Axelman, J.; Bergqvist, P.A.

    Ratios of naturally occurring stable isotopes of nitrogen (δ15N) can be used to numerically classify trophic levels of organisms in food chains. By combining analysis results for various HOCs (e.g., PCDD/Fs, PCBs, DDTs, HCHs and some other pesticides), the biomagnification of these substances can be quantitatively estimated. In this paper different pelagic and benthic northern Baltic food chains were studied. The δ15N data gave food chain descriptions qualitatively consistent with previous conceptions of trophic arrangements in the food chains. The different HOC concentrations were plotted versus the δ15N values for the different trophic levels and an exponential model of the form e^(A+B·δ15N) was fitted to the data. The estimates of the constant B in the model allow for an estimation of a biomagnification power (B) of different singular, or groups of, contaminants. A B-value around zero indicates that a substance is flowing through the food chain without being magnified, whereas a value > 0 indicates that a substance is biomagnified. Negative B-values indicate that a substance is not taken up or is metabolized. The A-term of the expression is only a scaling factor depending on the background level of the contaminant.
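
    Since the abstract states the model explicitly, fitting it is straightforward; the sketch below fits C = e^(A+B·δ15N) to hypothetical concentration data and reads off the biomagnification power B.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def biomag(d15n, A, B):
        """Concentration versus trophic position: C = exp(A + B * d15N)."""
        return np.exp(A + B * d15n)

    # hypothetical HOC concentrations (ng/g) along one Baltic food chain
    d15n = np.array([3.5, 7.0, 10.5, 14.0])    # per mil, one value per trophic level
    conc = np.array([2.0, 6.5, 19.0, 60.0])

    (A, B), _ = curve_fit(biomag, d15n, conc, p0=(0.0, 0.3))
    verdict = "biomagnified" if B > 0 else "diluted or metabolized"
    print(f"B = {B:.2f} ({verdict})")
    ```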

  8. Ice Age Sea Level Change on a Dynamic Earth

    NASA Astrophysics Data System (ADS)

    Austermann, J.; Mitrovica, J. X.; Latychev, K.; Rovere, A.; Moucha, R.

    2014-12-01

    Changes in global mean sea level (GMSL) are a sensitive indicator of climate variability during the current ice age. Reconstructions are largely based on local sea level records, and the mapping to GMSL is computed from simulations of glacial isostatic adjustment (GIA) on 1-D Earth models. We argue, using two case studies, that resolving important, outstanding issues in ice age paleoclimate requires a more sophisticated consideration of mantle structure and dynamics. First, we consider the coral record from Barbados, which is widely used to constrain global ice volume changes since the Last Glacial Maximum (LGM, ~21 ka). Analyses of the record using 1-D viscoelastic Earth models have estimated a GMSL change since LGM of ~120 m, a value at odds with analyses of other far field records, which range from 130-135 m. We revisit the Barbados case using a GIA model that includes laterally varying Earth structure (Austermann et al., Nature Geo., 2013) and demonstrate that neglecting this structure, in particular the high-viscosity slab in the mantle linked to the subduction of the South American plate, has biased (low) previous estimates of GMSL change since LGM by ~10 m. Our analysis brings the Barbados estimate into accord with studies from other far-field sites. Second, we revisit estimates of GMSL during the mid-Pliocene warm period (MPWP, ~3 Ma), which was characterized by temperatures 2-3°C higher than present. The ice volume deficit during this period is a source of contention, with estimates ranging from 0-40 m GMSL equivalent. We argue that refining estimates of ice volume during MPWP requires a correction for mantle flow induced dynamic topography (DT; Rowley et al., Science, 2013), a signal neglected in previous studies of ice age sea level change. We present estimates of GIA- and DT-corrected elevations of MPWP shorelines from the U.S. east coast, Australia and South Africa in an attempt to reconcile these records with a single GMSL value.

  9. A volumetric technique for fossil body mass estimation applied to Australopithecus afarensis.

    PubMed

    Brassey, Charlotte A; O'Mahoney, Thomas G; Chamberlain, Andrew T; Sellers, William I

    2018-02-01

    Fossil body mass estimation is a well established practice within the field of physical anthropology. Previous studies have relied upon traditional allometric approaches, in which the relationship between one/several skeletal dimensions and body mass in a range of modern taxa is used in a predictive capacity. The lack of relatively complete skeletons has thus far limited the potential application of alternative mass estimation techniques, such as volumetric reconstruction, to fossil hominins. Yet across vertebrate paleontology more broadly, novel volumetric approaches are resulting in predicted values for fossil body mass very different to those estimated by traditional allometry. Here we present a new digital reconstruction of Australopithecus afarensis (A.L. 288-1; 'Lucy') and a convex hull-based volumetric estimate of body mass. The technique relies upon identifying a predictable relationship between the 'shrink-wrapped' volume of the skeleton and known body mass in a range of modern taxa, and subsequent application to an articulated model of the fossil taxa of interest. Our calibration dataset comprises whole body computed tomography (CT) scans of 15 species of modern primate. The resulting predictive model is characterized by a high correlation coefficient (r^2 = 0.988) and a percentage standard error of 20%, and performs well when applied to modern individuals of known body mass. Application of the convex hull technique to A. afarensis results in a relatively low body mass estimate of 20.4 kg (95% prediction interval 13.5-30.9 kg). A sensitivity analysis on the articulation of the chest region highlights the sensitivity of our approach to the reconstruction of the trunk, and the incomplete nature of the preserved ribcage may explain the low values for predicted body mass here. We suggest that the heaviest of previous estimates would require the thorax to be expanded to an unlikely extent, yet this can only be properly tested when more complete fossils are available. Copyright © 2017 Elsevier Ltd. All rights reserved.
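
    The convex hull workflow reduces to two steps that are easy to sketch: compute the hull volume of the articulated point cloud, and apply a power-law calibration fitted in log-log space to modern taxa. The calibration numbers and the stand-in 'skeleton' below are hypothetical.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(7)

    # hypothetical calibration: hull volume (m^3) vs body mass (kg) in modern primates
    hull_vol = np.array([0.004, 0.010, 0.025, 0.060, 0.150])
    body_mass = np.array([3.5, 9.0, 24.0, 60.0, 140.0])

    # power law mass = a * volume^b, fitted in log-log space
    b, log_a = np.polyfit(np.log(hull_vol), np.log(body_mass), 1)

    def predict_mass(points):
        """Convex-hull ('shrink-wrap') volume of landmarks -> predicted mass (kg)."""
        volume = ConvexHull(points).volume
        return np.exp(log_a) * volume ** b

    skeleton = rng.normal(scale=0.05, size=(500, 3))  # stand-in articulated point cloud
    print(f"predicted body mass: {predict_mass(skeleton):.1f} kg")
    ```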

  10. A glucose monitoring system for on line estimation in man of blood glucose concentration using a miniaturized glucose sensor implanted in the subcutaneous tissue and a wearable control unit.

    PubMed

    Poitout, V; Moatti-Sirat, D; Reach, G; Zhang, Y; Wilson, G S; Lemonnier, F; Klein, J C

    1993-07-01

    We have developed a miniaturized glucose sensor which has been shown previously to function adequately when implanted in the subcutaneous tissue of rats and dogs. Following a glucose load, the sensor output increases, making it possible to calculate a sensitivity coefficient to glucose in vivo, and an extrapolated background current in the absence of glucose. These parameters are used for estimating at any time the apparent subcutaneous glucose concentration from the current. In the previous studies, this calibration was performed a posteriori, on the basis of the retrospective analysis of the changes in blood glucose and in the current generated by the sensor. However, for clinical application of the system, an on line estimation of glucose concentration would be necessary. Thus, this study was undertaken in order to assess the possibility of calibrating the sensor in real time, using a novel calibration procedure and a monitoring unit which was specifically designed for this purpose. This electronic device is able to measure, to filter and to store the current. During an oral glucose challenge, when a stable current is reached, it is possible to feed the unit with two different values of blood glucose and their corresponding times. The unit calculates the in vivo parameters, transforms every single value of current into an estimation of the glucose concentration, and then displays this estimation. In this study, 11 sensors were investigated of which two did not respond to glucose. In the other nine trials, the volunteers were asked to record every 30 s what appeared on the display during the secondary decrease in blood glucose.
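
    The two-point, in vivo calibration described here is simple arithmetic; a sketch (with hypothetical readings and units) follows.

    ```python
    def calibrate_two_point(i1, g1, i2, g2):
        """Two-point in vivo calibration: sensitivity and extrapolated background.

        i1, i2: sensor currents (nA) at times when blood glucose was g1, g2 (mmol/L).
        Returns (sensitivity coefficient, background current at zero glucose).
        """
        sensitivity = (i2 - i1) / (g2 - g1)   # nA per mmol/L
        background = i1 - sensitivity * g1    # extrapolated current at zero glucose
        return sensitivity, background

    def estimate_glucose(current, sensitivity, background):
        """Transform a current reading into an apparent glucose concentration."""
        return (current - background) / sensitivity

    s, i0 = calibrate_two_point(3.2, 5.0, 6.8, 11.0)   # hypothetical readings
    print(f"estimated glucose at 5.0 nA: {estimate_glucose(5.0, s, i0):.1f} mmol/L")
    ```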

  11. Turbulent heat fluxes by profile and inertial dissipation methods: analysis of the atmospheric surface layer from shipboard measurements during the SOFIA/ASTEX and SEMAPHORE experiments

    NASA Astrophysics Data System (ADS)

    Dupuis, Hélène; Weill, Alain; Katsaros, Kristina; Taylor, Peter K.

    1995-10-01

    Heat flux estimates obtained using the inertial dissipation method, and the profile method applied to radiosonde soundings, are assessed with emphasis on the parameterization of the roughness lengths for temperature and specific humidity. Results from the inertial dissipation method show a decrease of the temperature and humidity roughness lengths for increasing neutral wind speed, in agreement with previous studies. The sensible heat flux estimates were obtained using the temperature estimated from the speed of sound determined by a sonic anemometer. This method seems very attractive for estimating heat fluxes over the ocean. However, allowance must be made in the inertial dissipation method for non-neutral stratification. The SOFIA/ASTEX and SEMAPHORE results show that, in unstable stratification, a term due to the transport terms in the turbulent kinetic energy budget has to be included in order to determine the friction velocity with better accuracy. Using the profile method with radiosonde data, the roughness length values showed large scatter. A reliable estimate of the temperature roughness length could not be obtained. The humidity roughness length values were compatible with those found using the inertial dissipation method.

  12. Parsimonious estimation of the Wechsler Memory Scale, Fourth Edition demographically adjusted index scores: immediate and delayed memory.

    PubMed

    Miller, Justin B; Axelrod, Bradley N; Schutte, Christian

    2012-01-01

    The recent release of the Wechsler Memory Scale Fourth Edition contains many improvements from a theoretical and administration perspective, including demographic corrections using the Advanced Clinical Solutions. Although the administration time has been reduced from previous versions, a shortened version may be desirable in certain situations given practical time limitations in clinical practice. The current study evaluated two- and three-subtest estimations of demographically corrected Immediate and Delayed Memory index scores using both simple arithmetic prorating and regression models. All estimated values were significantly associated with observed index scores. Use of Lin's Concordance Correlation Coefficient as a measure of agreement showed a high degree of precision and virtually zero bias in the models, although the regression models showed a stronger association than prorated models. Regression-based models proved to be more accurate than prorated estimates with less dispersion around observed values, particularly when using three subtest regression models. Overall, the present research shows strong support for estimating demographically corrected index scores on the WMS-IV in clinical practice with an adequate performance using arithmetically prorated models and a stronger performance using regression models to predict index scores.

  13. Assessing the Value of Frost Forecasts to Orchardists: A Dynamic Decision-Making Approach.

    NASA Astrophysics Data System (ADS)

    Katz, Richard W.; Murphy, Allan H.; Winkler, Robert L.

    1982-04-01

    The methodology of decision analysis is used to investigate the economic value of frost (i.e., minimum temperature) forecasts to orchardists. First, the fruit-frost situation and previous studies of the value of minimum temperature forecasts in this context are described. Then, after a brief overview of decision analysis, a decision-making model for the fruit-frost problem is presented. The model involves identifying the relevant actions and events (or outcomes), specifying the effect of taking protective action, and describing the relationships among temperature, bud loss, and yield loss. A bivariate normal distribution is used to model the relationship between forecast and observed temperatures, thereby characterizing the quality of different types of information. Since the orchardist wants to minimize expenses (or maximize payoffs) over the entire frost-protection season, and since current actions and outcomes at any point in the season are related to both previous and future actions and outcomes, the decision-making problem is inherently dynamic in nature. As a result, a class of dynamic models known as Markov decision processes is considered. A computational technique called dynamic programming is used in conjunction with these models to determine the optimal actions and to estimate the value of meteorological information. Some results concerning the value of frost forecasts to orchardists in the Yakima Valley of central Washington are presented for the cases of red delicious apples, bartlett pears, and elberta peaches. Estimates of the parameter values in the Markov decision process are obtained from relevant physical and economic data. Twenty years of National Weather Service forecast and observed temperatures for the Yakima key station are used to estimate the quality of different types of information, including perfect forecasts, current forecasts, and climatological information. The orchardist's optimal actions over the frost-protection season and the expected expenses associated with the use of such information are determined using a dynamic programming algorithm. The value of meteorological information is defined as the difference between the expected expense for the information of interest and the expected expense for climatological information. Over the entire frost-protection season, the value estimates (in 1977 dollars) for current forecasts were $808 per acre for red delicious apples, $492 per acre for bartlett pears, and $270 per acre for elberta peaches. These amounts account for 66, 63, and 47%, respectively, of the economic value associated with decisions based on perfect forecasts. Varying the quality of the minimum temperature forecasts reveals that the relationship between the accuracy and value of such forecasts is nonlinear and that improvements in current forecasts would not be as significant in terms of economic value as were comparable improvements in the past. Several possible extensions of this study of the value of frost forecasts to orchardists are briefly described. Finally, the application of the dynamic model formulated in this paper to other decision-making problems involving the use of meteorological information is mentioned.
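
    A toy backward-induction sketch of the kind of Markov decision process described above, with the surviving crop fraction as the state. All dollar figures and probabilities are invented, protection is assumed fully effective, and a real formulation would condition the nightly frost probability on the forecast rather than hold it fixed:

```python
import numpy as np

N_NIGHTS = 30
FRACTIONS = np.linspace(0.0, 1.0, 21)   # surviving-crop-fraction grid
PROTECT_COST = 40.0                     # $/acre per protected night
CROP_VALUE = 2000.0                     # $/acre for a full harvest
P_FROST = 0.15                          # nightly frost probability (forecast quality enters here)
LOSS_IF_FROST = 0.25                    # fraction of surviving buds lost if unprotected

def nearest(f):
    return int(np.abs(FRACTIONS - f).argmin())

value = CROP_VALUE * FRACTIONS          # terminal payoff at season's end
for night in range(N_NIGHTS):           # backward induction over nights
    new_value = np.empty_like(value)
    for i, f in enumerate(FRACTIONS):
        v_protect = value[i] - PROTECT_COST   # protection assumed fully effective
        v_wait = (P_FROST * value[nearest(f * (1 - LOSS_IF_FROST))]
                  + (1 - P_FROST) * value[i])
        new_value[i] = max(v_protect, v_wait)
    value = new_value

print(value[-1])   # expected payoff entering the season with a full crop
```

    Repeating the computation with climatological frost probabilities in place of forecast-based ones, and differencing the expected expenses, gives the forecast-value quantity the paper defines.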

  14. Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme

    PubMed Central

    Li, Shanbin; Sauter, Dominique; Xu, Bugong

    2011-01-01

    In this paper, the sensor data is transmitted only when the absolute value of the difference between the current sensor value and the previously transmitted one is greater than a given threshold. Based on this send-on-delta scheme, which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves resource utilization with graceful degradation of the fault estimation performance. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
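
    The send-on-delta rule itself is compact; a minimal sketch (the threshold and readings are illustrative):

```python
class SendOnDeltaSampler:
    """Transmit a sensor reading only when it deviates from the last
    transmitted value by more than a fixed threshold (send-on-delta)."""
    def __init__(self, delta):
        self.delta = delta
        self.last_sent = None

    def sample(self, value):
        if self.last_sent is None or abs(value - self.last_sent) > self.delta:
            self.last_sent = value
            return value      # transmit
        return None           # stay silent; the estimator holds or predicts

sampler = SendOnDeltaSampler(delta=0.5)
readings = [0.0, 0.2, 0.7, 0.8, 1.4, 1.5]
print([sampler.sample(r) for r in readings])
# [0.0, None, 0.7, None, 1.4, None]
```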

  15. Interval-based reconstruction for uncertainty quantification in PET

    NASA Astrophysics Data System (ADS)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.

  16. A new exact and more powerful unconditional test of no treatment effect from binary matched pairs.

    PubMed

    Lloyd, Chris J

    2008-09-01

    We consider the problem of testing for a difference in the probability of success from matched binary pairs. Starting with three standard inexact tests, the nuisance parameter is first estimated and then the residual dependence is eliminated by maximization, producing what I call an E+M P-value. The E+M P-value based on McNemar's statistic is shown numerically to dominate previous suggestions, including partially maximized P-values as described in Berger and Sidik (2003, Statistical Methods in Medical Research 12, 91-108). The latter method, however, may have computational advantages for large samples.
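
    A simplified sketch of the maximization idea behind such exact unconditional tests: compute McNemar's statistic, then take the supremum of the exact p-value over a grid of the nuisance (discordance) probability. Lloyd's E+M procedure first plugs in an estimate of the nuisance and then eliminates the residual dependence by maximization; that refinement is omitted here, so this is the plain maximized p-value:

```python
import numpy as np
from scipy.stats import binom

def mcnemar_z(b, c):
    d = b + c
    return 0.0 if d == 0 else abs(b - c) / np.sqrt(d)

def maximized_p(b_obs, c_obs, n, grid_size=200):
    """Exact unconditional p-value for n matched pairs with b_obs and c_obs
    discordant pairs of each kind, maximized over the nuisance probability
    that a pair is discordant."""
    z_obs = mcnemar_z(b_obs, c_obs)
    p_max = 0.0
    for delta in np.linspace(0.01, 0.99, grid_size):   # P(pair is discordant)
        p = 0.0
        for d in range(n + 1):                         # number of discordant pairs
            w = binom.pmf(d, n, delta)
            if w < 1e-15:
                continue
            b = np.arange(d + 1)                       # split of discordant pairs
            z = np.abs(2 * b - d) / np.sqrt(d) if d else np.zeros(1)
            p += w * binom.pmf(b, d, 0.5)[z >= z_obs - 1e-12].sum()
        p_max = max(p_max, p)
    return p_max

print(maximized_p(b_obs=9, c_obs=2, n=40))
```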

  17. Estimation of Road Friction Coefficient in Different Road Conditions Based on Vehicle Braking Dynamics

    NASA Astrophysics Data System (ADS)

    Zhao, You-Qun; Li, Hai-Qing; Lin, Fen; Wang, Jian; Ji, Xue-Wu

    2017-07-01

    The accurate estimation of the road friction coefficient in active safety control systems has become increasingly prominent. Most previous studies on road friction estimation have used only vehicle longitudinal or lateral dynamics and often ignored load transfer, which tends to produce inaccurate estimates of the actual road friction coefficient. A novel method considering load transfer between the front and rear axles is proposed to estimate the road friction coefficient based on a braking dynamics model of a two-wheeled vehicle. Sliding mode control is used to build the ideal braking torque controller, whose control target is to make the actual wheel slip ratios of the front and rear wheels track the ideal slip ratio. In order to eliminate the chattering problem of the sliding mode controller, an integral switching surface is used to design the sliding mode surface. A second-order linear extended state observer is designed to observe the road friction coefficient based on the wheel speeds and braking torques of the front and rear wheels. The proposed road friction coefficient estimation schemes are evaluated by simulation in ADAMS/Car. The results show that the estimated values agree well with the actual values in different road conditions. The observer can estimate the road friction coefficient accurately in real time and resist external disturbances. The proposed research provides a novel method to estimate the road friction coefficient with strong robustness and improved accuracy.
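
    The extended-state-observer idea can be illustrated on a single wheel: treat the unknown friction term as an extra state driven to match the measured wheel speed. A minimal sketch with invented plant numbers; this is a generic second-order linear ESO, not the paper's exact design:

```python
J, R, FZ = 1.2, 0.3, 4000.0     # wheel inertia (kg m^2), radius (m), load (N)
MU_TRUE = 0.7                   # road friction coefficient to be recovered
TB = 1200.0                     # applied braking torque (N m)
DT, T_END = 1e-3, 0.15
B1, B2 = 120.0, 3600.0          # observer gains, double pole at s = -60

w = 50.0                        # true wheel speed (rad/s)
z1, z2 = 50.0, 0.0              # estimates of w and of the friction term r*mu*Fz/J
for _ in range(int(T_END / DT)):
    w += DT * (R * MU_TRUE * FZ - TB) / J        # plant: J dw/dt = r*Fx - Tb
    e = w - z1                                   # innovation from measured wheel speed
    z1 += DT * (z2 - TB / J + B1 * e)
    z2 += DT * (B2 * e)                          # extended state absorbs unknown friction

print(z2 * J / (R * FZ))        # estimated mu, converges near 0.7
```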

  18. Estimating sensitivity and specificity for technology assessment based on observer studies.

    PubMed

    Nishikawa, Robert M; Pesce, Lorenzo L

    2013-07-01

    The goal of this study was to determine the accuracy and precision of using scores from a receiver operating characteristic rating scale to estimate sensitivity and specificity. We used data collected in a previous study that measured the improvements in radiologists' ability to classify mammographic microcalcification clusters as benign or malignant with and without the use of a computer-aided diagnosis scheme. Sensitivity and specificity were estimated from the rating data and from a question that directly asked the radiologists their biopsy recommendations; the recommendation was treated as the "truth" because it is the actual recall decision. By thresholding the rating data, sensitivity and specificity were estimated for different threshold values. Because of interreader and intrareader variability, estimated sensitivity and specificity values for individual readers could be as much as 100% in error when using rating data compared to using the biopsy recommendation data. When pooled together, the estimates obtained by thresholding the rating data were in good agreement with sensitivity and specificity estimated from the recommendation data; however, the statistical power of the rating-data estimates was lower. By simply asking the observer his or her explicit recommendation (eg, biopsy or no biopsy), sensitivity and specificity can be measured directly, giving a more accurate description of empirical variability, and the power of the study can be maximized. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
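
    Thresholding rating data to obtain sensitivity and specificity is straightforward; a minimal sketch with made-up ratings on a 0-100 scale:

```python
import numpy as np

def sens_spec_from_ratings(ratings, truth, threshold):
    """Dichotomize ROC-style ratings at a threshold and compute
    sensitivity and specificity against the truth labels (1 = malignant)."""
    ratings = np.asarray(ratings)
    truth = np.asarray(truth, dtype=bool)
    called_positive = ratings >= threshold
    sensitivity = (called_positive & truth).sum() / truth.sum()
    specificity = (~called_positive & ~truth).sum() / (~truth).sum()
    return sensitivity, specificity

ratings = [10, 35, 60, 80, 95, 20, 55, 90]
truth   = [0,   0,  1,  1,  1,  0,  0,  1]
print(sens_spec_from_ratings(ratings, truth, threshold=50))  # (1.0, 0.75)
```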

  19. Expanding the 2011 Prague, OK Event Catalog: Detections, Relocations, and Stress Drop Estimates

    NASA Astrophysics Data System (ADS)

    Clerc, F.; Cochran, E. S.; Dougherty, S. L.; Keranen, K. M.; Harrington, R. M.

    2016-12-01

    The Mw 5.6 earthquake occurring on 6 Nov. 2011, near Prague, OK, is thought to have been triggered by a Mw 4.8 foreshock, which was likely induced by fluid injection into local wastewater disposal wells [Keranen et al., 2013; Sumy et al., 2014]. Previous stress drop estimates for the sequence have suggested values lower than those for most Central and Eastern U.S. tectonic events of similar magnitudes [Hough, 2014; Sun & Hartzell, 2014; Sumy & Neighbors et al., 2016]. Better stress drop estimates allow more realistic assessment of seismic hazard and more effective regulation of wastewater injection. More reliable estimates of source properties may help to differentiate induced events from natural ones. Using data from local and regional networks, we perform event detections, relocations, and stress drop calculations of the Prague aftershock sequence. We use the Match & Locate method, a variation on the matched-filter method which detects events of lower magnitudes by stacking cross-correlograms from different stations [Zhang & Wen, 2013; 2015], in order to create a more complete catalog from 6 Nov to 31 Dec 2011. We then relocate the detected events using the HypoDD double-difference algorithm. Using our enhanced catalog and relocations, we examine the seismicity distribution for evidence of migration and investigate implications for triggering mechanisms. To account for path and site effects, we calculate stress drops using the Empirical Green's Function (EGF) spectral ratio method, beginning with 2730 previously relocated events. We determine whether there is a correlation between the stress drop magnitudes and the spatial and temporal distribution of events, including depth, position relative to existing faults, and proximity to injection wells. Finally, we consider the range of stress drop values and scaling with respect to event magnitudes within the context of previously published work for the Prague sequence as well as other induced and natural sequences.

  20. The costs of managing genital warts in the UK by devolved nation: England, Scotland, Wales and Northern Ireland.

    PubMed

    Coles, V A H; Chapman, R; Lanitis, T; Carroll, S M

    2016-01-01

    Genital warts, 90% of which are caused by human papillomavirus types 6 and 11, are a significant problem in the UK. The cost of managing genital warts was previously estimated at £52.4 million for 2010. The objective of this study was to estimate the cost of genital warts management up to 2012 in the UK and by jurisdiction. Population statistics and the number of reported genital warts cases in genito-urinary medicine clinics were obtained and extrapolated to 2012. Cases of genital warts treated in primary care were estimated from The Health Improvement Network database. The number of visits and therapy required were estimated by genito-urinary medicine experts. Costs were obtained from the appropriate national tariffs. The model estimated there were 220,875 genital warts cases in the UK in 2012, costing £58.44 million (£265/patient). It estimated 157,793 cases in England costing £41.74 million; 7468 cases in Scotland costing £1.90 million; 7095 cases in Wales costing £1.87 million; and 3621 cases in Northern Ireland costing £948,000. The full National Health Service costs for the management of genital warts have never previously been estimated separately for each jurisdiction. Findings reveal a significant economic burden, which is important to quantify when understanding the value of quadrivalent human papilloma virus vaccination. © The Author(s) 2015.

  1. Estimating the cost of a smoking employee.

    PubMed

    Berman, Micah; Crane, Rob; Seiber, Eric; Munur, Mehmet

    2014-09-01

    We attempted to estimate the excess annual costs that a US private employer may attribute to employing an individual who smokes tobacco, as compared to a non-smoking employee. Reviewing and synthesising previous literature estimating certain discrete costs associated with smoking employees, we developed a cost estimation approach that approximates the total of such costs for U.S. employers. We examined absenteeism, presenteeism, smoking breaks, healthcare costs, and pension benefits for smokers. Our best estimate of the annual excess cost to employ a smoker is $5816. This estimate should be taken as a general indicator of the extent of excess costs, not as a predictive point value. Employees who smoke impose significant excess costs on private employers. The results of this study may help inform employer decisions about tobacco-related policies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  2. Determination of hydrologic properties needed to calculate average linear velocity and travel time of ground water in the principal aquifer underlying the southeastern part of Salt Lake Valley, Utah

    USGS Publications Warehouse

    Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.

    1994-01-01

    A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range used in developing a ground-water flow model of the principal aquifer in the early 1980s. Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to be underlain by similar deposits. Delineation of the zones was based on the depositional history of the area and the distribution of sediments shown on a surficial geologic map. Water levels in wells were measured twice in 1990: during late winter, when ground-water withdrawals were the least and water levels the highest, and again in late summer, when ground-water withdrawals were the greatest and water levels the lowest. These water levels were used to construct potentiometric-contour maps and subsequently to determine the variability of the slope in the potentiometric surface in the area. Values for the three properties, derived from the described sources of information, were used to produce a map showing the general distribution of average linear velocity of ground water moving through the principal aquifer of the study area. Velocities derived ranged from 0.06 to 144 feet per day, with a median of about 3 feet per day. Values were slightly faster for late summer 1990 than for late winter 1990, mainly because increased withdrawal of water during the summer created slightly steeper hydraulic-head gradients between the recharge area near the mountain front and the well fields farther to the west. The fastest average linear-velocity values were located at the mouth of Little Cottonwood Canyon and south of Dry Creek near the mountain front, where the hydraulic conductivity was estimated to be the largest because the drillers described the sediments as predominantly clean and coarse grained. Both of these areas also had steep slopes in the potentiometric surface. Other areas where average linear velocity was fast included small areas near pumping wells where the slope in the potentiometric surface was locally steepened. No apparent relation between average linear velocity and porosity could be seen in the mapped distributions of these two properties. Calculation of travel time along a flow line to a well in the southwestern part of the study area during the sum
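
    The velocity computation underlying the map is the standard average-linear-velocity relation v = Ki/n (hydraulic conductivity times hydraulic gradient, divided by effective porosity). A small sketch with values inside the ranges quoted above; the gradient is an illustrative assumption:

```python
def average_linear_velocity(K_ft_per_day, gradient, porosity):
    """Average linear (seepage) velocity v = K*i/n, in feet per day."""
    return K_ft_per_day * gradient / porosity

def travel_time_days(distance_ft, velocity_ft_per_day):
    return distance_ft / velocity_ft_per_day

v = average_linear_velocity(K_ft_per_day=180.0, gradient=0.005, porosity=0.25)
print(v)                             # 3.6 ft/d, near the reported median
print(travel_time_days(5280.0, v))   # ~1470 days to travel one mile
```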

  3. Precise Absolute Astrometry from the VLBA Imaging and Polarimetry Survey at 5 GHz

    NASA Technical Reports Server (NTRS)

    Petrov, L.; Taylor, G. B.

    2011-01-01

    We present accurate positions for 857 sources derived from the astrometric analysis of 16 eleven-hour experiments from the Very Long Baseline Array imaging and polarimetry survey at 5 GHz (VIPS). Among the observed sources, positions of 430 objects were not previously determined at milliarcsecond-level accuracy. For 95% of the sources the uncertainty of their positions ranges from 0.3 to 0.9 mas, with a median value of 0.5 mas. This estimate of accuracy is substantiated by the comparison of positions of 386 sources that were previously observed in astrometric programs simultaneously at 2.3/8.6 GHz. Surprisingly, the ionosphere contribution to group delay was adequately modeled with the use of the total electron content maps derived from GPS observations and only marginally affected estimates of source coordinates.

  4. The importance of information on relatives for the prediction of genomic breeding values and the implications for the makeup of reference data sets in livestock breeding schemes.

    PubMed

    Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J

    2012-02-09

    The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives versus that of unrelated or more distantly related individuals in the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic Best Linear Unbiased Prediction (gBLUP) was compared to two pedigree-based BLUP methods, one with a shallow one-generation pedigree and the other with a deep ten-generation pedigree. The accuracy of estimated breeding values for different groups of selection candidates that had varying degrees of relationship to a reference data set of 1750 animals was investigated. The gBLUP method predicted breeding values more accurately than BLUP. The most accurate breeding values were estimated using gBLUP for closely related animals. Similarly, the pedigree-based BLUP methods were also accurate for closely related animals; however, when the pedigree-based BLUP methods were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP breeding values for animals that had no pedigree relationship with animals in the reference data set still achieved substantial accuracy. An animal's relationship to the reference data set is an important factor for the accuracy of genomic predictions. Animals that share a close relationship to the reference data set had the highest accuracy from genomic predictions. However, a baseline accuracy that is driven by the reference data set size and the overall effective population size enables gBLUP to estimate a breeding value for unrelated animals within a population (breed), using information previously ignored by pedigree-based BLUP methods.
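
    For the simplest animal model (one record per animal, y = mu + u + e), gBLUP can be sketched in a few lines: build a genomic relationship matrix from centred genotypes and shrink the phenotypes through it. The simulation below and the assumed-known variance ratio are ours; real analyses estimate variance components (e.g., by REML):

```python
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_markers = 200, 1000

# Simulated 0/1/2 genotypes and a simple additive trait
M = rng.binomial(2, 0.5, size=(n_animals, n_markers)).astype(float)
u_true = M @ rng.normal(0, 0.05, n_markers)       # true breeding values
y = u_true + rng.normal(0, 1.0, n_animals)        # phenotypes

# VanRaden-style genomic relationship matrix from centred genotypes
p = M.mean(axis=0) / 2
Z = M - 2 * p
G = Z @ Z.T / (2 * (p * (1 - p)).sum())

# BLUP of breeding values: u_hat = G (G + lambda I)^-1 (y - mean),
# lambda = sigma_e^2 / sigma_u^2 (assumed known here for brevity)
lam = np.var(y - u_true) / np.var(u_true)
u_hat = G @ np.linalg.solve(G + lam * np.eye(n_animals), y - y.mean())

print(np.corrcoef(u_hat, u_true)[0, 1])   # accuracy of genomic prediction
```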

  5. An extension of the Saltykov method to quantify 3D grain size distributions in mylonites

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco A.; Llana-Fúnez, Sergio

    2016-12-01

    The estimation of 3D grain size distributions (GSDs) in mylonites is key to understanding the rheological properties of crystalline aggregates and to constraining dynamic recrystallization models. This paper investigates whether a common stereological method, the Saltykov method, is appropriate for the study of GSDs in mylonites. In addition, we present a new stereological method, named the two-step method, which estimates a lognormal probability density function describing the 3D GSD. Both methods are tested for reproducibility and accuracy using natural and synthetic data sets. The main conclusion is that both methods are accurate and simple enough to be used systematically in recrystallized aggregates with near-equant grains. The Saltykov method is particularly suitable for estimating the volume percentage of particular grain-size fractions, with an absolute uncertainty of ±5 (volume percent) in the estimates. The two-step method is suitable for quantifying the shape of the actual 3D GSD in recrystallized rocks using a single value, the multiplicative standard deviation (MSD) parameter, and provides a precision in the estimate typically better than 5%. The novel method yields an MSD value in recrystallized quartz that differs from previous estimates based on apparent 2D GSDs, highlighting the inconvenience of using apparent GSDs for such tasks.
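
    The lognormal-fitting step of such a two-step approach can be sketched as below. Note that the actual method first unfolds the measured 2D sections into a 3D GSD (the Saltykov step), which is omitted here, so this shows only the final descriptive step on synthetic data:

```python
import numpy as np

def lognormal_msd(diameters):
    """Fit a lognormal to grain sizes and report the geometric mean and
    the multiplicative standard deviation MSD = exp(sigma)."""
    logs = np.log(diameters)
    return np.exp(logs.mean()), np.exp(logs.std(ddof=1))

rng = np.random.default_rng(0)
grains = rng.lognormal(mean=np.log(40.0), sigma=0.4, size=500)  # microns
geo_mean, msd = lognormal_msd(grains)
print(geo_mean, msd)   # ~40 and ~1.49 (= e^0.4)
```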

  6. Estimation of state and material properties during heat-curing molding of composite materials using data assimilation: A numerical study.

    PubMed

    Matsuzaki, Ryosuke; Tachikawa, Takeshi; Ishizuka, Junya

    2018-03-01

    Accurate simulations of carbon fiber-reinforced plastic (CFRP) molding are vital for the development of high-quality products. However, such simulations are challenging and previous attempts to improve the accuracy of simulations by incorporating the data acquired from mold monitoring have not been completely successful. Therefore, in the present study, we developed a method to accurately predict various CFRP thermoset molding characteristics based on data assimilation, a process that combines theoretical and experimental values. The degree of cure as well as temperature and thermal conductivity distributions during the molding process were estimated using both temperature data and numerical simulations. An initial numerical experiment demonstrated that the internal mold state could be determined solely from the surface temperature values. A subsequent numerical experiment to validate this method showed that estimations based on surface temperatures were highly accurate in the case of degree of cure and internal temperature, although predictions of thermal conductivity were more difficult.
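
    At the core of such data assimilation is a measurement update that blends a model forecast with a sensor reading through their covariances. A generic Kalman-update sketch; the three-node "laminate" state and all covariance numbers are invented for illustration:

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """One measurement-update step: blend a model prediction x_pred
    (covariance P_pred) with an observation z = H x + noise (covariance R)."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P

# state = temperatures at [surface, mid, core]; only the surface is observed,
# and the off-diagonal prior terms let a surface reading correct the interior
x_pred = np.array([430.0, 415.0, 405.0])                     # model forecast, K
P_pred = np.array([[4.0, 3.0, 2.0],
                   [3.0, 9.0, 4.0],
                   [2.0, 4.0, 16.0]])
H = np.array([[1.0, 0.0, 0.0]])                              # surface sensor only
x, P = kalman_update(x_pred, P_pred, np.array([433.0]), H, np.array([[1.0]]))
print(x)   # interior estimates shift toward the surface measurement
```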

  7. Determination of kinetic parameters for 123-I thyroid uptake in healthy Japanese

    NASA Astrophysics Data System (ADS)

    Kusuhara, Hiroyuki; Maeda, Kazuya

    2017-09-01

    The purpose of this study was to compare the kinetic parameters for iodide accumulation in the thyroid in Japanese today with previously reported values. We determined the thyroid uptake of 123-I at 24 hours after oral administration in healthy male Japanese without any dietary restriction. The mean value was 16.1±5.4%, similar to or somewhat lower than values previously reported in Japan (1958-1972). Kinetic model analysis was conducted to obtain the clearance for thyroid uptake from the blood circulation. The thyroid uptake clearance of 123-I was 0.540±0.073 ml/min, similar to previously reported values. There is thus no obvious difference in 24-hour thyroid uptake or kinetic parameters in healthy Japanese over these 50 years. The fraction distributed to the thyroid gland is lower than in the ICRP reference man, and this difference must be taken into consideration when estimating the radiation exposure from the Fukushima accident in Japan.

  8. An inventory of undiscovered Canadian mineral resources

    NASA Technical Reports Server (NTRS)

    Labovitz, M. L.; Griffiths, J. C.

    1982-01-01

    Unit regional value (URV) and unit regional weight are area standardized measures of the expected value and quantity, respectively, of the mineral resources of a region. Estimation and manipulation of the URV statistic is the basis of an approach to mineral resource evaluation. Estimates of the kind and value of exploitable mineral resources yet to be discovered in the provinces of Canada are used as an illustration of the procedure. The URV statistic is set within a previously developed model wherein geology, as measured by point counting geologic maps, is related to the historical record of mineral resource production of well-developed regions of the world, such as the 50 states of the U.S.A.; these may be considered the training set. The Canadian provinces are related to this training set using geological information obtained in the same way from geologic maps of the provinces. The desired predictions of yet to be discovered mineral resources in the Canadian provinces arise as a consequence. The implicit assumption is that regions of similar geology, if equally well developed, will produce similar weights and values of mineral resources.

  9. Correction of misclassification bias induced by the residential mobility in studies examining the link between socioeconomic environment and cancer incidence.

    PubMed

    Bryere, Josephine; Pornet, Carole; Dejardin, Olivier; Launay, Ludivine; Guittet, Lydia; Launoy, Guy

    2015-04-01

    Many international ecological studies that examine the link between social environment and cancer incidence use a deprivation index based on the subjects' address at the time of diagnosis to evaluate socioeconomic status. Thus, subjects' social past is ignored, which leads to misclassification bias in the estimations. The objectives of this study were to include the latency delay in such estimations and to observe the effects. We adapted a previous methodology to correct estimates of the influence of socioeconomic environment on cancer incidence, considering the latency delay in measuring socioeconomic status. We implemented this method using French data. We evaluated the misclassification due to social mobility with census data and corrected the relative risks. Accounting for misclassification affected the values of the relative risks, and the corrected values showed a greater departure from the value 1 than the uncorrected ones. For cancers of the lung, colon-rectum, lips-mouth-pharynx, kidney, and esophagus in men, the over-incidence in the deprived categories was augmented by the correction. By not taking into account the latency period in measuring socioeconomic status, the burden of cancer associated with social inequality may be underestimated. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Oxygen isotope fractionation between bird bone phosphate and drinking water.

    PubMed

    Amiot, Romain; Angst, Delphine; Legendre, Serge; Buffetaut, Eric; Fourel, François; Adolfssen, Jan; André, Aurore; Bojar, Ana Voica; Canoville, Aurore; Barral, Abel; Goedert, Jean; Halas, Stanislaw; Kusuhashi, Nao; Pestchevitskaya, Ekaterina; Rey, Kevin; Royer, Aurélien; Saraiva, Antônio Álamo Feitosa; Savary-Sismondini, Bérengère; Siméon, Jean-Luc; Touzeau, Alexandra; Zhou, Zhonghe; Lécuyer, Christophe

    2017-06-01

    Oxygen isotope compositions of bone phosphate (δ18Op) were measured in broiler chickens reared in 21 farms worldwide characterized by contrasted latitudes and local climates. These sedentary birds were raised during an approximately 3- to 4-month period, and local precipitation was the ultimate source of their drinking water. This sampling strategy allowed the relationship to be determined between the bone phosphate δ18Op values (from 9.8 to 22.5‰ V-SMOW) and the local rainfall δ18Ow values estimated from nearby IAEA/WMO stations (from -16.0 to -1.0‰ V-SMOW). Linear least-squares fitting of the data provided the following isotopic fractionation equation: δ18Ow = 1.119 (±0.040) δ18Op - 24.222 (±0.644); R² = 0.98. The δ18Op-δ18Ow couples of five extant mallard ducks, a common buzzard, a European herring gull, a common ostrich, and a greater rhea fall within the predicted range of the equation, indicating that the relationship established for extant chickens can also be applied to birds of various ecologies and body masses. Applied to published oxygen isotope compositions of Miocene and Pliocene penguins from Peru, this new equation computes estimates of local seawater similar to those previously calculated. Applied to the basal bird Confuciusornis from the Early Cretaceous of Northeastern China, our equation gives a slightly higher δ18Ow value compared to the previously estimated one, possibly as a result of lower body temperature. These data indicate that caution should be exercised when the relationship estimated for modern birds is applied to their basal counterparts that likely had a metabolism intermediate between that of their theropod dinosaur ancestors and that of advanced ornithurines.
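
    Applying the reported regression is a one-liner; a small sketch (the example input value is arbitrary):

```python
def drinking_water_d18O(bone_phosphate_d18O):
    """Invert bird bone-phosphate delta-18-O to drinking-water delta-18-O
    using the regression quoted above (values in permil V-SMOW)."""
    return 1.119 * bone_phosphate_d18O - 24.222

print(drinking_water_d18O(16.0))   # about -6.3 permil
```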

  11. Frequency domain analysis of errors in cross-correlations of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri

    2016-12-01

    We analyse random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, which differs from previous time-domain methods. Extending previous theoretical results on the ensemble-averaged cross-spectrum, we estimate the confidence interval of the stacked cross-spectrum of a finite amount of data at each frequency using non-overlapping windows of fixed length. The extended theory also connects amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of the stacked cross-spectrum obtained with different lengths of noise data corresponding to different numbers of evenly spaced windows of the same duration. This method allows estimating the signal-to-noise ratio (SNR) of a noise cross-correlation in the frequency domain, without specifying the filter bandwidth or signal/noise windows that are needed for time-domain SNR estimation. Based on synthetic ambient noise data, we also compare the probability distributions, causal part amplitude, and SNR of the stacked cross-spectrum obtained using one-bit normalization or pre-whitening with those obtained without these pre-processing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing pre-processing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain values and uncertainties of the amplitude and phase velocity of the stacked noise cross-spectrum at different frequencies, using data from southern California at both a regional scale (~35 km) and a dense linear array (~20 m) across the plate-boundary faults. A block bootstrap resampling method is used to account for temporal correlation of the noise cross-spectrum at low frequencies (0.05-0.2 Hz) near the ocean microseismic peaks.
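
    The basic stacking scheme, averaging cross-spectra over non-overlapping windows and using their scatter as an uncertainty estimate, can be sketched as follows on synthetic records; the delayed common signal and noise levels are illustrative:

```python
import numpy as np

def stacked_cross_spectrum(x, y, win_len):
    """Average the cross-spectrum of two records over non-overlapping
    windows of fixed length; also return the per-frequency standard error
    of the real part as a simple frequency-domain uncertainty estimate."""
    n_win = len(x) // win_len
    spectra = []
    for k in range(n_win):
        seg = slice(k * win_len, (k + 1) * win_len)
        spectra.append(np.fft.rfft(x[seg]) * np.conj(np.fft.rfft(y[seg])))
    spectra = np.array(spectra)
    stack = spectra.mean(axis=0)
    stderr_real = spectra.real.std(axis=0, ddof=1) / np.sqrt(n_win)
    return stack, stderr_real

rng = np.random.default_rng(2)
common = rng.normal(size=20000)                    # shared "ambient" signal
x = common + rng.normal(size=20000)                # station 1
y = np.roll(common, 5) + rng.normal(size=20000)    # station 2, delayed copy
stack, err = stacked_cross_spectrum(x, y, win_len=1000)
print(np.abs(stack[:5]), err[:5])
```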

  12. Asymmetrical effects of mesophyll conductance on fundamental photosynthetic parameters and their relationships estimated from leaf gas exchange measurements

    USDA-ARS?s Scientific Manuscript database

    Most previous analyses of leaf gas exchange measurements assumed an infinite value of mesophyll conductance (gm) and thus equaled CO2 partial pressures in the substomatal cavity and chloroplast. Yet an increasing number of studies have recognized that gm is finite and there is a drawdown of CO2 part...

  13. Re-Examining the Role of Teacher Quality in the Educational Production Function. Working Paper 2007-03

    ERIC Educational Resources Information Center

    Koedel, Cory; Betts, Julian R.

    2007-01-01

    This study uses administrative data linking students and teachers at the classroom level to estimate teacher value-added to student test scores. We find that variation in teacher quality is an important contributor to student achievement--more important than has been implied by previous work. This result is attributable, at least in part, to the…

  14. QUENCHING OF CARBON MONOXIDE AND METHANE IN THE ATMOSPHERES OF COOL BROWN DWARFS AND HOT JUPITERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Visscher, Channon; Moses, Julianne I., E-mail: visscher@lpi.usra.edu, E-mail: jmoses@spacescience.org

    We explore CO⇌CH4 quench kinetics in the atmospheres of substellar objects using updated timescale arguments, as suggested by a thermochemical kinetics and diffusion model that transitions from the thermochemical-equilibrium regime in the deep atmosphere to a quench-chemical regime at higher altitudes. More specifically, we examine CO quench chemistry on the T dwarf Gliese 229B and CH4 quench chemistry on the hot Jupiter HD 189733b. We describe a method for correctly calculating reverse rate coefficients for chemical reactions, discuss the predominant pathways for CO⇌CH4 interconversion as indicated by the model, and demonstrate that a simple timescale approach can be used to accurately describe the behavior of quenched species when updated reaction kinetics and mixing-length-scale assumptions are used. Proper treatment of quench kinetics has important implications for estimates of molecular abundances and/or vertical mixing rates in the atmospheres of substellar objects. Our model results indicate significantly higher Kzz values than previously estimated near the CO quench level on Gliese 229B, whereas current model-data comparisons using CH4 permit a wide range of Kzz values on HD 189733b. We also use updated reaction kinetics to revise previous estimates of the Jovian water abundance, based upon the observed abundance and chemical behavior of carbon monoxide. The CO chemical/observational constraint, along with Galileo entry probe data, suggests a water abundance of approximately 0.51-2.6 x solar (for a solar value of H2O/H2 = 9.61 x 10^-4) in Jupiter's troposphere, assuming vertical mixing from the deep atmosphere is the only source of tropospheric CO.
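
    The timescale argument itself is simple: a species quenches roughly where the vertical mixing time tau_mix = L^2/Kzz first falls below the chemical interconversion time. A toy illustration; the chemical-time law below is an arbitrary stand-in, not the CO/CH4 kinetics of the paper:

```python
import numpy as np

Kzz = 1e8                     # eddy diffusion coefficient, cm^2/s
L = 1e7                       # mixing length scale, cm
tau_mix = L**2 / Kzz          # mixing timescale, s

# chemical interconversion time lengthens steeply as temperature drops
# (purely illustrative temperature law)
T = np.linspace(800.0, 2000.0, 400)        # K along the profile
tau_chem = 5e19 * np.exp(-T / 45.0)        # s

quench_T = T[np.argmin(np.abs(tau_chem - tau_mix))]
print(tau_mix, quench_T)      # mixing time and approximate quench temperature
```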

  15. Glacial conditions in the Red Sea

    NASA Astrophysics Data System (ADS)

    Rohling, Eelco J.

    1994-10-01

    In this paper, results from previous studies on planktonic foraminifera, δ18O, and global sea level are combined to discuss climatic conditions in the Red Sea during the last glacial maximum (18,000 B.P.). First, the influence of the 120-m sea level lowering on the exchange transport through the strait of Bab-el-Mandab is considered. This strait is the only natural connection of the Red Sea to the open ocean. Next, glacial Red Sea outflow salinity is estimated (about 48 parts per thousand) from the foraminiferal record. Combined, these results yield an estimate of the glacial net water deficit, which appears to have been quite similar to the present (about 2 m yr-1). Finally, budget calculation of δ18O fluxes suggests that the glacial δ18O value of evaporation was about 50% of the present value. This is considered to have resulted from substantially increased mean wind speeds over the glacial Red Sea, which would have caused a rapid drop in the kinematic fractionation factor for 18O. The sensitivity of the calculated values for water deficit and isotopic fractionation to the various assumptions and estimates is evaluated in the discussion. Improvements are to be expected especially through research on the glacial salinity contrast between the Red Sea and the Gulf of Aden. It is argued, however, that such future improvement will likely result in a worsening of the isotopic discrepancy, thus increasing the need for an additional mechanism that influenced fractionation (such as mean wind speed). This study demonstrates the need for caution when calculating paleosalinities from δ18O records under the assumption that the modern S:δ18O relation has remained constant through time. Previously overlooked factors, such as mean wind speed, may have significantly altered that relation in the past.

  16. Permeability estimates in histopathology-proved treatment-induced necrosis using perfusion CT: can these add to other perfusion parameters in differentiating from recurrent/progressive tumors?

    PubMed

    Jain, R; Narang, J; Schultz, L; Scarpace, L; Saksena, S; Brown, S; Rock, J P; Rosenblum, M; Gutierrez, J; Mikkelsen, T

    2011-04-01

    Differentiating treatment effects from RPT is a common yet challenging task in a busy neuro-oncologic practice. PS probably represents a different aspect of angiogenesis and vasculature and can provide additional physiologic information about recurrent/progressive enhancing lesions. The purpose of the study was to use PS measured using PCT to differentiate TIN from RPT in patients with a previously irradiated brain tumor who presented with a recurrent/progressive enhancing lesion. Seventy-two patients underwent PCT for assessment of a recurrent/progressive enhancing lesion from January 2006 to November 2009. Thirty-eight patients who underwent surgery and histopathologic diagnosis were included in this analysis. Perfusion parameters such as PS, CBV, CBF, and MTT were obtained from the enhancing lesion as well as from the NAWM. Of the 38 patients, 11 were diagnosed with pure TIN and 27 had RPT. Patients with TIN showed significantly lower mean PS values than those with RPT (1.8 ± 0.8 versus 3.6 ± 1.6 mL/100 g/min; P value = .001). The TIN group also showed lower rCBV (1.2 ± 0.3 versus 2.1 ± 0.7; P value < .001), lower rCBF (1.2 ± 0.5 versus 2.6 ± 1.7; P value = .004), and higher rMTT (1.4 ± 0.4 versus 1.0 ± 0.4; P value = .018) compared with the RPT group. PCT, and particularly PS, can be used in patients with previously treated brain tumors to differentiate TIN from RPT. PS estimates can help increase the accuracy of PCT in differentiating these 2 entities.

  17. Compressional reactivation of hyperextended domains on a rifted margin: a requirement for a reappraisal of traditional restoration procedures?

    NASA Astrophysics Data System (ADS)

    Cadenas Martínez, P.; Fernandez Viejo, G.; Pulgar, J. A.

    2017-12-01

    The North Iberian margin is an inverted hyperextended rifted margin that preserves the initial stages of compressional reactivation. Rift inheritance conditioned the contractional reactivation in a determinant way. The underthrusting of the hyperextended distal domains beneath the platform and the formation of an accretionary wedge at the toe of the slope focused most of the compression. The underthrusting gave rise to the formation of a crustal root and the uplift of the Cantabrian Mountains onshore. Meanwhile, the main rift basins within the continental platform were only slightly inverted. Plate kinematic reconstructions and palinspastic restorations have provided different shortening values. Thereby, the amount of shortening linked with the Cenozoic compression is still unclear and a matter of debate in this area. In this work, we present a full cross-section at the central part of the North Iberian margin developed from the restoration of a high-quality depth-migrated seismic profile running from the continental platform to the Biscay abyssal plain. A shortening calculation gives an estimate of about 1 km within the Asturian Basin, on the continental platform, while in the accretionary wedge at the bottom of the slope, shortening values range between 12 km and 15 km. The limited values estimated within the Asturian Basin support the mild inversion observed within this basin, which preserves most of the extensional imprint. Within the abyssal plain, shortening values differ from previous estimations and cannot account for a high amount of compression in the upper crust. Deformation of the hyperextended crust and the exhumed mantle domains inherited from the rifting processes would have accommodated most of the compression. Restoration of these domains seems to be the key to deciphering the structure and tectonic evolution of the reactivated rifted margin, but cannot be done accurately using traditional restoration methods. This calls for a reappraisal of the traditional way of restoring compressional belt transects, particularly when previously hyperextended domains within the rifted margins are involved.

  18. Estimating the relative utility of screening mammography.

    PubMed

    Abbey, Craig K; Eckstein, Miguel P; Boone, John M

    2013-05-01

    The concept of diagnostic utility is a fundamental component of signal detection theory, going back to some of its earliest works. Attaching utility values to the various possible outcomes of a diagnostic test should, in principle, lead to meaningful approaches to evaluating and comparing such systems. However, in many areas of medical imaging, utility is not used because it is presumed to be unknown. In this work, we estimate relative utility (the utility benefit of a detection relative to that of a correct rejection) for screening mammography using its known relation to the slope of a receiver operating characteristic (ROC) curve at the optimal operating point. The approach assumes that the clinical operating point is optimal for the goal of maximizing expected utility and therefore the slope at this point implies a value of relative utility for the diagnostic task, for known disease prevalence. We examine utility estimation in the context of screening mammography using the Digital Mammographic Imaging Screening Trials (DMIST) data. We show how various conditions can influence the estimated relative utility, including characteristics of the rating scale, verification time, probability model, and scope of the ROC curve fit. Relative utility estimates range from 66 to 227. We argue for one particular set of conditions that results in a relative utility estimate of 162 (±14%). This is broadly consistent with values in screening mammography determined previously by other means. At the disease prevalence found in the DMIST study (0.59% at 365-day verification), optimal ROC slopes are near unity, suggesting that utility-based assessments of screening mammography will be similar to those found using Youden's index.
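
    Numerically, the inversion quoted above follows from the standard expected-utility threshold condition, under which the ROC slope m at the optimal operating point satisfies m = (1 - p)/(p * U_rel) for disease prevalence p. A back-of-envelope sketch that reproduces the order of magnitude of the paper's estimate:

```python
def relative_utility(prevalence, roc_slope):
    """Relative utility implied by the ROC slope at an assumed-optimal
    operating point: U_rel = (1 - p) / (p * slope)."""
    return (1.0 - prevalence) / (prevalence * roc_slope)

# with slope near 1 at the DMIST prevalence of 0.59%, this returns ~168,
# in the same range as the paper's preferred estimate of 162
print(relative_utility(prevalence=0.0059, roc_slope=1.0))
```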

  19. Inferring recent outcrossing rates using multilocus individual heterozygosity: application to evolving wheat populations.

    PubMed Central

    Enjalbert, J; David, J L

    2000-01-01

    Using multilocus individual heterozygosity, a method is developed to estimate the outcrossing rates of a population over a few previous generations. Considering that individuals originate either from outcrossing or from n successive selfing generations from an outbred ancestor, a maximum-likelihood (ML) estimator is described that gives estimates of past outcrossing rates in terms of proportions of individuals with different n values. Heterozygosities at several unlinked codominant loci are used to assign n values to each individual. This method also allows a test of whether populations are in inbreeding equilibrium. The estimator's reliability was checked using simulations for different mating histories. We show that this ML estimator can provide estimates of the final-generation outcrossing rate (t(0)) and a mean of the preceding rates (t(p)), and can detect major temporal variation in the mating system. The method is most efficient for low to intermediate outcrossing levels. Applied to nine populations of wheat, this method gave estimates of t(0) and t(p). These estimates confirmed the absence of outcrossing (t(0) = 0) in the two populations subjected to manual selfing. For free-mating wheat populations, it detected lower final-generation outcrossing rates (t(0) = 0-0.06) than those expected from global heterozygosity (t = 0.02-0.09). This estimator appears to be a new and efficient way to describe the multilocus heterozygosity of a population, complementary to Fis and progeny analysis approaches. PMID:11102388

  20. Transfer of radiocarbon liquid releases from the AREVA La Hague spent fuel reprocessing plant in the English Channel.

    PubMed

    Fiévet, Bruno; Voiseux, Claire; Rozet, Marianne; Masson, Michel; Bailly du Bois, Pascal

    2006-01-01

    The recent risk assessment by the North-Cotentin Radioecology Group (1999) outlined that (14)C has become one of the major sources of the low dose to man through seafood consumption. It was recommended that more data should be collected about (14)C in the local marine environment. The present study aims to respond to this recommendation. The estimation of (14)C activity in marine species is based on concentration factor values. The values reported here ranged from 1 x 10(3) to 5 x 10(3) Bq kg(-1) ww/Bq L(-1). A comparison was made between the observed and predicted values. The accuracy of (14)C activity calculations was estimated to lie between underestimation by a factor of 2 and overestimation by 50% (95% confidence interval). However, the use of the concentration factor parameter assumes that the biological and seawater compartments are in steady state. This assumption may not be met at short distances from the point of release of discharges, where rapid changes in seawater concentration may be smoothed out in living organisms due to transfer kinetics. The data processing technique previously published by Fiévet and Plet (2003, Estimating biological half-lives of radionuclides in marine compartments from environmental time-series measurements, Journal of Environmental Radioactivity 65, 91-107) was used to deal with (14)C transfer kinetics, and carbon half-lives between seawater and a few biological compartments were thus estimated.

  1. Annual and average estimates of water-budget components based on hydrograph separation and PRISM precipitation for gaged basins in the Appalachian Plateaus Region, 1900-2011

    USGS Publications Warehouse

    Nelms, David L.; Messinger, Terence; McCoy, Kurt J.

    2015-07-14

    As part of the U.S. Geological Survey's Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components based on hydrograph separation and precipitation data from the parameter-elevation regressions on independent slopes model (PRISM) were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York, covering the period 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is the part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that have automated the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, which is a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by calculating the average of cell values intercepted by basin boundaries as previously defined in the GAGES-II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.
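
    Hydrograph-separation programs like PART, HYSEP, and BFI implement specific published procedures; as a compact stand-in for the same idea, a one-parameter recursive digital filter (Lyne-Hollick type) can be sketched as follows. The streamflow series and filter parameter are illustrative:

```python
import numpy as np

def baseflow_filter(Q, alpha=0.925):
    """Separate base flow from a daily streamflow series with a
    one-parameter recursive digital filter; returns the base-flow
    series and the base-flow index (BFI)."""
    Q = np.asarray(Q, dtype=float)
    quick = np.zeros_like(Q)
    for t in range(1, len(Q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (Q[t] - Q[t - 1])
        quick[t] = min(max(quick[t], 0.0), Q[t])   # keep base flow in [0, Q]
    base = Q - quick
    return base, base.sum() / Q.sum()

Q = [10, 9, 8, 30, 60, 40, 25, 18, 14, 12, 11, 10]   # daily flows, cfs
base, bfi = baseflow_filter(Q)
print(np.round(base, 1), round(bfi, 2))
```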

  2. Perfluorinated compounds in human breast milk from several Asian countries, and in infant formula and dairy milk from the United States.

    PubMed

    Tao, Lin; Ma, Jing; Kunisue, Tatsuya; Libelo, E Laurence; Tanabe, Shinsuke; Kannan, Kurunthachalam

    2008-11-15

    The occurrence of perfluorinated compounds (PFCs) in human blood is known to be widespread; nevertheless, the sources of exposure to humans, including infants, are not well understood. In this study, breast milk collected from seven countries in Asia was analyzed (n=184) for nine PFCs, including perfluorooctanesulfonate (PFOS) and perfluorooctanoate (PFOA). In addition, five brands of infant formula (n=21) and 11 brands of dairy milk (n=12) collected from retail stores in the United States were analyzed, for comparison with PFC concentrations previously reported for breast milk from the U.S. PFOS was the predominant PFC detected in almost all Asian breast milk samples, followed by perfluorohexanesulfonate (PFHxS) and PFOA. Median concentrations of PFOS in breast milk from Asian countries varied significantly; the lowest concentration of 39.4 pg/mL was found in India, and the highest concentration of 196 pg/mL was found in Japan. The measured concentrations were similar to or less than the concentrations previously reported from Sweden, the United States, and Germany (median, 106-166 pg/mL). PFHxS was found in more than 70% of the samples analyzed from Japan, Malaysia, the Philippines, and Vietnam, at mean concentrations ranging from 6.45 (Malaysia) to 15.8 (Philippines) pg/mL. PFOA was found frequently only in samples from Japan; the mean concentration for that country was 77.7 pg/mL. None of the PFCs were detected in the infant-formula or dairy-milk samples from the U.S., except a few samples that contained concentrations close to the limit of detection. The estimated average daily intake of PFOS by infants from the seven Asian countries, via breastfeeding, was 11.8 +/- 10.6 ng/kg bw/day; this value is 7-12 times higher than the estimated adult dietary intakes previously reported from Germany, Canada, and Spain. The average daily intake of PFOA by Japanese infants was 9.6 +/- 4.9 ng/kg bw/day, a value 3-10 times greater than the estimated adult dietary intakes reported from Germany and Canada. The highest estimated daily intakes of PFOS and PFOA by infants from the seven Asian countries studied were 1-2 orders of magnitude below the tolerable daily intake values recommended by the U.K. Food Standards Agency.
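
    The intake arithmetic is simple; a sketch with generic infant assumptions (the milk volume and body weight defaults are ours, not values from the study):

```python
def infant_daily_intake(conc_pg_per_mL, milk_mL_per_day=700.0, bw_kg=6.0):
    """Estimated daily intake in ng/kg bw/day from a breast-milk
    concentration in pg/mL."""
    return conc_pg_per_mL * milk_mL_per_day / bw_kg / 1000.0

print(infant_daily_intake(106.0))   # ~12.4 ng/kg bw/day, the order reported
```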

  3. Three Dimensional Constraint Effects on the Estimated ΔCTOD during the Numerical Simulation of Different Fatigue Threshold Testing Techniques

    NASA Technical Reports Server (NTRS)

    Seshadri, Banavara R.; Smith, Stephen W.

    2007-01-01

    Variation in constraint through the thickness of a specimen affects the cyclic crack-tip-opening displacement (ΔCTOD). ΔCTOD is a valuable measure of crack growth behavior, indicating closure development, constraint variations, and load history effects. Fatigue loading with a continual load reduction was used to simulate the load history associated with fatigue crack growth threshold measurements. The constraint effect on the estimated ΔCTOD is studied by carrying out three-dimensional elastic-plastic finite element simulations. The analysis involves numerical simulation of different standard fatigue threshold test schemes to determine how each test scheme affects ΔCTOD. The American Society for Testing and Materials (ASTM) prescribes standard load reduction procedures for threshold testing using either the constant stress ratio (R) or constant maximum stress intensity (Kmax) methods. Different specimen types defined in the standard, namely the compact tension, C(T), and middle cracked tension, M(T), specimens, were used in this simulation. The threshold simulations were conducted with different initial Kmax values to study their effect on the estimated ΔCTOD. During each simulation, ΔCTOD was estimated at every load increment during the load reduction procedure. Previous numerical simulation results indicate that the constant-R load reduction method generates a plastic wake resulting in remote crack closure during unloading. Upon reloading, this remote contact location was observed to remain in contact well after the crack tip was fully open. The final region to open is located at the point at which the load reduction was initiated and at the free surface of the specimen. However, simulations carried out using the constant-Kmax load reduction procedure did not indicate remote crack closure. Previous analysis results using various starting Kmax values and different load reduction rates have indicated that ΔCTOD is independent of specimen size. A study of the effect of specimen thickness and geometry on the measured ΔCTOD for various load reduction procedures, and its implication for the estimation of fatigue crack growth threshold values, is discussed.

  4. Parametric cost estimation for space science missions

    NASA Astrophysics Data System (ADS)

    Lillie, Charles F.; Thompson, Bruce E.

    2008-07-01

    Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up," "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs for future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
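
    A mass-based CER of the common power-law form, and its fit to (mass, cost) histories by log-log least squares, can be sketched as follows; all numbers are placeholders, not actual mission data:

```python
import numpy as np

def power_law_cer(mass_kg, a, b):
    """Cost estimating relationship of the common form cost = a * mass**b;
    a and b would be fit to historical missions."""
    return a * mass_kg**b

# fit a CER to invented (mass, cost) histories by log-log least squares
hist_mass = np.array([300.0, 450.0, 800.0, 1200.0])   # kg
hist_cost = np.array([140.0, 190.0, 290.0, 410.0])    # $M
b, log_a = np.polyfit(np.log(hist_mass), np.log(hist_cost), 1)
a = np.exp(log_a)

print(a, b)
print(power_law_cer(np.array([250.0, 500.0, 1000.0]), a, b))  # cost range, $M
```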

  5. Implications of the New Centers for Disease Control and Prevention Blood Lead Reference Value

    PubMed Central

    Burns, Mackenzie S.; Gerstenberger, Shawn L.

    2014-01-01

    The Centers for Disease Control and Prevention recently established a new reference value (≥ 5 μg/dL) as the standard for identifying children with elevated blood lead levels (EBLs). At present, 535 000 US children aged 1 to 5 years (2.6%) are estimated to have EBLs according to the new standard, versus 0.8% according to the previous standard (≥ 10 μg/dL). Because EBLs signify the threshold for public health intervention, this new definition increases demands on lead poisoning prevention efforts. Primary prevention has been proven to reduce lead poisoning cases and is also cost effective; however, federal budget cuts threaten the existence of such programs. Protection for the highest-risk children necessitates a reinstatement of federal funding to previous levels. PMID:24825227

  6. Estimating health state utility values for comorbid health conditions using SF-6D data.

    PubMed

    Ara, Roberta; Brazier, John

    2011-01-01

    When health state utility values for comorbid health conditions are not available, data from cohorts with single conditions are used to estimate scores. The methods used can produce very different results and there is currently no consensus on which is the most appropriate approach. The objective of the current study was to compare the accuracy of five different methods within the same dataset. Data collected during five Welsh Health Surveys were subgrouped by health status. Mean short-form 6 dimension (SF-6D) scores for cohorts with a specific health condition were used to estimate mean SF-6D scores for cohorts with comorbid conditions using the additive, multiplicative, and minimum methods, the adjusted decrement estimator (ADE), and a linear regression model. The mean SF-6D for subgroups with comorbid health conditions ranged from 0.4648 to 0.6068. The linear model produced the most accurate scores for the comorbid health conditions, with 88% of values accurate to within the minimum important difference for the SF-6D. The additive and minimum methods underestimated and overestimated the actual SF-6D scores, respectively. The multiplicative and ADE methods both underestimated the majority of scores. However, both methods performed better when estimating scores smaller than 0.50. Although the range in actual health state utility values (HSUVs) was relatively small, our data covered the lower end of the index and the majority of previous research has involved actual HSUVs at the upper end of possible ranges. Although the linear model gave the most accurate results in our data, additional research is required to validate our findings. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
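
    The three simplest combination methods compared in the study can be sketched as follows. The baseline handling here (decrements measured from a baseline utility of 1.0) is an assumption for illustration; the paper's exact adjustments, and its ADE and regression methods, are not reproduced.

        def combine_utilities(u1, u2, method, baseline=1.0):
            """Estimate a comorbid health-state utility from two single-condition
            utilities. Sketches three of the five methods compared in the paper."""
            if method == "additive":        # decrements from baseline add
                return baseline - ((baseline - u1) + (baseline - u2))
            if method == "multiplicative":  # proportional decrements multiply
                return baseline * (u1 / baseline) * (u2 / baseline)
            if method == "minimum":         # no better than the worst condition
                return min(u1, u2)
            raise ValueError(method)

        for m in ("additive", "multiplicative", "minimum"):
            print(m, round(combine_utilities(0.62, 0.55, m), 3))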

  7. The impact of xylem cavitation on water potential isotherms measured by the pressure chamber technique in Metasequoia glyptostroboides Hu & W.C. Cheng.

    PubMed

    Yang, Dongmei; Pan, Shaoan; Tyree, Melvin T

    2016-08-01

    Pressure-volume (PV) curve analysis is the most common and accurate way of estimating all components of the water relationships in leaves (water potential isotherms) as summarized in the Höfler diagram. PV curve analysis yields values of osmotic pressure, turgor pressure, and elastic modulus of cell walls as a function of relative water content. It allows the computation of symplasmic/apoplastic water content partitioning. For about 20 years, cavitation in xylem has been postulated as a possible source of error when estimating the above parameters, but, to the best of the authors' knowledge, no one has ever previously quantified its influence. Results in this paper provide independent estimates of osmotic pressure by PV curve analysis and by thermocouple psychrometer measurement. An anatomical evaluation was also used for the first time to compare apoplastic water fraction estimates from PV analysis with anatomical values. Conclusions include: (i) PV curve values of osmotic pressure are underestimated prior to correcting osmotic pressure for water loss by cavitation in Metasequoia glyptostroboides; (ii) psychrometer estimates of osmotic pressure obtained in tissues killed by freezing or heating agreed with PV values before correction for apoplastic water dilution; (iii) after correction for dilution effects, a solute concentration enhancement (0.27 MPa or 0.11 osmolal) was revealed. The possible sources of solute enhancement were starch hydrolysis and release of ions from the Donnan free space of needle cell walls. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  8. Cost of Services Provided by the National Breast and Cervical Cancer Early Detection Program

    PubMed Central

    Ekwueme, Donatus U.; Subramanian, Sujha; Trogdon, Justin G.; Miller, Jacqueline W.; Royalty, Janet E.; Li, Chunyu; Guy, Gery P.; Crouse, Wesley; Thompson, Hope; Gardner, James G.

    2015-01-01

    BACKGROUND: The National Breast and Cervical Cancer Early Detection Program (NBCCEDP) is the largest cancer screening program for low-income women in the United States. This study updates previous estimates of the costs of delivering preventive cancer screening services in the NBCCEDP. METHODS: We developed a standardized web-based cost-assessment tool to collect annual activity-based cost data on screening for breast and cervical cancer in the NBCCEDP. Data were collected from 63 of the 66 programs that received funding from the Centers for Disease Control and Prevention during the 2006/2007 fiscal year. We used these data to calculate costs of delivering preventive public health services in the program. RESULTS: We estimated the total cost of all NBCCEDP services to be $296 (standard deviation [SD], $123) per woman served (including the estimated value of in-kind donations, which constituted approximately 15% of this total estimated cost). The estimated cost of screening and diagnostic services was $145 (SD, $38) per woman served, which represented 57.7% of the total cost excluding the value of in-kind donations. Including the value of in-kind donations, the weighted mean cost of screening a woman for breast cancer was $110 with an office visit and $88 without, the weighted mean cost of a diagnostic procedure was $401, and the weighted mean cost per breast cancer detected was $35,480. For cervical cancer, the corresponding cost estimates were $61, $21, $415, and $18,995, respectively. CONCLUSIONS: These NBCCEDP cost estimates may help policy makers plan and budget for potential changes to the program. PMID:25099904

  9. Quantifying global dust devil occurrence from meteorological analyses

    PubMed Central

    Jemmett-Smith, Bradley C; Marsham, John H; Knippertz, Peter; Gilkeson, Carl A

    2015-01-01

    Dust devils and nonrotating dusty plumes are effective uplift mechanisms for fine particles, but their contribution to the global dust budget is uncertain. By applying known bulk thermodynamic criteria to European Centre for Medium-Range Weather Forecasts (ECMWF) operational analyses, we provide the first global hourly climatology of potential dust devil and dusty plume (PDDP) occurrence. In agreement with observations, activity is highest from late morning into the afternoon. Combining PDDP frequencies with dust source maps and typical emission values gives the best estimate of global contributions of 3.4% (uncertainty 0.9–31%), 1 order of magnitude lower than the only estimate previously published. Total global hours of dust uplift by dry convection are ∼0.002% of the dust-lifting winds resolved by ECMWF, consistent with dry convection making a small contribution to global uplift. Reducing uncertainty requires better knowledge of factors controlling PDDP occurrence, source regions, and dust fluxes induced by dry convection. Key Points: global potential dust devil occurrence is quantified from meteorological analyses; the climatology shows a realistic diurnal cycle and geographical distribution; the best estimate of the global contribution, 3.4%, is 10 times smaller than the previous estimate. PMID:26681815

  10. An Admittance Survey of Large Volcanoes on Venus: Implications for Volcano Growth

    NASA Technical Reports Server (NTRS)

    Brian, A. W.; Smrekar, S. E.; Stofan, E. R.

    2004-01-01

    Estimates of the thickness of the venusian crust and elastic lithosphere are important in determining the rheological and thermal properties of Venus. These estimates offer insights into what conditions are needed for certain features, such as large volcanoes and coronae, to form. Lithospheric properties for much of the large volcano population on Venus are not well known. Previous studies of elastic thickness (Te) have concentrated on individual or small groups of edifices, or have used volcano models and fixed values of Te to match with observations of volcano morphologies. In addition, previous studies used different methods to estimate lithospheric parameters, making it difficult to compare their results. Following recent global studies of the admittance signatures exhibited by the venusian corona population, we performed a similar survey of the large volcano population in an effort to determine the range of lithospheric parameters shown by these features. This survey of the entire large volcano population used the same method throughout so that all estimates could be directly compared. By analysing a large number of edifices and comparing our results to observations of their morphology and models of volcano formation, we can help determine the controlling parameters that govern volcano growth on Venus.

  11. Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wittman, Richard S.

    2013-09-20

    This report fulfills the M3 milestone (M3FT-13PN0810027) to report on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work also includes that effect with the addition of coupled kinetics for 111 reactions for 40 gas species to account for radiolytic-induced chemistry, which includes water recombination and reactions with air.

  12. Basic characteristics of the REOSC-DS + Tek1024 CCD at the JS telescope and atmospheric extinction at CASLEO

    NASA Astrophysics Data System (ADS)

    Baume, G.; Coronel, C.; De Bórtoli, B.; Ennis, A. I.; Fernández Lajús, E.; Filócomo, A.; Gamen, R.; Higa, R.; Pessi, P. J.; Putkuri, C.; Rodriguez, C.; Unamuno, A.

    2017-10-01

    In the framework of the activities of the course "Astronomia Observacional" of FCAG (UNLP), several photometric and spectroscopic observations have been made using the Jorge Sahade telescope at the Complejo Astronomico El Leoncito. These data have allowed the estimation of the atmospheric extinction coefficients in the observed photometric bands. They were compared with previous values, confirming a secular increase over recent years. In addition, some parameters and characteristics of the REOSC spectrograph working in simple dispersion (DS) mode, and of its Tek1024 CCD detector, were estimated.

  13. Physics of ultra-high bioproductivity in algal photobioreactors

    NASA Astrophysics Data System (ADS)

    Greenwald, Efrat; Gordon, Jeffrey M.; Zarmi, Yair

    2012-04-01

    Cultivating algae at high densities in thin photobioreactors engenders time scales for random cell motion that approach photosynthetic rate-limiting time scales. This synchronization allows bioproductivity above that achieved with conventional strategies. We show that a diffusion model for cell motion (1) accounts for high bioproductivity at irradiance values previously deemed restricted by photoinhibition, (2) predicts the existence of optimal culture densities and their dependence on irradiance, consistent with available data, (3) accounts for the observed degree to which mixing improves bioproductivity, and (4) provides an estimate of effective cell diffusion coefficients, in accord with independent hydrodynamic estimates.

  14. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods, and they suggest that scale-free network models can be used to estimate aggregate electric grid reliability.
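
    A minimal sketch of the two ingredients named above: a Barabási-Albert graph as a stand-in for the grid topology, plus a simple probabilistic failure-propagation count. The propagation rule and all parameters are invented for illustration and are not the reliability index used in the paper; the networkx library is assumed available.

        import random
        import networkx as nx

        # Scale-free topology as a stand-in for a transmission grid (parameters invented).
        G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)

        def cascade_size(G, p=0.25):
            """Toy failure propagation: a random node fails, and each neighbor of
            a failed node fails independently with probability p."""
            failed = {random.choice(list(G.nodes))}
            frontier = set(failed)
            while frontier:
                nxt = {v for u in frontier for v in G.neighbors(u)
                       if v not in failed and random.random() < p}
                failed |= nxt
                frontier = nxt
            return len(failed)

        sizes = [cascade_size(G) for _ in range(200)]
        print("mean cascade size:", sum(sizes) / len(sizes))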

  15. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models can be used to estimate aggregate electric grid reliability.

  16. Initiation of Detonation in Multiple Shock-Compressed Liquid Explosives

    NASA Astrophysics Data System (ADS)

    Yoshinaka, A. C.; Zhang, F.; Petel, O. E.; Higgins, A. J.

    2006-07-01

    Initiation and resulting propagation of detonation via multiple shock reverberations between two high impedance plates has been investigated in amine-sensitized nitromethane. Experiments were designed so that the first reflected shock strength was below the critical value for initiation found previously. Luminosity combined with a distinct pressure hump indicated onset of reaction and successful initiation after double or triple shock reflection off the bottom plate. Final temperature estimates for double or triple shock reflection immediately before initiation lie between 700 and 720 K, consistent with those found previously for both incident and singly reflected shock initiation.

  17. Computing nucleon EDM on a lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abramczyk, Michael; Izubuchi, Taku

    I will discuss briefly recent changes in the methodology of computing the baryon EDM on a lattice. The associated correction substantially reduces presently existing lattice values for the proton and neutron theta-induced EDMs, so that even the most precise previous lattice results become consistent with zero. On one hand, this change removes previous disagreements between these lattice results and the phenomenological estimates of the nucleon EDM. On the other hand, the nucleon EDM becomes much harder to compute on a lattice. In addition, I will review the progress in computing quark chromo-EDM-induced nucleon EDM using chiral quark action.

  18. Simplified quantification and whole-body distribution of [18F]FE-PE2I in nonhuman primates: prediction for human studies.

    PubMed

    Varrone, Andrea; Gulyás, Balázs; Takano, Akihiro; Stabin, Michael G; Jonsson, Cathrine; Halldin, Christer

    2012-02-01

    [(18)F]FE-PE2I is a promising dopamine transporter (DAT) radioligand. In nonhuman primates, we examined the accuracy of simplified quantification methods and the estimates of radiation dose of [(18)F]FE-PE2I. In the quantification study, binding potential (BP(ND)) values previously reported in three rhesus monkeys using kinetic and graphical analyses of [(18)F]FE-PE2I were used for comparison. BP(ND) using the cerebellum as reference region was obtained with four reference tissue methods applied to the [(18)F]FE-PE2I data, and compared with the kinetic and graphical analyses. In the whole-body study, estimates of absorbed radiation dose were obtained in two cynomolgus monkeys. All reference tissue methods provided BP(ND) values within 5% of the values obtained with the kinetic and graphical analyses. The shortest imaging time for stable BP(ND) estimation was 54 min. The average effective dose of [(18)F]FE-PE2I was 0.021 mSv/MBq, similar to 2-deoxy-2-[(18)F]fluoro-d-glucose. The results in nonhuman primates suggest that [(18)F]FE-PE2I is suitable for accurate and stable DAT quantification, and its radiation dose estimates would allow for a maximal administered radioactivity of 476 MBq in human subjects. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Process value of care safety: women's willingness to pay for perinatal services.

    PubMed

    Anezaki, Hisataka; Hashimoto, Hideki

    2017-08-01

    To evaluate the process value of care safety from the patient's view in perinatal services. Cross-sectional survey. Fifty-two sites of mandated public neonatal health checkup in 6 urban cities in West Japan. Mothers who attended neonatal health checkups for their babies in 2011 (n = 1316, response rate = 27.4%). Willingness to pay (WTP) for physician-attended care compared with midwife care as the process-related value of care safety. WTP was estimated using conjoint analysis based on the participants' choices over possible alternatives that were randomly assigned from among eight scenarios considering attributes such as professional attendance, amenities, painless delivery, caesarean section rate, travel time, and price. The WTP for physician-attended care over midwife care was estimated at 1283 USD. Women who had experienced complications in prior deliveries had a 1.5 times larger WTP. We empirically evaluated the process value of safety practice in perinatal care and found it to be larger than a previously reported accounting-based value. Our results indicate that measurement of process value from the patient's view is informative for the evaluation of safety care, and that it is sensitive to individual risk perception for the care process. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care.
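
    In conjoint (discrete choice) analysis with a linear utility that includes price, the WTP for an attribute is the ratio of the attribute coefficient to the negated price coefficient. A toy calculation with invented coefficients, not the study's estimates:

        # Linear utility: U = b_phys * physician + ... + b_price * price.
        # WTP for physician attendance = -b_phys / b_price. Coefficients invented.
        b_physician = 0.45     # utility gain of physician-attended care
        b_price = -0.00035     # disutility per USD of price
        print(f"WTP: {-b_physician / b_price:.0f} USD")  # ~1286 USD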

  20. Revised age estimates of the Euphrosyne family

    NASA Astrophysics Data System (ADS)

    Carruba, Valerio; Masiero, Joseph R.; Cibulková, Helena; Aljbaae, Safwan; Espinoza Huaman, Mariela

    2015-08-01

    The Euphrosyne family, a high inclination asteroid family in the outer main belt, is considered one of the most peculiar groups of asteroids. It is characterized by the steepest size frequency distribution (SFD) among families in the main belt, and it is the only family crossed near its center by the ν6 secular resonance. Previous studies have shown that the steep size frequency distribution may be the result of the dynamical evolution of the family. In this work we further explore the unique dynamical configuration of the Euphrosyne family by refining the previous age values, considering the effects of changes in the shapes of the asteroids during the YORP cycle ("stochastic YORP"), the long-term effect of close encounters of family members with (31) Euphrosyne itself, and the effect that changing key parameters of the Yarkovsky force (such as density and thermal conductivity) has on the estimate of the family age obtained using Monte Carlo methods. Numerical simulations accounting for the interaction with the local web of secular and mean-motion resonances allow us to refine previous estimates of the family age. The cratering event that formed the Euphrosyne family most likely occurred between 560 and 1160 Myr ago, and no earlier than 1400 Myr ago when we allow for larger uncertainties in the key parameters of the Yarkovsky force.

  1. Prevalence of Individuals Experiencing the Effects of Stroke in Canada: Trends and Projections.

    PubMed

    Krueger, Hans; Koot, Jacqueline; Hall, Ruth E; O'Callaghan, Christina; Bayley, Mark; Corbett, Dale

    2015-08-01

    Previous estimates of the number and prevalence of individuals experiencing the effects of stroke in Canada are out of date and exclude critical population groups. It is essential to have complete data that report on stroke disability for monitoring and planning purposes. The objective was to provide an updated estimate of the number of individuals experiencing the effects of stroke in Canada (and its regions), trending since 2000 and forecasted prevalence to 2038. The prevalence, trends, and projected number of individuals experiencing the effects of stroke were estimated using region-specific survey data and adjusted to account for children aged <12 years and individuals living in homes for the aged. In 2013, we estimate that there were 405 000 individuals experiencing the effects of stroke in Canada, yielding a prevalence of 1.15%. This value is expected to increase to between 654 000 and 726 000 by 2038. Trends in stroke data between 2000 and 2012 suggest a nonsignificant decrease in stroke prevalence, but a substantial and rising increase in the number of individuals experiencing the effects of stroke. Stroke prevalence varied considerably between regions. Previous estimates of stroke prevalence have underestimated the true number of individuals experiencing the effects of stroke in Canada. Furthermore, the projected increases that will result from population growth and demographic changes highlight the importance of maintaining up-to-date estimates. © 2015 American Heart Association, Inc.

  2. Stable nitrogen isotope ratios and accumulation of PCDD/F and PCB in Baltic aquatic food chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broman, D.; Naef, C.; Rolff, C.

    Ratios of naturally occurring stable isotopes of nitrogen (δ¹⁵N) can be used to numerically classify trophic levels of organisms in food chains. By combining these ratios with analysis results for PCDD/Fs and non-ortho PCBs, the biomagnification of these substances can be quantitatively estimated. The two Baltic food chains studied were one pelagic (phytoplankton -- settling particulate matter (SPM) -- zooplankton -- mysids -- herring -- cod) and one littoral (phytoplankton -- SPM -- blue mussel -- eider duck). The δ¹⁵N data gave food chain descriptions qualitatively consistent with previous conceptions of trophic arrangements in the food chains. Phytoplankton showed the lowest average δ¹⁵N value, and the juvenile eider duck and the cod showed the highest average δ¹⁵N values for the littoral and pelagic food chains, respectively. The PCDD/F and PCB concentrations were plotted versus the δ¹⁵N values for the different trophic levels and an exponential model of the form e^(A + B·δ¹⁵N) was fitted to the data. The estimates of the constant B in the model allow for an estimation of a biomagnification power (B) of different singular, or groups of, contaminants. A B-value around zero indicates that a substance is flowing through the food chain without being magnified, whereas a value > 0 indicates that a substance is biomagnified. Negative B-values indicate that a substance is not taken up or is metabolized. The A-term of the expression is only a scaling factor depending on the background level of the contaminant.
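
    Since the model C = e^(A + B·δ¹⁵N) is linear in log space, A and B can be estimated with an ordinary least-squares fit of ln(C) against δ¹⁵N. A minimal sketch with made-up concentration data:

        import numpy as np

        # Invented trophic-level data: delta-15N (per mil) and contaminant concentration.
        d15N = np.array([3.1, 5.8, 8.2, 10.9, 13.5])
        conc = np.array([0.8, 1.9, 4.1, 9.6, 22.0])

        # C = exp(A + B*d15N) is linear in log space, so the biomagnification
        # power B is the slope of ln(C) against d15N.
        B, A = np.polyfit(d15N, np.log(conc), 1)
        print(f"A = {A:.2f}, B = {B:.2f}")  # B > 0 indicates biomagnification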

  3. Technical Review of SRS Dose Reconstrruction Methods Used By CDC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpkins, Ali, A

    2005-07-20

    At the request of the Centers for Disease Control and Prevention (CDC), a subcontractor, Advanced Technologies and Laboratories International, Inc. (ATL), issued a draft report estimating offsite dose as a result of Savannah River Site operations for the period 1954-1992 in support of Phase III of the SRS Dose Reconstruction Project. The doses reported by ATL differed from those previously estimated by Savannah River Site (SRS) dose modelers for a variety of reasons, but primarily because (1) ATL used different source terms, (2) ATL considered trespasser/poacher scenarios, and (3) ATL did not consistently use site-specific parameters or correct usage parameters. The receptors with the highest dose from atmospheric and liquid pathways were within about a factor of four of the dose values previously reported by SRS. A complete set of technical comments has also been included.

  4. A new maximum-likelihood change estimator for two-pass SAR coherent change detection

    DOE PAGES

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; ...

    2016-01-11

    In previous research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimator produces improved results in the coherent pair collects that we have tested.

  5. Forest values and the impact of the federal estate tax on family forests

    Treesearch

    Brenton J. Dickinson; Brett J. Butler; Michael A. Kilgore; Paul Catanzaro; John Greene; Jaketon H. Hewes; David Kittredge; Mary. Tyrrell

    2012-01-01

    Previous research has suggested that heirs to family forest land may sell timber and/or land in order to pay state and/or federal estate taxes, which could result in land use conversion or other adverse ecological impacts. We estimated the number of Minnesota family forest landowners and the associated acreage that could be subject to estate taxes at various exemption...

  6. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who discovered Gröbner bases. In the method, objective functions combining the symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

  7. A study of the earth's free core nutation using international deployment of accelerometers gravity data

    NASA Technical Reports Server (NTRS)

    Cummins, Phil R.; Wahr, John M.

    1993-01-01

    In this study we consider the influence of the earth's free core nutation (FCN) on diurnal tidal admittance estimates for 11 stations of the globally distributed International Deployment of Accelerometers network. The FCN causes a resonant enhancement of the diurnal admittances which can be used to estimate some properties of the FCN. Estimates of the parameters describing the FCN (period, Q, and resonance strength) are made using data from individual stations and many stations simultaneously. These yield a result for the period of 423-452 sidereal days, which is shorter than theory predicts but is in agreement with many previous studies and suggests that the dynamical ellipticity of the core may be greater than its hydrostatic value.

  8. Revisiting the contribution of land transport and shipping emissions to tropospheric ozone

    NASA Astrophysics Data System (ADS)

    Mertens, Mariano; Grewe, Volker; Rieger, Vanessa S.; Jöckel, Patrick

    2018-04-01

    We quantify the contribution of land transport and shipping emissions to tropospheric ozone for the first time with a chemistry-climate model including an advanced tagging method (also known as source apportionment), which considers not only the emissions of nitrogen oxides (NOx, i.e., NO and NO2), carbon monoxide (CO), and volatile organic compounds (VOC) separately, but also their non-linear interaction in producing ozone. For summer conditions a contribution of land transport emissions to ground-level ozone of up to 18 % in North America and Southern Europe is estimated, which corresponds to 12 and 10 nmol mol⁻¹, respectively. The simulation results indicate a contribution of shipping emissions to ground-level ozone during summer of up to 30 % in the North Pacific Ocean (up to 12 nmol mol⁻¹) and 20 % in the North Atlantic Ocean (12 nmol mol⁻¹). With respect to the contribution to the tropospheric ozone burden, we quantified values of 8 and 6 % for land transport and shipping emissions, respectively. Overall, the emissions from land transport contribute around 20 % to the net ozone production near the source regions, while shipping emissions contribute up to 52 % to the net ozone production in the North Pacific Ocean. To put these estimates in the context of literature values, we review previous studies. Most of them used the perturbation approach, in which the results for two simulations, one with all emissions and one with changed emissions for the source of interest, are compared. For better comparability with these studies, we also performed additional perturbation simulations, which allow for a consistent comparison of results using the perturbation and the tagging approach. The comparison shows that the results strongly depend on the chosen methodology (tagging or perturbation approach) and on the strength of the perturbation. A more in-depth analysis for the land transport emissions reveals that the two approaches give different results, particularly in regions with large emissions (up to a factor of 4 for Europe). Our estimates of the ozone radiative forcing due to land transport and shipping emissions are, based on the tagging method, 92 and 62 mW m⁻², respectively. Compared to our best estimates, previously reported values using the perturbation approach are almost a factor of 2 lower, while previous estimates using NOx-only tagging are almost a factor of 2 larger. Overall, our results highlight the importance of differentiating between the perturbation and the tagging approach, as they answer two different questions. In line with previous studies, we argue that only the tagging approach (or source apportionment approaches in general) can estimate the contribution of emissions, which is important to attribute emission sources to climate change and/or extreme ozone events. The perturbation approach, however, is important to investigate the effect of an emission change. To effectively assess mitigation options, both approaches should be combined. This combination allows us to track changes in the ozone production efficiency of emissions from sources which are not mitigated and shows how the ozone share caused by these unmitigated emission sources subsequently increases.
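
    The tagging-versus-perturbation distinction can be made concrete with a toy nonlinear production function. In the sketch below, tagged contributions split the total production by emission share and sum to the total exactly, while perturbation estimates (with minus without the source) do not; the square-root "chemistry", the source names, and the numbers are purely illustrative assumptions.

        import math

        def ozone_production(nox_total):
            """Toy nonlinear production rate standing in for the real chemistry."""
            return 10.0 * math.sqrt(nox_total)

        sources = {"land transport": 4.0, "shipping": 2.0, "other": 10.0}
        total = sum(sources.values())
        for name, nox in sources.items():
            tagged = ozone_production(total) * nox / total                        # share of total
            perturbed = ozone_production(total) - ozone_production(total - nox)  # with minus without
            print(f"{name:15s} tagging {tagged:5.2f}   perturbation {perturbed:5.2f}")
        # Tagged contributions sum exactly to the total production; perturbation
        # estimates do not, because production is nonlinear in the emissions.
        print("total production:", ozone_production(total))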

  9. Group Sequential Testing of the Predictive Accuracy of a Continuous Biomarker with Unknown Prevalence

    PubMed Central

    Koopmeiners, Joseph S.; Feng, Ziding

    2015-01-01

    Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value (NPV) curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence cannot be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated rather than assumed known in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curves. The small sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation, and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180
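
    The central quantity here is standard Bayes' rule: case-control estimates of sensitivity and specificity combined with an externally estimated prevalence. A minimal sketch of those quantities (not the authors' sequential machinery), with simulated marker data:

        import numpy as np

        def ppv_npv(threshold, cases, controls, prevalence):
            """Empirical PPV/NPV at a threshold, combining case-control TPR/FPR
            with an external prevalence estimate via Bayes' rule."""
            tpr = np.mean(np.asarray(cases) >= threshold)     # sensitivity
            fpr = np.mean(np.asarray(controls) >= threshold)  # 1 - specificity
            p = prevalence
            ppv = p * tpr / (p * tpr + (1 - p) * fpr)
            npv = (1 - p) * (1 - fpr) / ((1 - p) * (1 - fpr) + p * (1 - tpr))
            return ppv, npv

        rng = np.random.default_rng(0)
        cases = rng.normal(1.0, 1.0, 200)      # marker values in cases
        controls = rng.normal(0.0, 1.0, 400)   # marker values in controls
        print(ppv_npv(1.0, cases, controls, prevalence=0.1))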

  10. Low-flow frequency and flow duration of selected South Carolina streams in the Savannah and Salkehatchie River Basins through March 2014

    USGS Publications Warehouse

    Feaster, Toby D.; Guimaraes, Wladmir B.

    2016-07-14

    An ongoing understanding of streamflow characteristics of the rivers and streams in South Carolina is important for the protection and preservation of the State's water resources. Information concerning the low-flow characteristics of streams is especially important during critical flow periods, such as during the historic droughts that South Carolina has experienced in the past few decades. In 2008, the U.S. Geological Survey, in cooperation with the South Carolina Department of Health and Environmental Control, initiated a study to update low-flow statistics at continuous-record streamgaging stations operated by the U.S. Geological Survey in South Carolina. This report presents the low-flow statistics for 28 selected streamgaging stations in the Savannah and Salkehatchie River Basins in South Carolina. The low-flow statistics include daily mean flow durations for the 5-, 10-, 25-, 50-, 75-, 90-, and 95-percent probability of exceedance and the annual minimum 1-, 3-, 7-, 14-, 30-, 60-, and 90-day mean flows with recurrence intervals of 2, 5, 10, 20, 30, and 50 years, depending on the length of record available at the streamgaging station. The low-flow statistics were computed from records available through March 31, 2014. Low-flow statistics are influenced by length of record, the hydrologic regime under which the data were collected, the analytical techniques used, and other factors, such as urbanization, diversions, and droughts that may have occurred in the basin. To assess changes in the low-flow statistics from the previously published values, a comparison of the low-flow statistics for the annual minimum 7-day average streamflow with a 10-year recurrence interval (7Q10) from this study was made with the most recently published values. Of the 28 streamgaging stations for which recurrence interval computations were made, 14 streamgaging stations were suitable for comparison with low-flow statistics previously published in U.S. Geological Survey reports. These comparisons indicated that seven of the streamgaging stations had values lower than the previous values, two streamgaging stations had values higher than the previous values, and two streamgaging stations had values that were unchanged from previous values. The remaining three stations for which previous 7Q10 values were computed, which are located on the main stem of the Savannah River, were not compared with current estimates because of differences in the way the pre-regulation and regulated flow data were analyzed.
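
    A minimal sketch of a 7Q10-style computation: take the annual minima of the 7-day moving-average flow and fit a low-flow frequency distribution (log-Pearson Type III here, a common choice). The synthetic data and the simplified handling of climatic years and zero flows are assumptions; the USGS procedure contains considerably more detail.

        import numpy as np
        import pandas as pd
        from scipy import stats

        # Synthetic daily mean flows; a real analysis would use gaged records.
        dates = pd.date_range("1984-04-01", "2014-03-31", freq="D")
        rng = np.random.default_rng(7)
        flow = pd.Series(np.exp(rng.normal(3.0, 0.6, len(dates))), index=dates)

        # Annual minimum of the 7-day moving-average flow (calendar years here;
        # the USGS procedure uses climatic years and handles zero flows).
        min7 = flow.rolling(7).mean().groupby(dates.year).min().dropna()

        # Fit log-Pearson Type III to the annual minima; the 7Q10 is the flow
        # with a 1-in-10 annual non-exceedance probability.
        params = stats.pearson3.fit(np.log10(min7))
        q7q10 = 10 ** stats.pearson3.ppf(0.1, *params)
        print(f"7Q10 estimate: {q7q10:.1f}")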

  11. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human living in terms of finance, environment and security. Annual maximum river flow data for Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that the MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov Chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters from the posterior distribution given by Bayes' theorem. The Metropolis-Hastings algorithm is used to cope with the high-dimensional state space faced by Monte Carlo methods. This approach also accounts for more of the uncertainty in parameter estimation, yielding better predictions of maximum river flow in Sabah.
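
    A compact random-walk Metropolis-Hastings sampler for the three GEV parameters, with flat priors, might look like the following. The priors, proposal scale, and starting values are illustrative choices, not those of the study; note that scipy's genextreme uses a shape sign convention opposite to the common GEV one.

        import numpy as np
        from scipy import stats

        def log_post(theta, x):
            """Log-posterior for GEV(mu, sigma, xi) with flat priors; sigma > 0
            is enforced by sampling log(sigma). scipy's shape c equals -xi."""
            mu, log_sigma, xi = theta
            ll = stats.genextreme.logpdf(x, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()
            return ll if np.isfinite(ll) else -np.inf

        def metropolis(x, n_iter=20000, step=0.05, seed=0):
            rng = np.random.default_rng(seed)
            theta = np.array([x.mean(), np.log(x.std()), 0.1])  # crude start
            lp, chain = log_post(theta, x), []
            for _ in range(n_iter):
                prop = theta + rng.normal(0.0, step, 3)    # random-walk proposal
                lp_prop = log_post(prop, x)
                if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
                    theta, lp = prop, lp_prop
                chain.append(theta.copy())
            return np.array(chain)

        x = stats.genextreme.rvs(c=-0.1, loc=100, scale=20, size=40, random_state=1)
        print(metropolis(x)[5000:].mean(axis=0))  # posterior means after burn-in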

  12. HUBBLE SPACE TELESCOPE FAR ULTRAVIOLET SPECTROSCOPY OF THE RECURRENT NOVA T PYXIDIS

    PubMed Central

    Godon, Patrick; Sion, Edward M.; Starrfield, Sumner; Livio, Mario; Williams, Robert E.; Woodward, Charles E.; Kuin, Paul; Page, Kim L.

    2018-01-01

    With six recorded nova outbursts, the prototypical recurrent nova T Pyxidis (T Pyx) is the ideal cataclysmic variable system to assess the net change of the white dwarf mass within a nova cycle. Recent estimates of the mass ejected in the 2011 outburst ranged from a few ~10⁻⁵ M⊙ to 3.3 × 10⁻⁴ M⊙, and assuming a mass accretion rate of 10⁻⁸–10⁻⁷ M⊙ yr⁻¹ for 44 yr, it has been concluded that the white dwarf in T Pyx is actually losing mass. Using NLTE disk modeling spectra to fit our recently obtained Hubble Space Telescope COS and STIS spectra, we find a mass accretion rate of up to two orders of magnitude larger than previously estimated. Our larger mass accretion rate is due mainly to the newly derived distance of T Pyx (4.8 kpc, larger than the previous 3.5 kpc estimate), our derived reddening of E(B − V) = 0.35 (based on combined IUE and GALEX spectra), and NLTE disk modeling (compared to blackbody and raw flux estimates in earlier works). We find that for most values of the reddening (0.25 ≤ E(B − V) ≤ 0.50) and white dwarf mass (0.70 M⊙ ≤ Mwd ≤ 1.35 M⊙) the accreted mass is larger than the ejected mass. Only for a low reddening (~0.25 and smaller) combined with a large white dwarf mass (0.9 M⊙ and larger) is the ejected mass larger than the accreted one. However, the best results are obtained for a larger value of reddening. PMID:29430290

  13. HUBBLE SPACE TELESCOPE FAR ULTRAVIOLET SPECTROSCOPY OF THE RECURRENT NOVA T PYXIDIS.

    PubMed

    Godon, Patrick; Sion, Edward M; Starrfield, Sumner; Livio, Mario; Williams, Robert E; Woodward, Charles E; Kuin, Paul; Page, Kim L

    2014-04-01

    With six recorded nova outbursts, the prototypical recurrent nova T Pyxidis (T Pyx) is the ideal cataclysmic variable system to assess the net change of the white dwarf mass within a nova cycle. Recent estimates of the mass ejected in the 2011 outburst ranged from a few ~10⁻⁵ M⊙ to 3.3 × 10⁻⁴ M⊙, and assuming a mass accretion rate of 10⁻⁸–10⁻⁷ M⊙ yr⁻¹ for 44 yr, it has been concluded that the white dwarf in T Pyx is actually losing mass. Using NLTE disk modeling spectra to fit our recently obtained Hubble Space Telescope COS and STIS spectra, we find a mass accretion rate of up to two orders of magnitude larger than previously estimated. Our larger mass accretion rate is due mainly to the newly derived distance of T Pyx (4.8 kpc, larger than the previous 3.5 kpc estimate), our derived reddening of E(B − V) = 0.35 (based on combined IUE and GALEX spectra), and NLTE disk modeling (compared to blackbody and raw flux estimates in earlier works). We find that for most values of the reddening (0.25 ≤ E(B − V) ≤ 0.50) and white dwarf mass (0.70 M⊙ ≤ Mwd ≤ 1.35 M⊙) the accreted mass is larger than the ejected mass. Only for a low reddening (~0.25 and smaller) combined with a large white dwarf mass (0.9 M⊙ and larger) is the ejected mass larger than the accreted one. However, the best results are obtained for a larger value of reddening.

  14. A comparative analysis of modeled and monitored ambient hazardous air pollutants in Texas: a novel approach using concordance correlation.

    PubMed

    Lupo, Philip J; Symanski, Elaine

    2009-11-01

    Often, in studies evaluating the health effects of hazardous air pollutants (HAPs), researchers rely on ambient air levels to estimate exposure. Two potential data sources are modeled estimates from the U.S. Environmental Protection Agency (EPA) Assessment System for Population Exposure Nationwide (ASPEN) and ambient air pollutant measurements from monitoring networks. The goal was to conduct comparisons of modeled and monitored estimates of HAP levels in the state of Texas using traditional approaches and a previously unexploited method, concordance correlation analysis, to better inform decisions regarding agreement. Census tract-level ASPEN estimates and monitoring data for all HAPs throughout Texas, available from the EPA Air Quality System, were obtained for 1990, 1996, and 1999. Monitoring sites were mapped to census tracts using U.S. Census data. Exclusions were applied to restrict the monitored data to measurements collected using a common sampling strategy with minimal missing values over time. Comparisons were made for 28 HAPs in 38 census tracts located primarily in urban areas throughout Texas. For each pollutant and by year of assessment, modeled and monitored air pollutant annual levels were compared using standard methods (i.e., ratios of model-to-monitor annual levels). Concordance correlation analysis was also used, which assesses linearity and agreement while providing a formal method of statistical inference. Forty-eight percent of the median model-to-monitor values fell between 0.5 and 2, whereas only 17% of concordance correlation coefficients were significant and greater than 0.5. On the basis of concordance correlation analysis, the findings indicate poorer agreement between modeled and monitored levels of ambient HAPs than is suggested by the previously applied ad hoc methods.
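
    Lin's concordance correlation coefficient is simple to compute directly: it scales the covariance by the two variances plus the squared mean difference, so it penalizes both scatter and systematic offset from the 45-degree line. A sketch with invented modeled and monitored values:

        import numpy as np

        def lin_ccc(x, y):
            """Lin's concordance correlation coefficient."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxy = np.cov(x, y, bias=True)[0, 1]
            return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

        modeled = [1.2, 0.8, 2.5, 3.1, 0.4]    # e.g., ASPEN estimates (invented)
        monitored = [1.0, 0.9, 2.1, 3.6, 0.5]  # e.g., AQS measurements (invented)
        print(f"CCC = {lin_ccc(modeled, monitored):.2f}")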

  15. Costs of food waste in South Africa: Incorporating inedible food waste.

    PubMed

    de Lange, Willem; Nahman, Anton

    2015-06-01

    The economic, social and environmental costs of food waste are being increasingly recognised. Food waste consists of both edible and inedible components. Whilst wastage of edible food is problematic for obvious reasons, there are also costs associated with the disposal of the inedible fraction to landfill. This is the third in a series of papers examining the costs of food waste throughout the value chain in South Africa. The previous papers focused on the edible portion of food waste. In this paper, costs associated with inedible food waste in South Africa are estimated, in terms of the value foregone by not recovering this waste for use in downstream applications, such as energy generation or composting; as well as costs associated with disposal to landfill. Opportunity costs are estimated at R6.4 (US$0.64) billion per annum, or R2668 (US$266) per tonne. Adding this to the previous estimate for edible food waste of R61.5 billion per annum (in 2012 prices; equivalent to R65 billion in 2013 prices) results in a total opportunity cost of food waste in South Africa (in terms of loss of a potentially valuable food source or resource) of R71.4 (US$7.14) billion per annum, or R5667 (US$567) per tonne. Thereafter, estimates of the costs associated with disposal of this food waste to landfill, including both financial costs and externalities (social and environmental costs), are taken into account. These costs amount to R255 (US$25) per tonne, giving rise to a total cost of food waste in South Africa of R75 billion (US$7.5 billion) per annum, or R5922 (US$592) per tonne. This is equivalent to 2.2% of South Africa's 2013 GDP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. GPS-PWV Estimation and Analysis for CGPS Sites Operating in Mexico

    NASA Astrophysics Data System (ADS)

    Gutierrez, O.; Vazquez, G. E.; Bennett, R. A.; Adams, D. K.

    2014-12-01

    Eighty permanent Global Positioning System (GPS) tracking stations, belonging to several networks spanning Mexico and intended for diverse purposes and applications, were used to estimate precipitable water vapor (PWV) from measurement series covering the period 2000-2014. We extracted the GPS-PWV from the ionosphere-free double-difference carrier phase observations, processed using the GAMIT software. The GPS data were processed with a 30 s sampling rate, a 15-degree cutoff angle, and precise GPS orbits disseminated by the IGS. The time-varying part of the zenith wet delay was estimated using the Global Mapping Function (GMF), while the constant part was evaluated using the Niell tropospheric model. The data reduction to compute the zenith wet delay follows a piecewise-linear strategy, and the delay is subsequently transformed to PWV estimates every 2 hours. Although there exist previous isolated studies estimating PWV in Mexico, this study is an attempt to perform a more complete and comprehensive analysis of PWV estimation throughout the Mexican territory. Our resulting GPS-based PWV values were compared to available PWV values for 30 stations that operate in Mexico and report PWV to Suominet. This comparison revealed differences of 1 to 2 mm between the GPS-PWV solution and the PWV reported by Suominet. Accurate values of GPS-PWV will help enhance Mexico's ability to investigate water vapor advection, convective and frontal rainfall, and long-term climate variability.
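
    The final zenith-wet-delay-to-PWV step is a standard conversion, PWV = Π(Tm)·ZWD with Π ≈ 0.15. The sketch below uses the commonly cited Bevis et al. refractivity constants; it is a generic illustration, not the GAMIT processing chain used in the study.

        def pwv_from_zwd(zwd_mm, tm_kelvin):
            """PWV = Pi * ZWD, with Pi built from Bevis et al.-style
            refractivity constants (SI units)."""
            rho_w = 1000.0   # density of liquid water, kg/m^3
            r_v = 461.5      # specific gas constant of water vapor, J/(kg K)
            k2p = 0.221      # K/Pa   (22.1 K/hPa)
            k3 = 3.739e3     # K^2/Pa (3.739e5 K^2/hPa)
            pi = 1.0e6 / (rho_w * r_v * (k3 / tm_kelvin + k2p))  # dimensionless, ~0.15
            return pi * zwd_mm

        print(f"{pwv_from_zwd(150.0, 273.0):.1f} mm PWV")  # ~23 mm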

  17. Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques

    PubMed Central

    Shyu, Conrad; Ytreberg, F. Marty

    2010-01-01

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation is provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
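
    The core idea, fitting a polynomial to the thermodynamic integration data and integrating it analytically over λ ∈ [0, 1], reduces to a few lines. The data values and polynomial degree below are invented for illustration:

        import numpy as np

        # TI data: lambda values (denser near the endpoints) and the mean
        # dH/dlambda at each (numbers invented).
        lam = np.array([0.0, 0.05, 0.15, 0.35, 0.65, 0.85, 0.95, 1.0])
        dhdl = np.array([12.1, 9.8, 6.4, 2.0, -2.9, -6.1, -8.2, -9.5])

        # Fit a polynomial to dH/dlambda and integrate it analytically over [0, 1].
        coeffs = np.polyfit(lam, dhdl, deg=4)
        anti = np.polyint(coeffs)
        dF = np.polyval(anti, 1.0) - np.polyval(anti, 0.0)
        print(f"polynomial fit: dF = {dF:.2f}")

        # Simple trapezoidal integration of the raw points, for comparison.
        trap = np.sum((dhdl[1:] + dhdl[:-1]) / 2 * np.diff(lam))
        print(f"trapezoid:      dF = {trap:.2f}")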

  18. Influence of Temperature, Relative Humidity, and Soil Properties on the Soil-Air Partitioning of Semivolatile Pesticides: Laboratory Measurements and Predictive Models.

    PubMed

    Davie-Martin, Cleo L; Hageman, Kimberly J; Chin, Yu-Ping; Rougé, Valentin; Fujita, Yuki

    2015-09-01

    Soil-air partition coefficient (Ksoil-air) values are often employed to investigate the fate of organic contaminants in soils; however, these values have not been measured for many compounds of interest, including semivolatile current-use pesticides. Moreover, predictive equations for estimating Ksoil-air values for pesticides (other than the organochlorine pesticides) have not been robustly developed, due to a lack of measured data. In this work, a solid-phase fugacity meter was used to measure the Ksoil-air values of 22 semivolatile current- and historic-use pesticides and their degradation products. Ksoil-air values were determined for two soils (semiarid and volcanic) under a range of environmentally relevant temperature (10-30 °C) and relative humidity (30-100%) conditions, such that 943 Ksoil-air measurements were made. Measured values were used to derive a predictive equation for pesticide Ksoil-air values based on temperature, relative humidity, soil organic carbon content, and pesticide-specific octanol-air partition coefficients. Pesticide volatilization losses from soil, calculated with the newly derived Ksoil-air predictive equation and a previously described pesticide volatilization model, were compared to previous results and showed that the choice of Ksoil-air predictive equation mainly affected the more-volatile pesticides and that the way in which relative humidity was accounted for was the most critical difference.

  19. Nonmonotonic variation of seawater ⁸⁷Sr/⁸⁶Sr across the Ivorian/Chadian boundary (Mississippian, Osagean): Evidence from marine cements within the Irish Waulsortian Limestone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douthit, T.L.; Meyers, W.J.; Hanson, G.N.

    1993-05-01

    Detailed analysis of compositionally unaltered marine fibrous cements (MFC) from a single core through the Mississippian Irish Waulsortian Limestone indicates that the variation of seawater ⁸⁷Sr/⁸⁶Sr is nonmonotonic across the Ivorian-Chadian boundary. This nonmonotonic variation has not been recognized by previous studies. Furthermore, marine cement yielded ⁸⁷Sr/⁸⁶Sr ratios lower than previously reported values for the Ivorian-Chadian (Osagean). Marine fibrous cements are interpreted to be compositionally unaltered on the basis of their nonluminescent character and stable isotope (C, O) compositions comparable to previous estimates of Mississippian marine calcite. The isotope chemistry (C, O, Sr) and cathodoluminescent character of the marine fibrous cements therefore remained intact during their conversion from high-Mg calcite to low-Mg calcite + microdolomite, a conversion that probably took place in marine water during precipitation of Zone 1 calcite cement, the oldest non-MFC cement. High stratigraphic resolution was obtained by restricting the sample set to a single core, 429 m long, thereby eliminating chronostratigraphic correlation errors. The core is estimated to represent about 9.8 million years of Waulsortian Limestone deposition. The maximum rate of change in seawater ⁸⁷Sr/⁸⁶Sr is −0.00012/Ma, comparable in magnitude to Tertiary values. The authors' data document the presence of fine-scale seawater ⁸⁷Sr/⁸⁶Sr modulations for the Ivorian/Chadian, in contrast to the previously published monotonic seawater ⁸⁷Sr/⁸⁶Sr curve for this interval, and emphasize the importance of well characterized intraformational isotopic baselines.

  20. Intermediary Variables and Algorithm Parameters for an Electronic Algorithm for Intravenous Insulin Infusion

    PubMed Central

    Braithwaite, Susan S.; Godara, Hemant; Song, Julie; Cairns, Bruce A.; Jones, Samuel W.; Umpierrez, Guillermo E.

    2009-01-01

    Background Algorithms for intravenous insulin infusion may assign the infusion rate (IR) by a two-step process. First, the previous insulin infusion rate (IRprevious) and the rate of change of blood glucose (BG) from the previous iteration of the algorithm are used to estimate the maintenance rate (MR) of insulin infusion. Second, the insulin IR for the next iteration (IRnext) is assigned to be commensurate with the MR and the distance of the current blood glucose (BGcurrent) from target. With use of a specific set of algorithm parameter values, a family of iso-MR curves is created, each giving IR as a function of MR and BG. Method To test the feasibility of estimating MR from the IRprevious and the previous rate of change of BG, historical hyperglycemic data points were used to compute the “maintenance rate cross step next estimate” (MRcsne). Historical cases had been treated with intravenous insulin infusion using a tabular protocol that estimated MR according to column-change rules. The mean IR on historical stable intervals (MRtrue), an estimate of the biologic value of MR, was compared to MRcsne during the hyperglycemic iteration immediately preceding the stable interval. Hypothetically calculated MRcsne-dependent IRnext was compared to IRnext assigned historically. An expanded theory of an algorithm is developed mathematically. Practical recommendations for computerization are proposed. Results The MRtrue determined on each of 30 stable intervals and the MRcsne during the immediately preceding hyperglycemic iteration differed, having medians with interquartile ranges 2.7 (1.2–3.7) and 3.2 (1.5–4.6) units/h, respectively. However, these estimates of MR were strongly correlated (R2 = 0.88). During hyperglycemia at 941 time points the IRnext assigned historically and the hypothetically calculated MRcsne-dependent IRnext differed, having medians with interquartile ranges 4.0 (3.0–6.0) and 4.6 (3.0–6.8) units/h, respectively, but these paired values again were correlated (R2 = 0.87). This article describes a programmable algorithm for intravenous insulin infusion. The fundamental equation of the algorithm gives the relationship among IR; the biologic parameter MR; and two variables expressing an instantaneous rate of change of BG, one of which must be zero at any given point in time and the other positive, negative, or zero, namely the rate of change of BG from below target (rate of ascent) and the rate of change of BG from above target (rate of descent). In addition to user-definable parameters, three special algorithm parameters discoverable in nature are described: the maximum rate of the spontaneous ascent of blood glucose during nonhypoglycemia, the glucose per daily dose of insulin exogenously mediated, and the MR at given patient time points. User-assignable parameters will facilitate adaptation to different patient populations. Conclusions An algorithm is described that estimates MR prior to the attainment of euglycemia and computes MR-dependent values for IRnext. Design features address glycemic variability, promote safety with respect to hypoglycemia, and define a method for specifying glycemic targets that are allowed to differ according to patient condition. PMID:20144334
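
    A toy version of the two-step iteration described above is sketched below. Every parameter name and value (the assumed BG effect per unit of insulin, the gain, the target) is an invented placeholder; this is not the published algorithm's fundamental equation or its tuning, and nothing here is clinical guidance.

        def next_infusion_rate(ir_prev, bg_prev, bg_now, dt_h,
                               bg_target=110.0, g_per_unit=25.0, gain=0.02):
            """Toy two-step iteration: (1) estimate the maintenance rate MR from
            the previous rate and the observed BG trend, (2) scale MR by the
            distance of the current BG from target. Illustrative only."""
            d_bg_dt = (bg_now - bg_prev) / dt_h   # mg/dL per hour
            # If BG fell, the previous rate exceeded maintenance, so correct it
            # downward using an assumed BG effect per unit of insulin.
            mr = max(ir_prev + d_bg_dt / g_per_unit, 0.0)
            return max(mr * (1.0 + gain * (bg_now - bg_target)), 0.0)

        print(next_infusion_rate(ir_prev=3.0, bg_prev=220.0, bg_now=180.0, dt_h=1.0))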

  1. Decision-making based on emotional images.

    PubMed

    Katahira, Kentaro; Fujimura, Tomomi; Okanoya, Kazuo; Okada, Masato

    2011-01-01

    The emotional outcome of a choice affects subsequent decision making. While the relationship between decision making and emotion has attracted attention, studies on emotion and decision making have developed independently. In this study, we investigated how the emotional valence of pictures, which was stochastically contingent on participants' choices, influenced subsequent decision making. In contrast to traditional value-based decision-making studies that use money or food as a reward, the "reward value" of the decision outcome, which guided the update of value for each choice, is unknown beforehand. To estimate the reward value of emotional pictures from participants' choice data, we used reinforcement learning models that have been used successfully in previous studies for modeling value-based decision making. We found that the estimated reward value was asymmetric between positive and negative pictures: the negative reward value of negative pictures (relative to neutral pictures) was larger in magnitude than the positive reward value of positive pictures. This asymmetry was not observed in the valence ratings for individual pictures, which participants provided regarding the emotion experienced upon viewing them. These results suggest that there may be a difference between experienced emotion and the effect of the experienced emotion on subsequent behavior. Our experimental and computational paradigm provides a novel way of quantifying how, and what aspects of, emotional events affect human behavior. The present study is a first step toward relating the large body of knowledge in emotion science to computational approaches to value-based decision making.
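
    A generic sketch of the modeling approach: estimate the pictures' "reward values" as free parameters of a Q-learning model with softmax choice, by maximum likelihood. The model structure, parameterization, and dummy data are assumptions for illustration, not the authors' exact model.

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(params, choices, outcomes):
            """Q-learning with softmax choice; the reward values of positive and
            negative pictures are free parameters (neutral fixed at 0)."""
            alpha, beta, r_pos, r_neg = params
            if not (0.0 < alpha < 1.0 and beta > 0.0):
                return np.inf
            r_map = {"pos": r_pos, "neu": 0.0, "neg": r_neg}
            q, ll = np.zeros(2), 0.0
            for c, o in zip(choices, outcomes):
                z = beta * q - (beta * q).max()
                p = np.exp(z) / np.exp(z).sum()    # softmax choice probabilities
                ll += np.log(p[c] + 1e-12)
                q[c] += alpha * (r_map[o] - q[c])  # prediction-error update
            return -ll

        # Dummy data; real choices/outcomes would come from the experiment.
        choices = [0, 0, 1, 0, 1, 1, 0]
        outcomes = ["pos", "pos", "neg", "pos", "neu", "neg", "pos"]
        fit = minimize(neg_log_lik, x0=[0.3, 2.0, 1.0, -1.0],
                       args=(choices, outcomes), method="Nelder-Mead")
        print(fit.x)  # estimated alpha, beta, r_pos, r_neg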

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spriggs, G D

    In a previous paper, the composite exposure rate conversion factor (ECF) for nuclear fallout was calculated using a simple theoretical photon-transport model. The theoretical model was used to fill in the gaps in the FGR-12 table generated by ORNL. The FGR-12 table contains the individual conversion factors for approximately 1,000 radionuclides. However, in order to calculate the exposure rate during the first 30 minutes following a nuclear detonation, the conversion factors for approximately 2,000 radionuclides are needed. From a human-effects standpoint, it is also necessary to have the dose rate conversion factors (DCFs) for all 2,000 radionuclides. The DCFs are used to predict the whole-body dose rates that would occur if a human were standing in a radiation field of known exposure rate. As calculated by ORNL, the whole-body dose rate (rem/hr) is approximately 70% of the exposure rate (R/hr) at one meter above the surface. Hence, the individual DCFs could be estimated by multiplying the individual ECFs by 0.7. Although this is a handy rule of thumb, a more consistent (and perhaps more accurate) method of estimating the individual DCFs for the missing radionuclides in the FGR-12 table is to use the linear relationship between DCF and total gamma energy released per decay. This relationship is shown in Figure 1. The DCFs for individual organs in the body can also be estimated from the estimated whole-body DCF. Using the DCFs given in FGR-12, the ratios of the organ-specific DCFs to the whole-body DCF were plotted as a function of the whole-body DCF. From these plots, the asymptotic ratios were obtained (see Table 1). Using these asymptotic ratios, the organ-specific DCFs can be estimated from the estimated whole-body DCF for each of the missing radionuclides in the FGR-12 table. Although this procedure for estimating the organ-specific DCFs may overestimate the value for some low gamma-energy emitters, having a finite value for the organ-specific DCFs in the table is probably better than having no value at all. A summary of the complete ECF and DCF values is given in Table 2.
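    The two estimation routes described can be sketched as follows (the slope, intercept, and organ ratio below are placeholders, not the FGR-12 values):

    ```python
    # The two estimation routes described above; slope, intercept, and the
    # asymptotic organ ratio are placeholders, not the FGR-12 values.
    def dcf_rule_of_thumb(ecf):
        """Whole-body DCF from the ECF via the ~70% rule of thumb."""
        return 0.7 * ecf

    def dcf_from_gamma_energy(e_gamma_mev, slope=0.5, intercept=0.0):
        """Linear relationship between whole-body DCF and total gamma
        energy released per decay (the Figure 1 route)."""
        return slope * e_gamma_mev + intercept

    def organ_dcf(whole_body_dcf, asymptotic_ratio):
        """Organ-specific DCF via the asymptotic ratio (the Table 1 route)."""
        return asymptotic_ratio * whole_body_dcf
    ```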

  3. Evaluation of earthquake potential in China

    NASA Astrophysics Data System (ADS)

    Rong, Yufang

    I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a Gutenberg-Richter magnitude distribution with modifications at higher magnitudes. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude; a sketch of this form is given below. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimates, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model fits the earthquake data better than the GSHAP model. By smoothing geodetic strain rate, another potential model was constructed and tested. I derived the upper magnitude limit from the special catalog, and assumed local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. By assuming the smoothed seismicity model as a null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.
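    A hedged sketch of such a magnitude distribution (a Gutenberg-Richter law with an exponential taper in seismic moment above the corner magnitude, one common way to implement the "rapid decrease"; the taper form and constants are illustrative assumptions):

    ```python
    import numpy as np

    # A Gutenberg-Richter rate with an exponential taper in seismic moment
    # above the corner magnitude (one common way to implement the "rapid
    # decrease" described above; constants here are illustrative).
    def tapered_gr_rate(m, a_value, b_value, m_corner, m_min=5.4):
        """Rate of events with magnitude >= m, per unit area and time."""
        moment = lambda mag: 10.0 ** (1.5 * mag + 9.1)      # N·m
        gr = a_value * 10.0 ** (-b_value * (m - m_min))     # classic G-R part
        taper = np.exp((moment(m_min) - moment(m)) / moment(m_corner))
        return gr * taper

    rates = tapered_gr_rate(np.array([5.4, 6.5, 8.0]), a_value=1e-4,
                            b_value=1.0, m_corner=8.0)
    ```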

  4. Design, implementation, and first-year outcomes of a value-based drug formulary.

    PubMed

    Sullivan, Sean D; Yeung, Kai; Vogeler, Carol; Ramsey, Scott D; Wong, Edward; Murphy, Chad O; Danielson, Dan; Veenstra, David L; Garrison, Louis P; Burke, Wylie; Watkins, John B

    2015-04-01

    Value-based insurance design attempts to align drug copayment tier with value rather than cost. Previous implementations of value-based insurance design have lowered copayments for drugs indicated for select "high value" conditions and have found modest improvements in medication adherence. However, these implementations have generally not resulted in cost savings to the health plan, suggesting a need for increased copayments for "low value" drugs. Further, previous implementations have assigned equal copayment reductions to all drugs within a therapeutic area without assessing the value of individual drugs. Aligning an individual drug's copayment with its specific value may yield greater clinical and economic benefits. In 2010, Premera Blue Cross, a large not-for-profit health plan in the Pacific Northwest, implemented a value-based drug formulary (VBF) that explicitly uses cost-effectiveness analyses after safety and efficacy reviews to estimate the value of each individual drug. Concurrently, Premera increased copayments for existing tiers. The objective of this study was to describe and evaluate the design, implementation, and first-year outcomes of the VBF. We compared observed pharmacy cost per member per month in the year following the VBF implementation with 2 comparator groups: (1) observed pharmacy costs in the year prior to implementation, and (2) expected costs if no changes were made to the pharmacy benefits. Expected costs were generated by applying autoregressive integrated moving averages to pharmacy costs over the previous 36 months. We used an interrupted time series analysis to assess drug use and adherence among individuals with diabetes, hypertension, or dyslipidemia compared with a group of members in plans that did not implement a VBF. Pharmacy costs decreased by 3% compared with the 12 months prior and 11% compared with expected costs. There was no significant decline in medication use or adherence to treatments for patients with diabetes, hypertension, or dyslipidemia. The VBF and copayment changes enabled pharmacy plan cost savings without negatively affecting utilization in key disease states.
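    The expected-cost comparator can be sketched as below (Python with statsmodels; the synthetic pre-period series, the ARIMA order, and the stand-in observed costs are assumptions for illustration, not the plan's fitted model):

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Sketch of the expected-cost comparator: fit an ARIMA model to the
    # 36 pre-implementation months of per-member-per-month (PMPM) pharmacy
    # cost and project the first post-VBF year.  The synthetic series and
    # the (1, 1, 1) order are placeholders, not the plan's fitted model.
    months = pd.period_range("2007-01", periods=36, freq="M")
    pre = pd.Series(70 + np.arange(36) * 0.4 +
                    np.random.default_rng(1).normal(0, 1.5, 36), index=months)

    expected = ARIMA(pre, order=(1, 1, 1)).fit().forecast(12)   # counterfactual
    observed = expected.to_numpy() * 0.89                       # stand-in observed PMPM
    print(f"savings vs expected: {1 - observed.mean() / expected.mean():.0%}")
    ```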

  5. Estimates of self, parental, and partner multiple intelligence and their relationship with personality, values, and demographic variables: a study in Britain and France.

    PubMed

    Swami, Viren; Furnham, Adrian; Zilkha, Susan

    2009-11-01

    In the present study, 151 British and 151 French participants estimated their own, their parents', and their partner's overall intelligence and 13 'multiple intelligences.' In accordance with previous studies, men rated themselves as higher on almost all measures of intelligence, but there were few cross-national differences. There were also important sex differences in ratings of parental and partner intelligence. Participants generally believed they were more intelligent than their parents but not their partners. Regressions indicated that participants believed verbal, logical-mathematical, and spatial intelligence to be the main predictors of overall intelligence. Regressions also showed that participants' Big Five personality scores (in particular, Extraversion and Openness), but not values or beliefs about intelligence and intelligence tests, were good predictors of estimated intelligence. Results were discussed in terms of the influence of gender-role stereotypes.

  6. Diabatic heating rate estimates from European Centre for Medium-Range Weather Forecasts analyses

    NASA Technical Reports Server (NTRS)

    Christy, John R.

    1991-01-01

    Vertically integrated diabatic heating rate estimates (H) calculated from 32 months of European Centre for Medium-Range Weather Forecasts daily analyses (May 1985-December 1987) are determined as residuals of the thermodynamic equation in pressure coordinates. Values for global, hemispheric, zonal, and grid-point H are given as they vary over the time period examined. The distribution of H is compared with previous results and with outgoing longwave radiation (OLR) measurements. The most significant negative correlations between H and OLR occur for (1) tropical and Northern Hemisphere midlatitude oceanic areas and (2) zonal and hemispheric mean values for periods less than 90 days. The largest positive correlations are seen in periods greater than 90 days for the Northern Hemisphere mean and continental areas of North Africa, North America, northern Asia, and Antarctica. The physical basis for these relationships is discussed. An inter-year comparison between 1986 and 1987 reveals the ENSO signal.
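    For reference, the standard residual form of the thermodynamic equation in pressure coordinates reads as follows (notation assumed here; the paper's exact formulation may differ):

    ```latex
    % Standard residual form of the thermodynamic equation in pressure
    % coordinates (notation assumed, not taken from the paper):
    \[
      H \;=\; c_p \left[
          \frac{\partial T}{\partial t}
          + \mathbf{V}\cdot\nabla_p T
          + \omega \left( \frac{\partial T}{\partial p}
                          - \frac{R T}{c_p\, p} \right)
      \right],
    \]
    % with H then integrated vertically in pressure to give the
    % vertically integrated diabatic heating rate.
    ```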

  7. Toxic exposure in America: estimating fetal and infant health outcomes from 14 years of TRI reporting.

    PubMed

    Agarwal, Nikhil; Banternghansa, Chanont; Bui, Linda T M

    2010-07-01

    We examine the effect of exposure to a set of toxic pollutants that are tracked by the Toxic Release Inventory (TRI) from manufacturing facilities on county-level infant and fetal mortality rates in the United States between 1989 and 2002. Unlike previous studies, we control for toxic pollution from both mobile sources and non-TRI-reporting facilities. We find significant adverse effects of toxic air pollution concentrations on infant mortality rates. Among toxic air pollutants, we find that releases of carcinogens are particularly problematic for infant health outcomes. We estimate that the average county-level decreases in various categories of TRI concentrations saved in excess of 13,800 infant lives from 1989 to 2002. Using $1.8M, the low end of the range for the value of a statistical life typically used by the EPA, the lives saved would be valued at approximately $25B.

  8. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    2016-07-04

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
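    A minimal sketch of such a surrogate-based Metropolis sampler (the surrogate, prior bounds, observation error, and step size are all placeholders; the posterior mode and 95% credibility interval are read off the resulting chain):

    ```python
    import numpy as np
    from functools import partial

    def log_post(theta, obs_lh, sigma, surrogate, lo, hi):
        # Uniform prior over [lo, hi]; Gaussian likelihood on the
        # climatological latent heat flux.
        if np.any(theta < lo) or np.any(theta > hi):
            return -np.inf
        return -0.5 * ((surrogate(theta) - obs_lh) / sigma) ** 2

    def metropolis(logp, theta0, step, n, seed=2):
        rng = np.random.default_rng(seed)
        chain, lp = [theta0], logp(theta0)
        for _ in range(n):
            prop = chain[-1] + step * rng.standard_normal(theta0.size)
            lp_prop = logp(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                lp = lp_prop
                chain.append(prop)
            else:
                chain.append(chain[-1])
        return np.array(chain)

    # Toy surrogate standing in for the emulated CLM response.
    surrogate = lambda theta: 80.0 + 20.0 * theta[0] - 5.0 * theta[1]
    logp = partial(log_post, obs_lh=95.0, sigma=5.0, surrogate=surrogate,
                   lo=np.zeros(2), hi=np.ones(2))
    chain = metropolis(logp, theta0=np.array([0.5, 0.5]), step=0.1, n=5000)
    # Posterior mode and 95% credibility interval come from `chain`.
    ```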

  9. Impact of orbit modeling on DORIS station position and Earth rotation estimates

    NASA Astrophysics Data System (ADS)

    Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav

    2014-04-01

    The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting SRP scaling parameter or fixing it on pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, results for most of the monitored station parameters in comparable accuracy as the dynamical model that employs precise non-conservative force modeling. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for x-pole and 12% for y-pole. The experiments show that adjusting atmospheric drag scaling parameters each 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data it was however not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series as well as its mitigation by fixing the SRP parameters on pre-defined values.

  10. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE PAGES

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...

    2016-06-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  11. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  12. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites

    NASA Astrophysics Data System (ADS)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura

    2016-07-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  13. Deformation due to the distension of cylindrical igneous contacts: A kinematic model

    NASA Astrophysics Data System (ADS)

    Morgan, John

    1980-06-01

    A simple kinematic model is described that predicts the state of overall wall-rock strain resulting from the distension of igneous contacts. It applies to the axially symmetric expansion of any pluton whose overall shape is a cylinder with circular cross section, i.e., to late magmatic plutons which are circular or annular in cross section. The model is not capable of predicting the strain distribution in the zone of contact strain, but does predict components of overall strain whose magnitudes are calculated from the change in shape of the zone of contact strain. These strain components are: (1) overall radial shortening of the wall rocks, ē_r; (2) overall vertical extension, ē_v; and (3) overall horizontal extension parallel to the contact, ē_h (the axis of symmetry is arbitrarily oriented vertically). In addition, one local strain magnitude can be predicted, namely the horizontal extension of the contact surface, e_hc. The four strain parameters and the ratio (1 + ē_v)/(1 + ē_h) are graphed as functions of two independent variables: (1) outward distension of the contact, (r - r₀)/r; and (2) depth of contact strain, (r_d - r)/r, where r is the present, observed radius of the pluton, r₀ is the original radius, and r_d is the radius of contact strain. If (r_d - r)/r is reduced or (r - r₀)/r is increased, the absolute values of the overall strain components are increased; e_hc increases with (r - r₀)/r but is independent of (r_d - r)/r, and (1 + ē_v)/(1 + ē_h) ≅ 1 over a large range of values of both independent variables. The model has been applied to two Archean plutons in northwestern Ontario. According to a previous study, strain near the contact of the Bamaji-Blackstone batholith is characterized by large values of extension parallel to the contact and shortening normal to the contact; (r - r₀)/r and (r_d - r)/r are estimated to be less than 0.20 and 0.27, respectively. The horizontal extension parallel to the contact is apparently a minimum estimate of e_hc, and the depth of contact strain was previously underestimated. The range of values of e_hc indicates that (r - r₀)/r is larger than previously estimated by a factor of at least three. A similar problem has been encountered at the convex boundary of the Marmion Lake crescentic pluton. The pluton was emplaced along an older contact between greenstone and tonalitic gneiss. A minimum value of the outward displacement of the convex boundary of the pluton can be estimated from a major fold in the greenstone. It is found that the magnitude of this outward displacement is greater than the width of the pluton, or (r - r₀). Apparently, the folding pre-dates the emplacement of the crescent; it probably dates from the emplacement of the tonalitic gneiss into greenstone cover.

  14. Maximum likelihood estimation of correction for dilution bias in simple linear regression using replicates from subjects with extreme first measurements.

    PubMed

    Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn

    2008-09-30

    The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and the choice of estimator. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope, when the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs are improved by maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects for estimation of the predictor's variance and by the use of the second measurement for estimation of the covariance between response and predictor. The gain from MLE increases with a stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real-data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, as well as simulations examining the behavior of profile-likelihood-based confidence intervals. MLE was shown to be a robust estimator for non-normal distributions and efficient for small-sample situations. Copyright (c) 2008 John Wiley & Sons, Ltd.
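    The moment-based version of this correction can be sketched as follows (simulated data; the extreme-subset threshold and error variances are illustrative, and the paper's MLE refines rather than matches this estimator):

    ```python
    import numpy as np

    # Moment-based version of the correction described above: the naive
    # slope is divided by the slope of the second replicate regressed on
    # the first, estimated in the "extreme first measurement" subset.
    # Data are simulated; the MLE in the paper refines this estimator.
    rng = np.random.default_rng(3)
    true_x = rng.normal(0.0, 1.0, 500)
    x1 = true_x + rng.normal(0.0, 0.5, 500)     # error-prone first measurement
    x2 = true_x + rng.normal(0.0, 0.5, 500)     # replicate (reliability study)
    y = 2.0 * true_x + rng.normal(0.0, 1.0, 500)

    slope = lambda x, z: np.cov(x, z)[0, 1] / np.var(x, ddof=1)
    naive = slope(x1, y)                        # attenuated towards zero
    extreme = np.abs(x1) > np.quantile(np.abs(x1), 0.8)
    corrected = naive / slope(x1[extreme], x2[extreme])
    print(naive, corrected)                     # ~1.6 vs ~2.0
    ```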

  15. Using remote sensing and GIS techniques to estimate discharge and recharge fluxes for the Death Valley regional groundwater flow system, USA

    USGS Publications Warehouse

    D'Agnese, F. A.; Faunt, C.C.; Turner, A. Keith

    1996-01-01

    The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.

  16. Subjective Estimation of Task Time and Task Difficulty of Simple Movement Tasks.

    PubMed

    Chan, Alan H S; Hoffmann, Errol R

    2017-01-01

    It has been demonstrated in previous work that the same neural structures are used for both imagined and real movements. To provide a strong test of the similarity of imagined and actual movement times, 4 simple movement tasks were used to determine the relationship between estimated task time and actual movement time. The tasks were single-component visually controlled movements, 2-component visually controlled movements, low index of difficulty (ID) moves, and pin-to-hole transfer movements. For each task there was good correspondence between the mean estimated times and actual movement times. In all cases, the same factors determined the actual and estimated movement times: the amplitudes of movement and the IDs of the component movements; however, the contribution of each of these variables differed for the imagined and real tasks. Generally, the standard deviations of the estimated times were linearly related to the estimated time values. Overall, the data provide strong evidence for the same neural structures being used for both imagined and actual movements.
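    For concreteness, the index of difficulty referred to above is typically the Fitts formulation; a tiny illustration (the regression coefficients a and b are hypothetical):

    ```python
    import math

    # The index of difficulty (ID) referred to above, in its usual Fitts
    # form; the regression coefficients a and b are hypothetical.
    def index_of_difficulty(amplitude, width):
        return math.log2(2 * amplitude / width)          # bits

    def movement_time(amplitude, width, a=0.1, b=0.15):  # seconds
        return a + b * index_of_difficulty(amplitude, width)

    print(movement_time(amplitude=200, width=10))        # ID ≈ 5.3 bits
    ```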

  17. Seasonal Variability in Global Eddy Diffusion and the Effect on Thermospheric Neutral Density

    NASA Astrophysics Data System (ADS)

    Pilinski, M.; Crowley, G.

    2014-12-01

    We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time between January 2004 and January 2008 were estimated from residuals of neutral density measurements made by the CHallenging Minisatellite Payload (CHAMP) and simulations made using the Thermosphere Ionosphere Mesosphere Electrodynamics - Global Circulation Model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy-diffusivity models. The eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the RMS difference between the TIME-GCM model and density data from a variety of satellites is reduced by an average of 5%. This result indicates that global thermospheric density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates how eddy diffusion could be estimated in near real time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are some limitations of this method, which are discussed, including that the latitude dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion consistent with diffusion observations made by other techniques.

  18. Seasonal variability in global eddy diffusion and the effect on neutral density

    NASA Astrophysics Data System (ADS)

    Pilinski, M. D.; Crowley, G.

    2015-04-01

    We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time were estimated from residuals of neutral density measurements made by the Challenging Minisatellite Payload (CHAMP) and simulations made using the thermosphere-ionosphere-mesosphere electrodynamics global circulation model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy diffusivity models. Eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the root-mean-square sum for the TIME-GCM model is reduced by an average of 5% when compared to density data from a variety of satellites, indicating that the fidelity of global density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates that eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are limitations to this method, which are discussed, including that the latitude dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion which is also consistent with diffusion observations made by other techniques.

  19. Stress drop estimates and hypocenter relocations of induced earthquakes near Fox Creek, Alberta

    NASA Astrophysics Data System (ADS)

    Clerc, F.; Harrington, R. M.; Liu, Y.; Gu, Y. J.

    2016-12-01

    This study investigates the physical differences between induced and naturally occurring earthquakes using a sequence of events potentially induced by hydraulic fracturing near Fox Creek, Alberta. We precisely estimate static stress drops to determine whether the range of values is low compared with values estimated for naturally occurring events, as has been suggested by previous studies. Starting with the Natural Resources Canada earthquake catalog and using waveform data from regional networks, we use a spectral ratio method to calculate the static stress drop values of a group of relocated earthquakes occurring in close proximity to hydraulic fracturing wells from December 2013 to June 2015. The spectral ratio method allows us to precisely constrain the corner frequencies of the amplitude spectra by eliminating the path and site effects of co-located event pairs. Our estimated stress drop values range from 0.1 to 149 MPa over the full range of observed magnitudes, Mw 1.5-4, which is on the high side of the range typically reported for tectonic events but consistent with other regional studies [Zhang et al., 2016; Wang et al., 2016]. Stress drop values range from 11 to 93 MPa over the magnitude range Mw 3-4, where they appear to be scale invariant; they are less well constrained at lower magnitudes due to noise and bandwidth limitations. We observe no correlation between event stress drop and hypocenter depth or distance from the wells. Relocated hypocenters cluster around corresponding injection wells and form fine-scale lineations, suggesting the presence and orientation of fault planes. We conclude that neither the range of stress drops nor their scaling with respect to magnitude can be used to conclusively discriminate induced and tectonic earthquakes, as stress drop values may be greatly affected by the regional setting. Instead, the double-difference relocations may be a more reliable indicator of induced seismicity.
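    A sketch of the spectral-ratio approach (the ratio of two Brune omega-squared spectra for a co-located event pair, in which path and site terms cancel, followed by an Eshelby circular-crack stress drop; all numbers below are placeholders):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Spectral ratio of two co-located events modeled as the ratio of two
    # Brune omega-squared spectra (path and site terms cancel), followed
    # by an Eshelby circular-crack stress drop.  All values are placeholders.
    def brune_ratio(f, moment_ratio, fc_large, fc_small):
        return moment_ratio * (1 + (f / fc_small) ** 2) / (1 + (f / fc_large) ** 2)

    def stress_drop(m0, fc, beta=3500.0, k=0.37):
        r = k * beta / fc                  # source radius, m
        return 7.0 / 16.0 * m0 / r ** 3    # Pa

    f = np.linspace(1.0, 50.0, 200)
    rng = np.random.default_rng(4)
    obs = brune_ratio(f, 30.0, 8.0, 25.0) * (1 + 0.02 * rng.standard_normal(f.size))
    popt, _ = curve_fit(brune_ratio, f, obs, p0=[25.0, 6.0, 20.0])
    print(stress_drop(m0=1e14, fc=popt[1]) / 1e6, "MPa")   # ~10 MPa for Mw ≈ 3.3
    ```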

  20. Intelligent databases assist transparent and sound economic valuation of ecosystem services.

    PubMed

    Villa, Ferdinando; Ceroni, Marta; Krivov, Sergey

    2007-06-01

    Assessment and economic valuation of services provided by ecosystems to humans has become a crucial phase in environmental management and policy-making. As primary valuation studies are out of the reach of many institutions, secondary valuation or benefit transfer, where the results of previous studies are transferred to the geographical, environmental, social, and economic context of interest, is becoming increasingly common. This has brought to light the importance of environmental valuation databases, which provide reliable valuation data to inform secondary valuation with enough detail to enable the transfer of values across contexts. This paper describes the role of next-generation, intelligent databases (IDBs) in assisting the activity of valuation. Such databases employ artificial intelligence to inform the transfer of values across contexts, enforcing comparability of values and allowing users to generate custom valuation portfolios that synthesize previous studies and provide aggregated value estimates to use as a base for secondary valuation. After a general introduction, we introduce the Ecosystem Services Database, the first IDB for environmental valuation to be made available to the public, describe its functionalities and the lessons learned from its usage, and outline the remaining needs and expected future developments in the field.

  1. Evaluation of Rotor Structural and Aerodynamic Loads using Measured Blade Properties

    NASA Technical Reports Server (NTRS)

    Jung, Sung N.; You, Young-Hyun; Lau, Benton H.; Johnson, Wayne; Lim, Joon W.

    2012-01-01

    The structural properties of Higher-harmonic Aeroacoustic Rotor Test (HART I) blades have been measured using the original set of blades tested in the wind tunnel in 1994. A comprehensive rotor dynamics analysis is performed to address the effect of the measured blade properties on the airloads, blade motions, and structural loads of the rotor. The measurements include bending and torsion stiffness, geometric offsets, and mass and inertia properties of the blade. The measured properties are correlated against the estimated values obtained initially by the manufacturer of the blades. The previously estimated blade properties showed consistently higher stiffnesses, by up to 30% for flap bending in the blade inboard root section. The measured offset between the center of gravity and the elastic axis is larger by about 5% of chord length compared with the estimated value. The comprehensive rotor dynamics analysis was carried out using the measured blade property set for the HART I rotor with and without HHC (Higher Harmonic Control) pitch inputs. A significant improvement in blade motions and structural loads is obtained with the measured blade properties.

  2. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decays approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic model we derived the uniform upper magnitude limit from the special catalog and assumed local a-values proportional to the maximum horizontal strain rate. In prospective tests the geodetic model agrees well with earthquake occurrence. The smoothed seismicity model performs best of the four models.

  3. The influence of sulfur and hair growth on stable isotope diet estimates for grizzly bears.

    PubMed

    Mowat, Garth; Curtis, P Jeff; Lafferty, Diana J R

    2017-01-01

    Stable isotope ratios of grizzly bear (Ursus arctos) guard hair collected from bears on the lower Stikine River, British Columbia (BC) were analyzed to: 1) test whether measuring δ34S values improved the precision of the salmon (Oncorhynchus spp.) diet fraction estimate relative to δ15N as is conventionally done, 2) investigate whether measuring δ34S values improves the separation of diet contributions of moose (Alces alces), marmot (Marmota caligata), and mountain goat (Oreamnos americanus), and 3) examine the relationship between collection date, length of hair, and stable isotope values. Variation in isotope signatures among hair samples from the same bear and year was not trivial. The addition of δ34S values to mixing models used to estimate diet fractions generated small improvements in the precision of salmon and terrestrial prey diet fractions. Although the δ34S value for salmon is precise and appears general among species and areas, sulfur ratios were strongly correlated with nitrogen ratios and therefore added little new information to the mixing model regarding the consumption of salmon. Mean δ34S values for the three terrestrial herbivores of interest were similar and imprecise, so these data also added little new information to the mixing model. The addition of sulfur data did confirm that at least some bears in this system ate marmots during summer and fall. We show that there are bears with short hair that assimilate >20% salmon in their diet and bears with longer hair that eat no salmon living within a few kilometers of one another in a coastal ecosystem. Grizzly bears are thought to re-grow hair between June and October; however, our analysis of sectioned hair suggested that at least some hairs begin growing in July or August, not June, and that the hair of wild bears may grow faster than observed in captive bears. Our hair samples may have been from the year of sampling or the previous year because samples were collected in summer when bears were growing new hair. The salmon diet fraction increased with later hair collection dates, as expected if samples were from the year of sampling, because salmon began to arrive in mid-summer. Bears that ate salmon had shorter hair, and δ15N and δ34S values declined with hair length, also suggesting that some hair samples were grown in the year of sampling. To be sure to capture an entire hair growth period, samples must be collected in late fall. Early spring samples are also likely to be from the previous year, but the date when hair begins to grow appears to vary. Choosing the longest hair available should increase the chance that the hair was grown during the previous year and maximize the period over which diet is measured.
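    At its core, the salmon fraction comes from source mixing of isotope ratios; a minimal two-source illustration (all values invented; the study's models handle more sources, discrimination corrections, and error terms):

    ```python
    # Minimal two-source mixing illustration; the study's models add more
    # sources, discrimination corrections, and error terms.  Values invented.
    def salmon_fraction(delta_hair, delta_salmon, delta_terrestrial):
        return (delta_hair - delta_terrestrial) / (delta_salmon - delta_terrestrial)

    print(salmon_fraction(delta_hair=8.0, delta_salmon=14.0,
                          delta_terrestrial=3.0))   # ~0.45
    ```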

  4. The influence of sulfur and hair growth on stable isotope diet estimates for grizzly bears

    PubMed Central

    Curtis, P. Jeff; Lafferty, Diana J. R.

    2017-01-01

    Stable isotope ratios of grizzly bear (Ursus arctos) guard hair collected from bears on the lower Stikine River, British Columbia (BC) were analyzed to: 1) test whether measuring δ34S values improved the precision of the salmon (Oncorhynchus spp.) diet fraction estimate relative to δ15N as is conventionally done, 2) investigate whether measuring δ34S values improves the separation of diet contributions of moose (Alces alces), marmot (Marmota caligata), and mountain goat (Oreamnos americanus), and 3) examine the relationship between collection date, length of hair, and stable isotope values. Variation in isotope signatures among hair samples from the same bear and year was not trivial. The addition of δ34S values to mixing models used to estimate diet fractions generated small improvements in the precision of salmon and terrestrial prey diet fractions. Although the δ34S value for salmon is precise and appears general among species and areas, sulfur ratios were strongly correlated with nitrogen ratios and therefore added little new information to the mixing model regarding the consumption of salmon. Mean δ34S values for the three terrestrial herbivores of interest were similar and imprecise, so these data also added little new information to the mixing model. The addition of sulfur data did confirm that at least some bears in this system ate marmots during summer and fall. We show that there are bears with short hair that assimilate >20% salmon in their diet and bears with longer hair that eat no salmon living within a few kilometers of one another in a coastal ecosystem. Grizzly bears are thought to re-grow hair between June and October; however, our analysis of sectioned hair suggested that at least some hairs begin growing in July or August, not June, and that the hair of wild bears may grow faster than observed in captive bears. Our hair samples may have been from the year of sampling or the previous year because samples were collected in summer when bears were growing new hair. The salmon diet fraction increased with later hair collection dates, as expected if samples were from the year of sampling, because salmon began to arrive in mid-summer. Bears that ate salmon had shorter hair, and δ15N and δ34S values declined with hair length, also suggesting that some hair samples were grown in the year of sampling. To be sure to capture an entire hair growth period, samples must be collected in late fall. Early spring samples are also likely to be from the previous year, but the date when hair begins to grow appears to vary. Choosing the longest hair available should increase the chance that the hair was grown during the previous year and maximize the period over which diet is measured. PMID:28248995

  5. Modeling of the Wegener Bergeron Findeisen process—implications for aerosol indirect effects

    NASA Astrophysics Data System (ADS)

    Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.

    2008-10-01

    A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the process of phase transition in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation, in which the onset of the WBF effect depended on a threshold value of the cloud ice mixing ratio. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds compared with the previous approach. The new parameterization yields a model state in reasonable agreement with observed quantities, allowing for calculations of aerosol effects on mixed-phase clouds involving a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.

  6. The stopping rate of negative cosmic-ray muons near sea level

    NASA Technical Reports Server (NTRS)

    Spannagel, G.; Fireman, E. L.

    1971-01-01

    A production rate of 0.065 ± 0.003 Ar-37 atoms/kg min of K-39 at 2-mwe depth below sea level was measured by sweeping argon from potassium solutions. This rate is unaffected by surrounding the solution with paraffin and is attributed to negative muon captures and the electromagnetic interaction of fast muons, not to the nucleonic cosmic-ray component. The Ar-37 yield from K-39 by the stopping of negative muons in a muon beam of a synchrocyclotron was measured to be 8.5 ± 1.7%. The stopping rate of negative cosmic-ray muons at 2-mwe depth below sea level, derived from these measurements and an estimated 17% electromagnetic production, is 0.63 ± 0.13 μ⁻/kg min. Previous measurements of the muon stopping rate vary by a factor of 5. Our value is slightly higher but is consistent with the two previous high values. The sensitivity of the Ar-37 radiochemical method for the detection of muons is considerably higher than that of previous radiochemical methods, and it could be used to measure negative muon capture rates at greater depths.
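    The quoted stopping rate follows directly from the quoted numbers; as a worked check (notation assumed):

    ```latex
    % Worked check of the quoted stopping rate (simple propagation of the
    % quoted numbers; notation assumed, not taken from the paper):
    \[
      R_{\mu^-} \;=\; \frac{P_{^{37}\mathrm{Ar}}\,(1 - f_{\mathrm{em}})}{Y}
      \;=\; \frac{0.065 \times (1 - 0.17)}{0.085}
      \;\approx\; 0.63~\mu^-\,\mathrm{kg^{-1}\,min^{-1}}
    \]
    ```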

  7. Navigation System Design and State Estimation for a Small Rigid Hull Inflatable Boat (RHIB)

    DTIC Science & Technology

    2014-09-01

    addition of the Coriolis term as previously defined has no effect on pitch, only one measurement is compared against Condor's true pitch angle values... the effect of higher order terms. Lastly, the zeroth weight of the scaled weight set can be modified to incorporate prior knowledge of the

  8. Demonstration and Validation of GTS Long-Term Monitoring Optimization Software at Military and Government Sites

    DTIC Science & Technology

    2011-02-01

    ...and 3) assessing whether new wells should be added and where (i.e., network adequacy). • Predict allows import and comparison of new sampling...data against previously estimated trends and maps. Two options include trend flagging and plume flagging to identify potentially anomalous new values

  9. Hydrochemical and 14C constraints on groundwater recharge and interbasin flow in an arid watershed: Tule Desert, Nevada

    NASA Astrophysics Data System (ADS)

    Hagedorn, Benjamin

    2015-04-01

    Geochemical data deduced from groundwater and vein calcite were used to quantify groundwater recharge and interbasin flow rates in the Tule Desert (southeastern Nevada). 14C age gradients below the water table suggest recharge rates of 1-2 mm/yr which correspond to a sustainable yield of 5 × 10-4 km3/yr to 1 × 10-3 km3/yr. Uncertainties in the applied effective porosity value and increasing horizontal interbasin flow components at greater depths may bias these estimates low compared to those previously reported using the water budget method. The deviation of the groundwater δ18O time-series pattern for the Pleistocene-Holocene transition from that of the Devils Hole vein calcite (which is considered a proxy for local climate change) allows interbasin flow rates of northerly derived groundwater to be estimated. The constrained rates (75.0-120 m/yr) are slightly higher than those previously calculated using Darcy's Law, but translate into hydraulic conductivity values strikingly similar to those obtained from pump tests. Data further indicate that production wells located closer to the western mountainous margin will be producing mainly from locally derived mountain-system recharge whereas wells located closer to the eastern margin are more influenced by older, regionally derived carbonate groundwater.

  10. Blowing Snow Sublimation and Transport over Antarctica from 11 Years of CALIPSO Observations

    NASA Technical Reports Server (NTRS)

    Palm, Stephen P.; Kayetha, Vinay; Yang, Yuekui; Pauly, Rebecca

    2017-01-01

    Blowing snow processes commonly occur over the earth's ice sheets when the 10 m wind speed exceeds a threshold value. These processes play a key role in the sublimation and redistribution of snow, thereby influencing the surface mass balance. Prior field studies and modeling results have shown the importance of blowing snow sublimation and transport for the surface mass budget and hydrological cycle of high-latitude regions. For the first time, we present continent-wide estimates of blowing snow sublimation and transport over Antarctica for the period 2006-2016 based on direct observation of blowing snow events. We use an improved version of the blowing snow detection algorithm developed for previous work that uses atmospheric backscatter measurements obtained from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) lidar aboard the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) satellite. The blowing snow events identified by CALIPSO and meteorological fields from MERRA-2 are used to compute the blowing snow sublimation and transport rates. Our results show that maximum sublimation occurs along and slightly inland of the coastline. This is contrary to the observed maximum blowing snow frequency, which occurs over the interior. The associated temperature and moisture reanalysis fields likely contribute to the spatial distribution of the maximum sublimation values. However, the spatial pattern of the sublimation rate over Antarctica is consistent with modeling studies and precipitation estimates. Overall, our results show that the 2006-2016 Antarctic average integrated blowing snow sublimation is about 393 ± 196 Gt yr⁻¹, which is considerably larger than previous model-derived estimates. We find a maximum blowing snow transport of 5 Mt km⁻¹ yr⁻¹ over parts of East Antarctica and estimate that the average snow transport from continent to ocean is about 3.7 Gt yr⁻¹. These continent-wide estimates are the first of their kind and can be used to help model and constrain the surface mass budget over Antarctica.

  11. Gene genealogies for genetic association mapping, with application to Crohn's disease

    PubMed Central

    Burkett, Kelly M.; Greenwood, Celia M. T.; McNeney, Brad; Graham, Jinko

    2013-01-01

    A gene genealogy describes relationships among haplotypes sampled from a population. Knowledge of the gene genealogy for a set of haplotypes is useful for estimation of population genetic parameters, and it also has potential application in finding disease-predisposing genetic variants. As the true gene genealogy is unknown, Markov chain Monte Carlo (MCMC) approaches have been used to sample genealogies conditional on data at multiple genetic markers. We previously implemented an MCMC algorithm to sample from an approximation to the distribution of the gene genealogy conditional on haplotype data. Our approach samples ancestral trees, recombination and mutation rates at a genomic focal point. In this work, we describe how our sampler can be used to find disease-predisposing genetic variants in samples of cases and controls. We use a tree-based association statistic that quantifies the degree to which case haplotypes are more closely related to each other around the focal point than control haplotypes, without relying on a disease model. As the ancestral tree is a latent variable, so is the tree-based association statistic. We show how the sampler can be used to estimate the posterior distribution of the latent test statistic and corresponding latent p-values, which together comprise a fuzzy p-value. We illustrate the approach on a publicly available dataset from a study of Crohn's disease that consists of genotypes at multiple SNP markers in a small genomic region. We estimate the posterior distribution of the tree-based association statistic and the recombination rate at multiple focal points in the region. Reassuringly, the posterior mean recombination rates estimated at the different focal points are consistent with previously published estimates. The tree-based association approach finds multiple sub-regions where the case haplotypes are more genetically related than the control haplotypes, suggesting that there may be one or more disease-predisposing loci. PMID:24348515

  12. Testing mapping algorithms of the cancer-specific EORTC QLQ-C30 onto EQ-5D in malignant mesothelioma.

    PubMed

    Arnold, David T; Rowen, Donna; Versteegh, Matthijs M; Morley, Anna; Hooper, Clare E; Maskell, Nicholas A

    2015-01-23

    In order to estimate utilities for cancer studies where the EQ-5D was not used, the EORTC QLQ-C30 can be mapped onto the EQ-5D using existing mapping algorithms. Several mapping algorithms exist for this transformation; however, they tend to lose accuracy for patients in poor health states. The aim of this study was to test all existing mapping algorithms of the QLQ-C30 onto the EQ-5D in a dataset of patients with malignant pleural mesothelioma, an invariably fatal malignancy for which no previous mapping estimate has been published. Health-related quality of life (HRQoL) data in which both the EQ-5D and QLQ-C30 were administered simultaneously were obtained from the UK-based prospective observational SWAMP (South West Area Mesothelioma and Pemetrexed) trial. In the original trial, 73 patients with pleural mesothelioma were offered palliative chemotherapy and their HRQoL was assessed across five time points. These data were used to test the nine available mapping algorithms found in the literature, comparing predicted against observed EQ-5D values. The ability of the algorithms to predict the mean, minimise error, and detect clinically significant differences was assessed. The dataset had a total of 250 observations across the 5 time points. The linear regression mapping algorithms tested generally performed poorly, over-estimating the predicted EQ-5D values compared with those observed, especially when the observed EQ-5D was below 0.5. The best-performing algorithm used a response mapping method and predicted the mean EQ-5D accurately, with an average root-mean-squared error of 0.17 (standard deviation 0.22). This algorithm reliably discriminated between clinically distinct subgroups seen in the primary dataset. This study tested mapping algorithms in a population with poor health states, where they have previously been shown to perform poorly. Further research into EQ-5D estimation should be directed at response mapping methods, given their superior performance in this study.

  13. Quantifying uncertainty in carbon and nutrient pools of coarse woody debris

    NASA Astrophysics Data System (ADS)

    See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.

    2016-12-01

    Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards and Technology. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
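
    A minimal Monte Carlo propagation of this kind, for a single log's carbon pool, might look like the sketch below. All distributions and error magnitudes are illustrative placeholders for the published and field-derived values the study draws on.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000  # Monte Carlo draws

      # Hypothetical log: measured length and two end diameters, with
      # assumed measurement-error SDs (values are illustrative only).
      length = rng.normal(8.0, 0.05, n)        # m
      d_big = rng.normal(0.40, 0.01, n)        # m
      d_small = rng.normal(0.25, 0.01, n)      # m

      # Smalian's formula: volume = length * (A_big + A_small) / 2
      area = lambda d: np.pi * (d / 2) ** 2
      volume = length * (area(d_big) + area(d_small)) / 2  # m^3

      # Decay-class-specific density and C concentration, drawn from
      # assumed distributions standing in for published values.
      density = rng.normal(250.0, 40.0, n)     # kg m^-3
      c_conc = rng.normal(0.50, 0.02, n)       # kg C per kg wood

      carbon = volume * density * c_conc       # kg C per log
      print(f"C pool: {carbon.mean():.1f} kg "
            f"(95% CI {np.percentile(carbon, 2.5):.1f}-"
            f"{np.percentile(carbon, 97.5):.1f})")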

  14. Estimating the volatilization of ammonia from synthetic nitrogenous fertilizers used in China.

    PubMed

    Zhang, Yisheng; Luan, Shengji; Chen, Liaoliao; Shao, Min

    2011-03-01

    Although it has long been recognized that significant amounts of nitrogen, typically in the form of ammonia (NH3) applied as fertilizer, are lost to the atmosphere, accurate estimates are lacking for many locations. In this study, a detailed, bottom-up method for estimating NH3 emissions from synthetic fertilizers in China was used. The total amount emitted in 2005 in China was estimated to be 3.55 Tg NH3-N, with an uncertainty of ±50%. This estimate was considerably lower than previously published values. Emissions from urea and ammonium bicarbonate accounted for 64.3% and 26.5%, respectively, of the 2005 total. The NH3 emission inventory incorporated 2448 county-level data points, categorized on a monthly basis, and was developed with more accurate activity levels and emission factors than had been used in previous assessments. There was considerable variability in the emissions within a province. The NH3 emissions generally peaked in the spring and summer, accounting for 30.1% and 48.8%, respectively, of total emissions in 2005. The peaks correlated with crop planting and fertilization schedules. The NH3 regional distribution pattern showed strong correspondence with planting techniques and local arable land areas. The regions with the highest atmospheric losses are located in eastern China, especially the North China Plain and the Taihu region.

  15. Water Vapor Winds and Their Application to Climate Change Studies

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Lerner, Jeffrey A.

    2000-01-01

    The retrieval of satellite-derived winds and moisture from geostationary water vapor imagery has matured to the point where it can be applied to the study of longer-term climate changes that previously could not be examined using conventional measurements or model analyses in data-sparse regions. In this paper, upper-tropospheric circulation features and moisture transport covering ENSO periods are presented and discussed. Precursors and other detectable interannual climate change signals are analyzed and compared to model-diagnosed features. Estimates of winds and humidity over data-rich regions are used to show the robustness of the data and its value over regions that have previously eluded measurement.

  16. State summaries: Colorado

    USGS Publications Warehouse

    Keller, J.; Carroll, C.; Widmann, B.

    2006-01-01

    According to the Colorado Geological Survey (CGS), Colorado's mining industry enjoyed a record-breaking year in 2005. The total value of nonfuel minerals, coal, and uranium produced in the state in 2005 amounted to $2.4 billion. The production value of $1.52 billion in the nonfuel sector broke the previous record of $1.3 billion set in 1980, and is 60% higher than the revised 2004 CGS estimate of $950.5 million. The United States Geological Survey (USGS) ranked Colorado ninth among the states in nonfuel mineral value, up from 17th in 2004. About $1 billion of the nonfuel total is from metal mining. New record-high production was achieved not only for molybdenum but also for coal and gold.

  17. Sequential Bayesian Filters for Estimating Time Series of Wrapped and Unwrapped Angles with Hyperparameter Estimation

    NASA Astrophysics Data System (ADS)

    Umehara, Hiroaki; Okada, Masato; Naruse, Yasushi

    2018-03-01

    The estimation of angular time series data is a widespread issue relating to various situations involving rotational motion and moving objects. There are two kinds of problem settings: the estimation of wrapped angles, which are principal values in a circular coordinate system (e.g., the direction of an object), and the estimation of unwrapped angles in an unbounded coordinate system such as for the positioning and tracking of moving objects measured by the signal-wave phase. Wrapped angles have been estimated in previous studies by sequential Bayesian filtering; however, the hyperparameters that control the properties of the estimation model, and that ought themselves to be estimated, were given a priori. The present study establishes a procedure for estimating the hyperparameters from the observed angle data alone, entirely within the framework of Bayesian inference, as a maximum likelihood estimation. Moreover, the filter model is modified to estimate the unwrapped angles. It is proved that, in the absence of noise, our model reduces to Itoh's existing unwrapping transform. It is numerically confirmed that our model extends unwrapping estimation from Itoh's transform to the noisy case.
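
    Itoh's unwrapping transform itself is compact: wrap the successive phase differences back into (-π, π], then integrate them. A noise-free sketch follows (equivalent to numpy's built-in unwrap); the Bayesian filtering extension for the noisy case is not reproduced here.

      import numpy as np

      def itoh_unwrap(wrapped):
          # Itoh's transform: wrap successive phase differences, then
          # cumulatively sum them starting from the first sample.
          d = np.diff(wrapped)
          d_wrapped = (d + np.pi) % (2 * np.pi) - np.pi
          return np.concatenate(([wrapped[0]],
                                 wrapped[0] + np.cumsum(d_wrapped)))

      # Noise-free check against numpy's built-in implementation.
      t = np.linspace(0, 4 * np.pi, 200)
      wrapped = np.angle(np.exp(1j * 1.5 * t))
      assert np.allclose(itoh_unwrap(wrapped), np.unwrap(wrapped))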

  18. Groundwater contaminant plume maps and volumes, 100-K and 100-N Areas, Hanford Site, Washington

    USGS Publications Warehouse

    Johnson, Kenneth H.

    2016-09-27

    This study provides an independent estimate of the areal and volumetric extent of groundwater contaminant plumes which are affected by waste disposal in the 100-K and 100-N Areas (study area) along the Columbia River Corridor of the Hanford Site. The Hanford Natural Resource Trustee Council requested that the U.S. Geological Survey perform this interpolation to assess the accuracy of delineations previously conducted by the U.S. Department of Energy and its contractors, in order to assure that the Natural Resource Damage Assessment could rely on these analyses. This study is based on previously existing chemical (or radionuclide) sampling and analysis data downloaded from publicly available Hanford Site Internet sources, geostatistically selected and interpreted as representative of current (from 2009 through part of 2012) but average conditions for groundwater contamination in the study area. The study is limited in scope to five contaminants—hexavalent chromium, tritium, nitrate, strontium-90, and carbon-14, all detected at concentrations greater than regulatory limits in the past. All recent analytical concentrations (or activities) for each contaminant, adjusted for radioactive decay, non-detections, and co-located wells, were converted to log-normal distributions and these transformed values were averaged for each well location. The log-normally linearized well averages were spatially interpolated on a 50 × 50-meter (m) grid extending across the combined 100-N and 100-K Areas study area but limited to avoid unrepresentative extrapolation, using the minimum curvature geostatistical interpolation method provided by SURFER® data analysis software. Plume extents were interpreted by interpolating the log-normally transformed data, again using SURFER®, along lines of equal contaminant concentration at an appropriate established regulatory concentration. Total areas for each plume were calculated as an indicator of relative environmental damage. These plume extents are shown graphically and in tabular form for comparison to previous estimates. Plume data also were interpolated to a finer grid (10 × 10 m) for some processing, particularly to estimate volumes of contaminated groundwater. However, hydrogeologic transport modeling was not considered for the interpolation. The compilation of plume extents for each contaminant also allowed estimates of overlap of the plumes or areas with more than one contaminant above regulatory standards. A mapping of saturated aquifer thickness also was derived across the 100-K and 100-N study area, based on the vertical difference between the groundwater level (water table) at the top and the altitude of the top of the Ringold Upper Mud geologic unit, considered the bottom of the uppermost unconfined aquifer. Saturated thickness was calculated for each cell in the finer (10 × 10 m) grid. The summation of the cells’ saturated thickness values within each polygon of plume regulatory exceedance provided an estimate of the total volume of contaminated aquifer, and the results also were checked using a SURFER® volumetric integration procedure. The total volume of contaminated groundwater in each plume was derived by multiplying the aquifer saturated thickness volume by a locally representative value of porosity (0.3). Estimates of the uncertainty of the plume delineation also are presented.
“Upper limit” plume delineations were calculated for each contaminant using the same procedure as the “average” plume extent except with values at each well that are set at a 95-percent upper confidence limit around the log-normally transformed mean concentrations, based on the standard error for the distribution of the mean value in that well; “lower limit” plumes are calculated at a 5-percent confidence limit around the geometric mean. These upper- and lower-limit estimates are considered unrealistic because the statistics were increased or decreased at each well simultaneously and were not adjusted for correlation among the well distributions (i.e., it is not realistic that all wells would be high simultaneously). Sources of the variability in the distributions used in the upper- and lower-extent maps include time varying concentrations and analytical errors.The plume delineations developed in this study are similar to the previous plume descriptions developed by U.S. Department of Energy and its contractors. The differences are primarily due to data selection and interpolation methodology. The differences in delineated plumes are not sufficient to result in the Hanford Natural Resource Trustee Council adjusting its understandings of contaminant impact or remediation.
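
    The core of the area and volume calculation described above can be sketched in a few lines. The snippet below substitutes SciPy's linear interpolation for SURFER's minimum-curvature method and uses made-up well values, a uniform saturated thickness, and an illustrative regulatory limit; only the porosity of 0.3 and the 10 × 10 m grid spacing are taken from the report.

      import numpy as np
      from scipy.interpolate import griddata

      # Hypothetical well data: coordinates (m) and log-transformed mean
      # concentrations; these values are placeholders, not Hanford data.
      rng = np.random.default_rng(1)
      wells_xy = rng.uniform(0, 1000, size=(30, 2))
      log_conc = rng.normal(np.log(40.0), 1.0, size=30)

      # 10 x 10 m grid, as in the finer processing grid.
      gx, gy = np.meshgrid(np.arange(0, 1000, 10.0),
                           np.arange(0, 1000, 10.0))
      grid_log = griddata(wells_xy, log_conc, (gx, gy), method="linear")

      limit = 48.0                         # illustrative regulatory limit
      in_plume = np.exp(grid_log) > limit  # NaN outside hull -> False

      cell_area = 10.0 * 10.0              # m^2
      sat_thickness = 5.0                  # m, uniform stand-in value
      porosity = 0.3
      plume_area = np.count_nonzero(in_plume) * cell_area
      water_volume = plume_area * sat_thickness * porosity
      print(f"plume area = {plume_area:.0f} m^2, "
            f"contaminated groundwater = {water_volume:.0f} m^3")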

  19. Appraising into the Sun: Six-State Solar Home Paired-Sale Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence Berkeley National Laboratory

    Although residential solar photovoltaic (PV) installations have proliferated, PV systems on some U.S. homes still receive no value during an appraisal because comparable home sales are lacking. To value residential PV, some previous studies have employed paired-sales appraisal methods to analyze small PV home samples in depth, while others have used statistical methods to analyze large samples. Our first-of-its-kind study connects the two approaches. It uses appraisal methods to evaluate sales price premiums for owned PV systems on single-unit detached houses that were also evaluated in a large statistical study. Independent appraisers evaluated 43 recent home sales pairs in six states: California, Oregon, Florida, Maryland, North Carolina, and Pennsylvania. We compare these results with contributory-value estimates—based on income (using the PV Value® tool), gross cost, and net cost—as well as hedonic modeling results from the recent statistical study. The results provide strong, appraisal-based evidence of PV premiums in all states. More importantly, the results support the use of cost- and income-based PV premium estimates when paired-sales analysis is impossible. PV premiums from the paired-sales analysis are most similar to net PV cost estimates. PV Value® income results generally track the appraised premiums, although conservatively. The appraised premiums are in agreement with the hedonic modeling results as well, which bolsters the suitability of both approaches for estimating PV home premiums. Therefore, these results will benefit valuation professionals and mortgage lenders who increasingly are encountering homes equipped with PV and need to understand the factors that can both contribute to and detract from market value.

  20. A study of the cost-effective markets for new technology agricultural aircraft

    NASA Technical Reports Server (NTRS)

    Hazelrigg, G. A., Jr.; Clyne, F.

    1979-01-01

    A previously developed database was used to estimate the regional and total U.S. cost-effective markets for a new technology agricultural aircraft incorporating features that could result from NASA-sponsored aerial applications research. The results show that the long-term market penetration of a new technology aircraft would be near 3,000 aircraft. This market penetration would be attained in approximately 20 years. Annual sales would be about 200 aircraft within 5 to 6 years of introduction. The net present value of the cost-savings benefit this aircraft would yield (measured on an infinite-horizon basis) would be about $35 million at a 10 percent discount rate and $120 million at a 5 percent discount rate. At both discount rates the present value of cost savings exceeds the present value of research and development (R&D) costs estimated for the development of the technology base needed for the proposed aircraft. These results are quite conservative as they have been derived neglecting future growth in the agricultural aviation industry, which has been averaging about 12 percent per year over the past several years.
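
    On an infinite horizon, a constant annual saving S discounted at rate r has present value S/r, which is the discounting arithmetic underlying the benefit figures quoted above. The annual saving below is a hypothetical round number chosen to reproduce the $35 million figure at 10 percent; the study's actual benefit stream is not constant, which is why its 5 percent figure ($120 million) exceeds the simple perpetuity value.

      # Perpetuity present value PV = S / r for a constant annual saving.
      savings = 3.5e6  # hypothetical annual cost saving, dollars
      for rate in (0.10, 0.05):
          print(f"r = {rate:.0%}: PV = ${savings / rate / 1e6:.0f} million")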

  1. Parent Stars of Extrasolar Planets. VII. New Abundance Analyses of 30 Systems

    NASA Astrophysics Data System (ADS)

    Laws, Chris; Gonzalez, Guillermo; Walker, Kyle M.; Tyagi, Sudhi; Dodsworth, Jeremey; Snider, Keely; Suntzeff, Nicholas B.

    2003-05-01

    The results of new spectroscopic analyses of 30 stars with giant planet and/or brown dwarf companions are presented. Values for Teff and [Fe/H] are used in conjunction with Hipparcos data and Padua isochrones to derive masses, ages, and theoretical surface gravities. These new data are combined with spectroscopic and photometric metallicity estimates of other stars harboring planets and published samples of F, G, and K dwarfs to compare several subsets of planet-bearing stars with similarly well-constrained control groups. The distribution of [Fe/H] values continues the trend uncovered in previous studies in that stars hosting planetary companions have a higher mean value than otherwise similar nearby stars. We also investigate the relationship between stellar mass and the presence of giant planets, and we find statistically marginal but suggestive evidence of a decrease in the incidence of radial velocity companions orbiting relatively less massive stars. If confirmed with larger samples, this would represent a critical constraint on both planetary formation models and estimates of the distribution of planetary systems in our Galaxy.

  2. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V.

    PubMed

    Wall, Michael; Zamba, Gideon K D; Artes, Paul H

    2018-01-01

    It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on “censored” datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.

  3. Duration analysis using matching pursuit algorithm reveals longer bouts of gamma rhythm.

    PubMed

    Chandran Ks, Subhash; Seelamantula, Chandra Sekhar; Ray, Supratim

    2018-03-01

    The gamma rhythm (30-80 Hz), often associated with high-level cortical functions, is believed to provide a temporal reference frame for spiking activity, for which it should have a stable center frequency and linear phase for an extended duration. However, recent studies that have estimated the power and phase of gamma as a function of time suggest that gamma occurs in short bursts and lacks the temporal structure required to act as a reference frame. Here, we show that the bursty appearance of gamma arises from the variability in the spectral estimator used in these studies. To overcome this problem, we use another duration estimator based on a matching pursuit algorithm that robustly estimates the duration of gamma in simulated data. Applying this algorithm to gamma oscillations recorded from implanted microelectrodes in the primary visual cortex of awake monkeys, we show that the median gamma duration is greater than 300 ms, which is three times longer than previously reported values. NEW & NOTEWORTHY Gamma oscillations (30-80 Hz) have been hypothesized to provide a temporal reference frame for coordination of spiking activity, but recent studies have shown that gamma occurs in very short bursts. We show that existing techniques have severely underestimated the rhythm duration, use a technique based on the Matching Pursuit algorithm, which provides a robust estimate of the duration, and show that the median duration of gamma is greater than 300 ms, much longer than previous estimates.

  4. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    DOE PAGES

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient’s tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.

  5. Duration analysis using matching pursuit algorithm reveals longer bouts of gamma rhythm

    PubMed Central

    Chandran KS, Subhash; Seelamantula, Chandra Sekhar

    2018-01-01

    The gamma rhythm (30–80 Hz), often associated with high-level cortical functions, is believed to provide a temporal reference frame for spiking activity, for which it should have a stable center frequency and linear phase for an extended duration. However, recent studies that have estimated the power and phase of gamma as a function of time suggest that gamma occurs in short bursts and lacks the temporal structure required to act as a reference frame. Here, we show that the bursty appearance of gamma arises from the variability in the spectral estimator used in these studies. To overcome this problem, we use another duration estimator based on a matching pursuit algorithm that robustly estimates the duration of gamma in simulated data. Applying this algorithm to gamma oscillations recorded from implanted microelectrodes in the primary visual cortex of awake monkeys, we show that the median gamma duration is greater than 300 ms, which is three times longer than previously reported values. NEW & NOTEWORTHY Gamma oscillations (30–80 Hz) have been hypothesized to provide a temporal reference frame for coordination of spiking activity, but recent studies have shown that gamma occurs in very short bursts. We show that existing techniques have severely underestimated the rhythm duration, use a technique based on the Matching Pursuit algorithm, which provides a robust estimate of the duration, and show that the median duration of gamma is greater than 300 ms, much longer than previous estimates. PMID:29118193

  6. Electron electric dipole moment and hyperfine interaction constants for ThO

    NASA Astrophysics Data System (ADS)

    Fleig, Timo; Nayak, Malaya K.

    2014-06-01

    A recently implemented relativistic four-component configuration interaction approach to study P- and T-odd interaction constants in atoms and molecules is employed to determine the electron electric dipole moment effective electric field in the Ω=1 first excited state of the ThO molecule. We obtain a value of Eeff = 75.2 GV/cm with an estimated error bar of 3%, a value 10% smaller than a previously reported result (Skripnikov et al., 2013). Using the same wavefunction model we obtain an excitation energy of Tv(Ω=1) = 5410 cm^-1, in accord with the experimental value within 2%. In addition, we report the implementation of the magnetic hyperfine interaction constant A|| as an expectation value, resulting in A|| = -1339 MHz for the Ω=1 state in ThO. The smaller effective electric field increases the previously determined upper bound (Baron et al., 2014) on the electron electric dipole moment to |de| < 9.7×10^-29 e cm and thus mildly mitigates constraints to possible extensions of the Standard Model of particle physics.

  7. New production in the warm waters of the tropical Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Pena, M. Angelica; Lewis, Marlon R.; Cullen, John J.

    1994-01-01

    The average depth-integrated rate of new production in the tropical Pacific Ocean was estimated from a calculation of horizontal and vertical nitrate balance over the region enclosed by the climatological 26 C isotherm. The net turbulent flux of nitrate into the region was computed in terms of the climatological net surface heat flux and the nitrate-temperature relationship at the base of the 26 C isotherm. The net advective transport of nitrate into the region was estimated using the mean nitrate distribution obtained from the analysis of historical data and previous results of a general circulation model of the tropical Pacific. The rate of new production resulting from vertical turbulent fluxes of nitrate was found to be similar in magnitude to that due to advective transport. Most (about 75%) of the advective input of nitrate was due to the horizontal transport of nutrient-rich water from the eastern equatorial region rather than from equatorial upwelling. An average rate of new production of 14.5 - 16 g C/sq m/yr was found for the warm waters of the tropical Pacific region. These values are in good agreement with previous estimates for this region and are almost five times less than is estimated for the eastern equatorial Pacific, where most of the nutrient upwelling occurs.

  8. Identifying Genetic Signatures of Natural Selection Using Pooled Population Sequencing in Picea abies

    PubMed Central

    Chen, Jun; Källman, Thomas; Ma, Xiao-Fei; Zaina, Giusi; Morgante, Michele; Lascoux, Martin

    2016-01-01

    The joint inference of selection and past demography remains a costly and demanding task. We used next generation sequencing of two pools of 48 Norway spruce mother trees, one corresponding to the Fennoscandian domain, and the other to the Alpine domain, to assess nucleotide polymorphism at 88 nuclear genes. These genes are candidate genes for phenological traits, and most belong to the photoperiod pathway. Estimates of population genetic summary statistics from the pooled data are similar to previous estimates, suggesting that pooled sequencing is reliable. The nonsynonymous SNPs tended to have both lower frequency differences and lower FST values between the two domains than silent ones. These results suggest the presence of purifying selection. The divergence between the two domains based on synonymous changes was around 5 million yr, a time similar to a recent phylogenetic estimate of 6 million yr, but much larger than earlier estimates based on isozymes. Two approaches, one of them novel and that considers both FST and difference in allele frequencies between the two domains, were used to identify SNPs potentially under diversifying selection. SNPs from around 20 genes were detected, including genes previously identified as main targets for selection, such as PaPRR3 and PaGI. PMID:27172202

  9. Identifying Genetic Signatures of Natural Selection Using Pooled Population Sequencing in Picea abies.

    PubMed

    Chen, Jun; Källman, Thomas; Ma, Xiao-Fei; Zaina, Giusi; Morgante, Michele; Lascoux, Martin

    2016-07-07

    The joint inference of selection and past demography remains a costly and demanding task. We used next generation sequencing of two pools of 48 Norway spruce mother trees, one corresponding to the Fennoscandian domain, and the other to the Alpine domain, to assess nucleotide polymorphism at 88 nuclear genes. These genes are candidate genes for phenological traits, and most belong to the photoperiod pathway. Estimates of population genetic summary statistics from the pooled data are similar to previous estimates, suggesting that pooled sequencing is reliable. The nonsynonymous SNPs tended to have both lower frequency differences and lower FST values between the two domains than silent ones. These results suggest the presence of purifying selection. The divergence between the two domains based on synonymous changes was around 5 million yr, a time similar to a recent phylogenetic estimate of 6 million yr, but much larger than earlier estimates based on isozymes. Two approaches, one of them novel and that considers both FST and difference in allele frequencies between the two domains, were used to identify SNPs potentially under diversifying selection. SNPs from around 20 genes were detected, including genes previously identified as main targets for selection, such as PaPRR3 and PaGI.

  10. Modeling the erythemal surface diffuse irradiance fraction for Badajoz, Spain

    NASA Astrophysics Data System (ADS)

    Sanchez, Guadalupe; Serrano, Antonio; Cancillo, María Luisa

    2017-10-01

    Despite its important role in human health and numerous biological processes, the diffuse component of the erythemal ultraviolet irradiance (UVER) is scarcely measured at standard radiometric stations and therefore needs to be estimated. This study proposes and compares 10 empirical models to estimate the UVER diffuse fraction. These models are inspired by mathematical expressions originally used to estimate total diffuse fraction, but, in this study, they are applied to the UVER case and tested against experimental measurements. In addition to adapting the various independent variables involved in these models to the UVER range, the total ozone column has been added in order to account for its strong impact on the attenuation of ultraviolet radiation. The proposed models are fitted to experimental measurements and validated against an independent subset. The best-performing model (RAU3) is based on a model proposed by Ruiz-Arias et al. (2010) and shows values of r2 equal to 0.91 and relative root-mean-square error (rRMSE) equal to 6.1%. The performance achieved by this entirely empirical model is better than those obtained by previous semi-empirical approaches and therefore needs no additional information from other physically based models. This study extends previous research to the ultraviolet range and provides reliable empirical models to accurately estimate the UVER diffuse fraction.
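
    As a sketch of what fitting such an empirical model involves, the snippet below fits a sigmoid diffuse-fraction curve, loosely in the spirit of the Ruiz-Arias et al. (2010) form, with a total-ozone term added. The functional form, coefficients, and synthetic data are all assumptions rather than the RAU3 model itself.

      import numpy as np
      from scipy.optimize import curve_fit

      def kd_model(X, a, b, c, k0, d):
          kt, toc = X  # clearness index and total ozone column (DU)
          shift = k0 + d * (toc - 300.0) / 300.0
          return a + b / (1.0 + np.exp(c * (kt - shift)))

      # Synthetic "measurements" standing in for the UVER dataset.
      rng = np.random.default_rng(7)
      kt = rng.uniform(0.1, 0.8, 500)
      toc = rng.uniform(250.0, 400.0, 500)
      kd = kd_model((kt, toc), 0.15, 0.85, 12.0, 0.45, 0.05)
      kd += rng.normal(0.0, 0.02, kt.size)

      p, _ = curve_fit(kd_model, (kt, toc), kd,
                       p0=(0.1, 0.9, 10.0, 0.4, 0.0))
      pred = kd_model((kt, toc), *p)
      rrmse = 100 * np.sqrt(np.mean((pred - kd) ** 2)) / kd.mean()
      print(f"rRMSE = {rrmse:.1f}%")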

  11. NASA, Navy, and AES/York sea ice concentration comparison of SSM/I algorithms with SAR derived values

    NASA Technical Reports Server (NTRS)

    Jentz, R. R.; Wackerman, C. C.; Shuchman, R. A.; Onstott, R. G.; Gloersen, Per; Cavalieri, Don; Ramseier, Rene; Rubinstein, Irene; Comiso, Joey; Hollinger, James

    1991-01-01

    Previous research studies have focused on producing algorithms for extracting geophysical information from passive microwave data regarding ice floe size, sea ice concentration, open water lead locations, and sea ice extent. These studies have resulted in four separate algorithms for extracting these geophysical parameters. Sea ice concentration estimates generated from each of these algorithms (i.e., NASA/Team, NASA/Comiso, AES/York, and Navy) are compared to ice concentration estimates produced from coincident high-resolution synthetic aperture radar (SAR) data. The SAR concentration estimates are produced from data collected in both the Beaufort Sea and the Greenland Sea in March 1988 and March 1989, respectively. The SAR data are coincident with the passive microwave data generated by the Special Sensor Microwave/Imager (SSM/I).

  12. [ESTIMATION OF IONIZING RADIATION EFFECTIVE DOSES IN THE INTERNATIONAL SPACE STATION CREWS BY THE METHOD OF CALCULATION MODELING].

    PubMed

    Mitrikas, V G

    2015-01-01

    Monitoring of the radiation loading on cosmonauts requires calculation of absorbed dose dynamics with regard to the stay of cosmonauts in specific compartments of the space vehicle that differ in shielding properties and lack means of radiation measurement. The paper discusses different aspects of calculation modeling of radiation effects on human body organs and tissues and reviews the effective dose estimates for cosmonauts working in one or another compartment over the previous period of International Space Station operation. It was demonstrated that doses measured by real or personal dosimeters can be used to calculate effective dose values. Correct estimation of the accumulated effective dose can be ensured by taking into account the time course of the space radiation quality factor.

  13. Stochastic precision analysis of 2D cardiac strain estimation in vivo

    NASA Astrophysics Data System (ADS)

    Bunting, E. A.; Provost, J.; Konofagou, E. E.

    2014-11-01

    Ultrasonic strain imaging has been applied to echocardiography and carries great potential to be used as a tool in the clinical setting. Two-dimensional (2D) strain estimation may be useful when studying the heart due to the complex, 3D deformation of the cardiac tissue. Increasing the frame rate used for motion estimation, i.e., the motion estimation rate (MER), has been shown to improve the precision of the strain estimation, although maintaining the spatial resolution necessary to view the entire heart structure in a single heartbeat remains challenging at high MERs. Two previously developed methods, the temporally unequispaced acquisition sequence (TUAS) and the diverging beam sequence (DBS), have been used to successfully estimate in vivo axial strain at high MERs without compromising spatial resolution. In this study, a stochastic assessment of 2D strain estimation precision is performed in vivo for both sequences at varying MERs (65, 272, 544, 815 Hz for TUAS; 250, 500, 1000, 2000 Hz for DBS). 2D incremental strains were estimated during left ventricular contraction in five healthy volunteers using a normalized cross-correlation function and a least-squares strain estimator. Both sequences were shown capable of estimating 2D incremental strains in vivo. The conditional expected value of the elastographic signal-to-noise ratio (E(SNRe|ɛ)) was used to compare strain estimation precision of both sequences at multiple MERs over a wide range of clinical strain values. The results here indicate that axial strain estimation precision is much more dependent on MER than lateral strain estimation, while lateral estimation is more affected by strain magnitude. MER should be increased at least above 544 Hz to avoid suboptimal axial strain estimation. Radial and circumferential strain estimations were influenced by the axial and lateral strain in different ways. Furthermore, the TUAS and DBS were found to be of comparable precision at similar MERs.

  14. An evaluation of study design for estimating a time-of-day noise weighting

    NASA Technical Reports Server (NTRS)

    Fields, J. M.

    1986-01-01

    The relative importance of daytime and nighttime noise of the same noise level is represented by a time-of-day weight in noise annoyance models. The high correlations between daytime and nighttime noise were regarded as a major reason that previous social surveys of noise annoyance could not accurately estimate the value of the time-of-day weight. Study designs which would reduce the correlation between daytime and nighttime noise are described. It is concluded that designs based on short term variations in nighttime noise levels would not be able to provide valid measures of response to nighttime noise. The accuracy of the estimate of the time-of-day weight is predicted for designs which are based on long term variations in nighttime noise levels. For these designs it is predicted that it is not possible to form satisfactorily precise estimates of the time-of-day weighting.

  15. Quantum metrology and estimation of Unruh effect

    PubMed Central

    Wang, Jieci; Tian, Zehua; Jing, Jiliang; Fan, Heng

    2014-01-01

    We study quantum metrology for a pair of entangled Unruh-DeWitt detectors when one of them is accelerated and coupled to a massless scalar field. Compared with previous schemes, our model requires only local interaction and avoids the use of cavities in the probe state preparation process. We show that the probe state preparation and the interaction between the accelerated detector and the external field have significant effects on the value of the quantum Fisher information and correspondingly set different ultimate limits of precision in the estimation of the Unruh effect. We find that the precision of the estimation can be improved by a larger effective coupling strength and a longer interaction time. Alternatively, the energy gap of the detector has a range that can provide better precision. Thus we may adjust those parameters and attain higher precision in the estimation. We also find that an extremely high acceleration is not required in the quantum metrology process. PMID:25424772

  16. Estimation of descriptive statistics for multiply censored water quality data

    USGS Publications Warehouse

    Helsel, Dennis R.; Cohn, Timothy A.

    1988-01-01

    This paper extends the work of Gilliom and Helsel (1986) on procedures for estimating descriptive statistics of water quality data that contain “less than” observations. Previously, procedures were evaluated when only one detection limit was present. Here we investigate the performance of estimators for data that have multiple detection limits. Probability plotting and maximum likelihood methods perform substantially better than simple substitution procedures now commonly in use. Therefore simple substitution procedures (e.g., substitution of the detection limit) should be avoided. Probability plotting methods are more robust than maximum likelihood methods to misspecification of the parent distribution and their use should be encouraged in the typical situation where the parent distribution is unknown. When utilized correctly, “less than” values frequently contain nearly as much information for estimating population moments and quantiles as would the same observations had the detection limit been below them.
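
    A minimal version of the probability-plotting idea (regression on order statistics), simplified here to a single detection limit rather than the multiple-limit case treated in the paper, is sketched below with synthetic data. The plotting-position constants and distributional choice are illustrative.

      import numpy as np
      from scipy import stats

      def ros_lognormal(detects, n_censored):
          # Detected values occupy the upper ranks; censored ones the lower.
          n = len(detects) + n_censored
          order = np.sort(np.asarray(detects))
          pp = (np.arange(n_censored + 1, n + 1) - 0.375) / (n + 0.25)
          slope, intercept, *_ = stats.linregress(stats.norm.ppf(pp),
                                                  np.log(order))
          # Impute the censored observations from the fitted line, then
          # compute moments on the combined sample.
          pp_cens = (np.arange(1, n_censored + 1) - 0.375) / (n + 0.25)
          imputed = np.exp(intercept + slope * stats.norm.ppf(pp_cens))
          full = np.concatenate([imputed, order])
          return full.mean(), full.std(ddof=1)

      rng = np.random.default_rng(3)
      detects = np.exp(rng.normal(0.8, 0.6, 12))  # values above the limit
      print(ros_lognormal(detects, n_censored=8))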

  17. The influence of preburial insect access on the decomposition rate.

    PubMed

    Bachmann, Jutta; Simmons, Tal

    2010-07-01

    This study compared total body score (TBS) in buried remains (35 cm depth) with and without insect access prior to burial. Sixty rabbit carcasses were exhumed at 50 accumulated degree day (ADD) intervals. Weight loss, TBS, intra-abdominal decomposition, carcass/soil interface temperature, and below-carcass soil pH were recorded and analyzed. Results showed significant differences (p < 0.001) in decomposition rates between carcasses with and without insect access prior to burial. An approximately 30% enhanced decomposition rate with insects was observed. TBS was the most valid tool in postmortem interval (PMI) estimation. All other variables showed only weak relationships to decomposition stages, adding little value to PMI estimation. Although progress in estimating the PMI for surface remains has been made, no previous studies have accomplished this for buried remains. This study builds a framework to which further comparable studies can contribute, to produce predictive models for PMI estimation in buried human remains.

  18. Thunderstorm vertical velocities and mass flux estimated from satellite data

    NASA Technical Reports Server (NTRS)

    Adler, R. F.; Fenn, D. D.

    1979-01-01

    Infrared geosynchronous satellite data with an interval of five minutes between images are used to estimate thunderstorm top ascent rates on two case study days. A mean vertical velocity of 3.5 m/s for 19 clouds is calculated at a height of 8.7 km. This upward motion is representative of an area of approximately 10 km on a side. Thunderstorm mass flux of approximately 2×10^11 g/s is calculated, which compares favorably with previous estimates. There is a significant difference in the mean calculated vertical velocity between elements associated with severe weather reports (w-bar = 4.6 m/s) and those with no such reports (2.5 m/s). Calculations were made using a velocity profile for an axially symmetric jet to estimate the peak updraft velocity. For the largest observed w value of 7.8 m/s the calculation indicates a peak updraft of approximately 50 m/s.

  19. Nuclear DNA amounts in angiosperms.

    PubMed

    Bennett, M D; Smith, J B

    1976-05-27

    The number of angiosperm species for which nuclear DNA amount estimates have been made has nearly trebled since the last collected lists of such values were published, and therefore, publication of a more comprehensive list is overdue. This paper lists absolute nuclear DNA amounts for 753 angiosperm species. The data were assembled primarily for reference purposes, and so the species are listed in alphabetical order, as this was felt to be more helpful to cyto- and biochemists who, it is anticipated, will be among its major users. The paper also reviews aspects of the history, nomenclature, methods, accuracy and problems of nuclear DNA estimation in angiosperms. No attempt is made to reconsider those aspects of nuclear DNA estimation which have been fully reviewed previously, although the bibliography of such aspects is given. Instead, the paper is intended as a source of basic information regarding the terminology, practice and limitations of nuclear DNA estimation, especially by Feulgen microdensitometry, as currently practiced.

  20. Novel blood pressure and pulse pressure estimation based on pulse transit time and stroke volume approximation.

    PubMed

    Lee, Joonnyong; Sohn, JangJay; Park, Jonghyun; Yang, SeungMan; Lee, Saram; Kim, Hee Chan

    2018-06-18

    Non-invasive continuous blood pressure monitors are of great interest to the medical community due to their value in hypertension management. Recently, studies have shown the potential of pulse pressure as a therapeutic target for hypertension, but not enough attention has been given to non-invasive continuous monitoring of pulse pressure. Although accurate pulse pressure estimation can be of direct value to hypertension management and indirectly to the estimation of systolic blood pressure, as it is the sum of pulse pressure and diastolic blood pressure, only a few inadequate methods of pulse pressure estimation have been proposed. We present a novel, non-invasive blood pressure and pulse pressure estimation method based on pulse transit time and pre-ejection period. Pre-ejection period and pulse transit time were measured non-invasively using electrocardiogram, seismocardiogram, and photoplethysmogram measured from the torso. The proposed method used the 2-element Windkessel model to model pulse pressure with the ratio of stroke volume, approximated by pre-ejection period, and arterial compliance, estimated by pulse transit time. Diastolic blood pressure was estimated using pulse transit time, and systolic blood pressure was estimated as the sum of the two estimates. The estimation method was verified in 11 subjects in two separate conditions with induced cardiovascular response and the results were compared against a reference measurement and values obtained from a previously proposed method. The proposed method yielded high agreement with the reference (pulse pressure correlation with reference R ≥ 0.927, diastolic blood pressure correlation with reference R ≥ 0.854, systolic blood pressure correlation with reference R ≥ 0.914) and high estimation accuracy in pulse pressure (mean root-mean-squared error ≤ 3.46 mmHg) and blood pressure (mean root-mean-squared error ≤ 6.31 mmHg for diastolic blood pressure and ≤ 8.41 mmHg for systolic blood pressure) over a wide range of hemodynamic changes. The proposed pulse pressure estimation method provides accurate estimates in situations with and without significant changes in stroke volume. The proposed method improves upon the currently available systolic blood pressure estimation methods by providing accurate pulse pressure estimates.
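
    The structure of the estimator described here can be sketched as below. The functional forms and every calibration coefficient are hypothetical stand-ins (in practice they would be fitted per subject against a cuff reference); only the overall logic follows the paper: pulse pressure from a stroke-volume proxy over a PTT-derived compliance, diastolic pressure from PTT, and systolic pressure as their sum.

      # pep, ptt in milliseconds; returns pressures in mmHg.
      def estimate_bp(pep, ptt, a=4.0e4, b=0.05, c=1.4e4, d=10.0):
          sv_proxy = a / pep          # stroke volume surrogate from PEP
          compliance = b * ptt        # stiffer artery -> shorter PTT
          pp = sv_proxy / compliance  # 2-element Windkessel: PP ~ SV / C
          dbp = c / ptt + d           # inverse-PTT diastolic calibration
          return dbp, pp, dbp + pp    # SBP = DBP + PP

      # Example: a shift toward shorter PEP/PTT (e.g., vasoconstriction)
      # raises all three estimates.
      for pep, ptt in [(100.0, 200.0), (90.0, 180.0)]:
          dbp, pp, sbp = estimate_bp(pep, ptt)
          print(f"PEP {pep:.0f} ms, PTT {ptt:.0f} ms -> "
                f"DBP {dbp:.0f}, PP {pp:.0f}, SBP {sbp:.0f} mmHg")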

  1. State and Parameter Estimation for a Coupled Ocean--Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Ghil, M.; Kondrashov, D.; Sun, C.

    2006-12-01

    The El-Nino/Southern-Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean--atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean--atmosphere GCMs will be discussed.
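
    The state-augmentation idea behind EKF parameter estimation can be seen in a toy scalar system: append the unknown parameter to the state vector, give it a small random walk, and let the filter update both jointly from the observations. Everything below (model, noise levels, values) is illustrative, not the coupled ENSO model.

      import numpy as np

      # True system: x_{k+1} = a * x_k + w_k, observed as y_k = x_k + v_k.
      rng = np.random.default_rng(0)
      a_true, q, r, n = 0.95, 0.01, 0.1, 500
      x, ys = 1.0, []
      for _ in range(n):
          x = a_true * x + rng.normal(0, np.sqrt(q))
          ys.append(x + rng.normal(0, np.sqrt(r)))

      z = np.array([0.0, 0.5])       # augmented state [x, a], poor prior for a
      P = np.diag([1.0, 1.0])
      Q = np.diag([q, 1e-6])         # small noise lets the parameter drift
      H = np.array([[1.0, 0.0]])
      for y in ys:
          # Predict: f(z) = [a*x, a]; Jacobian F = [[a, x], [0, 1]].
          F = np.array([[z[1], z[0]], [0.0, 1.0]])
          z = np.array([z[1] * z[0], z[1]])
          P = F @ P @ F.T + Q
          # Update with the scalar observation y = x + v.
          S = H @ P @ H.T + r
          K = (P @ H.T) / S
          z = z + (K * (y - z[0])).ravel()
          P = (np.eye(2) - K @ H) @ P
      print(f"estimated a = {z[1]:.3f} (true {a_true})")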

  2. Error induced by the estimation of the corneal power and the effective lens position with a rotationally asymmetric refractive multifocal intraocular lens

    PubMed Central

    Piñero, David P.; Camps, Vicente J.; Ramón, María L.; Mateo, Verónica; Pérez-Cambrodí, Rafael J.

    2015-01-01

    AIM To evaluate the prediction error in intraocular lens (IOL) power calculation for a rotationally asymmetric refractive multifocal IOL and the impact on this error of the optimization of the keratometric estimation of the corneal power and the prediction of the effective lens position (ELP). METHODS Retrospective study including a total of 25 eyes of 13 patients (age, 50 to 83y) with previous cataract surgery with implantation of the Lentis Mplus LS-312 IOL (Oculentis GmbH, Germany). In all cases, an adjusted IOL power (PIOLadj) was calculated based on Gaussian optics using a variable keratometric index value (nkadj) for the estimation of the corneal power (Pkadj) and on a new value for ELP (ELPadj) obtained by multiple regression analysis. This PIOLadj was compared with the IOL power implanted (PIOLReal) and the value proposed by three conventional formulas (Haigis, Hoffer Q and Holladay I). RESULTS PIOLReal was not significantly different than PIOLadj and Holladay IOL power (P>0.05). In the Bland and Altman analysis, PIOLadj showed lower mean difference (-0.07 D) and limits of agreement (of 1.47 and -1.61 D) when compared to PIOLReal than the IOL power value obtained with the Holladay formula. Furthermore, ELPadj was significantly lower than ELP calculated with other conventional formulas (P<0.01) and was found to be dependent on axial length, anterior chamber depth and Pkadj. CONCLUSION Refractive outcomes after cataract surgery with implantation of the multifocal IOL Lentis Mplus LS-312 can be optimized by minimizing the keratometric error and by estimating ELP using a mathematical expression dependent on anatomical factors. PMID:26085998

  3. Error induced by the estimation of the corneal power and the effective lens position with a rotationally asymmetric refractive multifocal intraocular lens.

    PubMed

    Piñero, David P; Camps, Vicente J; Ramón, María L; Mateo, Verónica; Pérez-Cambrodí, Rafael J

    2015-01-01

    To evaluate the prediction error in intraocular lens (IOL) power calculation for a rotationally asymmetric refractive multifocal IOL and the impact on this error of the optimization of the keratometric estimation of the corneal power and the prediction of the effective lens position (ELP). Retrospective study including a total of 25 eyes of 13 patients (age, 50 to 83y) with previous cataract surgery with implantation of the Lentis Mplus LS-312 IOL (Oculentis GmbH, Germany). In all cases, an adjusted IOL power (PIOLadj) was calculated based on Gaussian optics using a variable keratometric index value (nkadj) for the estimation of the corneal power (Pkadj) and on a new value for ELP (ELPadj) obtained by multiple regression analysis. This PIOLadj was compared with the IOL power implanted (PIOLReal) and the value proposed by three conventional formulas (Haigis, Hoffer Q and Holladay I). PIOLReal was not significantly different than PIOLadj and Holladay IOL power (P>0.05). In the Bland and Altman analysis, PIOLadj showed lower mean difference (-0.07 D) and limits of agreement (of 1.47 and -1.61 D) when compared to PIOLReal than the IOL power value obtained with the Holladay formula. Furthermore, ELPadj was significantly lower than ELP calculated with other conventional formulas (P<0.01) and was found to be dependent on axial length, anterior chamber depth and Pkadj. Refractive outcomes after cataract surgery with implantation of the multifocal IOL Lentis Mplus LS-312 can be optimized by minimizing the keratometric error and by estimating ELP using a mathematical expression dependent on anatomical factors.
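
    The Gaussian-optics calculation underlying an adjusted IOL power can be sketched with the standard thin-lens vergence formula. The keratometric index, radius, axial length, and ELP below are illustrative inputs; the paper's regression for ELPadj is not reproduced.

      def corneal_power(r_mm, nk=1.3324):
          # Keratometric corneal power (dioptres) from the anterior
          # radius; nk is an illustrative keratometric index value.
          return 1000.0 * (nk - 1.0) / r_mm

      def iol_power(al_mm, k_diopt, elp_mm):
          # Thin-lens vergence formula (aqueous index 1.336): IOL power
          # for emmetropia given axial length and effective lens
          # position, both in millimetres.
          return (1336.0 / (al_mm - elp_mm)
                  - 1336.0 / (1336.0 / k_diopt - elp_mm))

      k = corneal_power(7.8)                   # about 42.6 D
      print(f"P_IOL = {iol_power(23.5, k, 5.0):.2f} D")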

  4. Filler Network Model of Filled Rubber Materials to Estimate System Size Dependence of Two-Dimensional Small-Angle Scattering Patterns

    NASA Astrophysics Data System (ADS)

    Hagita, Katsumi; Tominaga, Tetsuo; Hatazoe, Takumi; Sone, Takuo; Takano, Hiroshi

    2018-01-01

    We proposed a filler network toy (FN-toy) model in order to approximately forecast changes in two-dimensional scattering patterns (2DSPs) of nanoparticles (NPs) in crosslinked polymer networks in ultrasmall-angle X-ray scattering (USAXS) experiments under uniaxial elongation. It enables us to estimate the system size dependence of the 2DSP of the NPs. In the FN-toy model, we considered NPs connected by harmonic springs with excluded-volume interactions among the NPs. In this study, we used the NP configurations estimated by reverse Monte Carlo (RMC) analysis for USAXS data observed in SPring-8 experiments on filler-filled styrene butadiene rubber (SBR). In the FN-toy model, we set a bond between every pair of NPs whose distance is less than Cd, where d is the diameter of an NP and C is a parameter that characterizes network properties. We determined the optimal value of C by comparison with 2DSPs of the NPs at 200% elongation for end-modified and unmodified SBR. These 2DSPs are obtained from the results of a large-scale coarse-grained molecular dynamics (CGMD) simulation with 8,192 NPs and 160 million Lennard-Jones (LJ) particles in previous works. For the end-modified SBR, the fitted value is C = 1.367 and for the unmodified SBR, C = 1.258. The difference in C can be regarded as originating from the difference in polymer-NP interactions. We found that the harmonic potential used in the current FN-toy model is not sufficient to reproduce stress-strain curves and local structures of NPs obtained in the previous CGMD simulations, although the FN-toy model can reproduce the 2DSPs. Using the FN-toy model with the fitted value of C, we calculated the 2DSPs of 65,536 and 524,288 NPs, whose initial positions were estimated by RMC analysis for the same USAXS data. It was found that CGMD simulations with 10 billion LJ particles and 524,288 NPs can provide a high-resolution 2DSP that is comparable to the 2DSP observed in USAXS experiments.
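
    The bonding rule is simple to state in code: place a harmonic bond between every NP pair closer than C times the diameter. The sketch below uses random coordinates in place of the RMC-estimated configurations and a brute-force distance matrix (a cell list would be needed at the paper's scale); the value C = 1.367 is the fitted end-modified SBR value quoted above.

      import numpy as np

      rng = np.random.default_rng(0)
      pos = rng.uniform(0.0, 20.0, size=(500, 3))  # NP centers (units of d)
      d = 1.0                                      # NP diameter
      C = 1.367                                    # fitted, end-modified SBR

      # Brute-force pair distances; bond every pair closer than C*d.
      delta = pos[:, None, :] - pos[None, :, :]
      dist = np.linalg.norm(delta, axis=-1)
      i, j = np.triu_indices(len(pos), k=1)
      bonds = [(a, b) for a, b in zip(i, j) if dist[a, b] < C * d]
      print(f"{len(bonds)} harmonic bonds among {len(pos)} NPs")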

  5. Prediction future asset price which is non-concordant with the historical distribution

    NASA Astrophysics Data System (ADS)

    Seong, Ng Yew; Hin, Pooi Ah

    2015-12-01

    This paper attempts to predict the major characteristics of a future asset price that is non-concordant with the distribution estimated from today's price and the prices on a large number of previous days. The three major characteristics of the i-th non-concordant asset price are the length of the interval between the occurrence time of the previous non-concordant asset price and that of the present non-concordant asset price, the indicator which denotes whether the non-concordant price is extremely small or large by its values -1 and 1, respectively, and the degree of non-concordance given by the negative logarithm of the probability of the left tail or right tail of which one of the end points is given by the observed future price. The vector of three major characteristics of the next non-concordant price is modelled to be dependent on the vectors corresponding to the present and l - 1 previous non-concordant prices via a 3-dimensional conditional distribution which is derived from a 3(l + 1)-dimensional power-normal mixture distribution. The marginal distribution for each of the three major characteristics can then be derived from the conditional distribution. The mean of the j-th marginal distribution is an estimate of the value of the j-th characteristic of the next non-concordant price. Meanwhile, the 100(α/2)% and 100(1 - α/2)% points of the j-th marginal distribution can be used to form a prediction interval for the j-th characteristic of the next non-concordant price. The performance measures of the above estimates and prediction intervals indicate that the fitted conditional distribution is satisfactory. Thus the incorporation of the distribution of the characteristics of the next non-concordant price in the model for asset price has good potential to yield a more realistic model.

  6. Incineration of different types of medical wastes: emission factors for gaseous emissions

    NASA Astrophysics Data System (ADS)

    Alvim-Ferraz, M. C. M.; Afonso, S. A. V.

    Previous research showed that, to protect public health, hospital incinerators should be equipped with air pollution control devices. As most hospital incinerators do not possess such equipment, efficient methodologies should be developed to evaluate the safety of the incineration procedure. Emission factors (EF) can be used for easy estimation of legal parameters. Nevertheless, the actual knowledge is yet very scarce, mainly because EF previously published do not include enough information about the incinerated waste composition, besides considering many different waste classifications. This paper reports the first EF estimated for CO, SO2, NOx and HCl associated with the incineration of medical waste, segregated in different types according to the classification of the Portuguese legislation. The results showed that those EF are strongly influenced by incinerated waste composition, directly affected by incinerated waste type, waste classification, segregation practice and management methodology. The correspondence between different waste classifications was analysed by comparing the estimated EF with the sole results previously published for specific waste types, and it was observed that the correspondence is not always possible. The legal limit for pollutant concentrations could be obeyed for NOx, but concentrations were higher than the limit for CO (11-24 times), SO2 (2-5 times), and HCl (9-200 times), confirming that air pollution control devices must be used to protect human health. The low heating value of medical wastes subject to compulsory incineration meant that a larger amount of auxiliary fuel was required for their incineration, which affects the emitted amounts of CO, NOx and SO2 (28%, 20% and practically 100% of the respective values were related to fuel combustion). Nevertheless, the incineration of those wastes led to the smallest amount of emitted pollutants, with the emitted amounts of SO2 and NOx reduced by 93% and the emitted amounts of CO and HCl by more than 99%.

  7. A Particle Batch Smoother Approach to Snow Water Equivalent Estimation

    NASA Technical Reports Server (NTRS)

    Margulis, Steven A.; Girotto, Manuela; Cortes, Gonzalo; Durand, Michael

    2015-01-01

    This paper presents a newly proposed data assimilation method for historical snow water equivalent (SWE) estimation using remotely sensed fractional snow-covered area (fSCA). The newly proposed approach consists of a particle batch smoother (PBS), which is compared to a previously applied Kalman-based ensemble batch smoother (EnBS) approach. The methods were applied over the 27-yr Landsat 5 record at snow pillow and snow course in situ verification sites in the American River basin in the Sierra Nevada (United States). This basin is more densely vegetated and thus more challenging for SWE estimation than the basins in previous applications of the EnBS. Both data assimilation methods provided significant improvement over the prior (modeling-only) estimates, with both able to significantly reduce prior SWE biases. The prior RMSE values at the snow pillow and snow course sites were reduced by 68%-82% and 60%-68%, respectively, when applying the data assimilation methods. This result is encouraging for a basin like the American, where the moderate to high forest cover will necessarily obscure more of the snow-covered ground surface than in previously examined, less-vegetated basins. The PBS generally outperformed the EnBS: for snow pillows the PBS RMSE was approximately 54% of that of the EnBS, while for snow courses the PBS RMSE was approximately 79% of that of the EnBS. Sensitivity tests show relative insensitivity of both the PBS and EnBS results to ensemble size and fSCA measurement error, but a higher sensitivity of the EnBS to the mean prior precipitation input, especially where significant prior biases exist.
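
    A minimal sketch of the particle batch smoother idea, under simple assumptions: each particle carries a full-season trajectory from the prior model ensemble, all fSCA observations in the batch window reweight the particles at once through a Gaussian likelihood, and the posterior is the weighted ensemble. The synthetic trajectories and error levels are illustrative only.

```python
import numpy as np

def pbs_weights(predicted_fsca, observed_fsca, obs_std):
    """predicted_fsca : (n_particles, n_obs) model-simulated fSCA
    observed_fsca     : (n_obs,) remotely sensed fSCA
    obs_std           : fSCA measurement-error standard deviation
    """
    resid = predicted_fsca - observed_fsca
    log_w = -0.5 * np.sum((resid / obs_std) ** 2, axis=1)  # batch Gaussian likelihood
    log_w -= log_w.max()                                   # numerical stability
    w = np.exp(log_w)
    return w / w.sum()

rng = np.random.default_rng(1)
truth = np.clip(np.linspace(1.0, 0.0, 12), 0, 1)           # a season of declining fSCA
particles = np.clip(truth + rng.normal(0, 0.2, (100, 12)), 0, 1)  # prior ensemble
obs = np.clip(truth + rng.normal(0, 0.1, 12), 0, 1)        # Landsat-like observations
w = pbs_weights(particles, obs, obs_std=0.1)
posterior_mean = w @ particles                             # posterior trajectory
```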

  8. Inverse analysis and regularisation in conditional source-term estimation modelling

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.

    2014-05-01

    Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter, and the uncertainty in the solution is determined from the width of the credible intervals. The width of the credible intervals is significantly reduced by the inclusion of a smoothing prior, and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step, and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals do not depend on a previous solution and better predict the solution characteristics at higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
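
    The two regularisation flavours compared above can be sketched generically for a discretised Fredholm equation of the first kind, A x = b: zeroth-order Tikhonov penalises the solution magnitude, first-order penalises differences (a smoothing prior). The operator, data and regularisation parameter below are illustrative stand-ins, not the paper's CSE system.

```python
import numpy as np

def tikhonov(A, b, lam, order=0):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 in closed form."""
    n = A.shape[1]
    if order == 0:
        L = np.eye(n)                        # zeroth order: penalise magnitude
    else:
        L = np.diff(np.eye(n), axis=0)       # first order: penalise differences
    lhs = A.T @ A + lam**2 * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)

# Toy ill-conditioned smoothing kernel on a mixture fraction grid
n = 50
z = np.linspace(0, 1, n)
A = np.exp(-((z[:, None] - z[None, :]) ** 2) / 0.005)
x_true = np.exp(-((z - 0.4) ** 2) / 0.01)
b = A @ x_true + np.random.default_rng(2).normal(0, 1e-3, n)  # perturbed data
x0 = tikhonov(A, b, lam=1e-2, order=0)
x1 = tikhonov(A, b, lam=1e-2, order=1)       # smoother, as favoured in the paper
```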

  9. Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator

    NASA Astrophysics Data System (ADS)

    Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.

    2012-09-01

    This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
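
    A sketch of the forward Monte Carlo approach described above, propagating input uncertainties through a Newton-Raphson solve in the spirit of GUM Supplement 1. The Magnus-form saturation vapour-pressure curve and the two-pressure relation e_s(Td) = e_s(Ts) · Pc/Ps are simplified stand-ins for the generator's actual working equations, and all input values and uncertainties are invented.

```python
import numpy as np

def e_s(t):                                  # saturation vapour pressure, hPa (Magnus form)
    return 6.112 * np.exp(17.62 * t / (243.12 + t))

def de_s(t):                                 # analytic derivative for Newton-Raphson
    return e_s(t) * 17.62 * 243.12 / (243.12 + t) ** 2

def dew_point(ts, ps, pc, tol=1e-9, max_iter=50):
    """Solve e_s(td) = e_s(ts) * pc / ps for td by Newton-Raphson."""
    target = e_s(ts) * pc / ps
    td = ts                                  # initial guess
    for _ in range(max_iter):
        step = (e_s(td) - target) / de_s(td)
        td -= step
        if abs(step) < tol:                  # convergence parameter
            break
    return td

rng = np.random.default_rng(3)
n = 10_000
ts = rng.normal(20.0, 0.01, n)               # saturator temperature, deg C (assumed)
ps = rng.normal(2000.0, 0.5, n)              # saturator pressure, hPa (assumed)
pc = rng.normal(1000.0, 0.5, n)              # chamber pressure, hPa (assumed)
td = np.array([dew_point(a, b, c) for a, b, c in zip(ts, ps, pc)])
print(td.mean(), td.std())                   # estimate and standard uncertainty
```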

  10. Carbon budget for a British upland peat catchment.

    PubMed

    Worrall, Fred; Reed, Mark; Warburton, Jeff; Burt, Tim

    2003-08-01

    This study describes the analysis of fluvial carbon flux from an upland peat catchment in the North Pennines. Dissolved organic carbon (DOC), pH, alkalinity and calcium were measured in weekly samples, with particulate organic carbon (POC) measured from the suspended sediment load at the stream outlet of an 11.4-km² catchment. For calendar year 1999, regular monitoring of the catchment was supplemented with detailed quasi-continuous measurements of flow and stream temperature, and of DOC for the months September through November. The measurements were used to calculate the annual flux of dissolved CO2, dissolved inorganic carbon, DOC and POC from the catchment and were combined with CO2 and CH4 gaseous exchanges calculated from previously published values and the observations of water table height within the peat. The study catchment represents a net sink of 15.4 ± 11.9 g C/m²/yr. Carbon flows calculated for the study catchment were combined with values in the literature, using a Monte Carlo method, to estimate the carbon budget for British upland peat. For all British upland peat, the calculation suggests a net carbon sink of between 0.15 and 0.29 MtC/yr. This is the first study to incorporate a comprehensive assessment of the fluvial export of carbon within a carbon budget; it shows the peat carbon sink to be smaller than previous estimates, although sensitivity analysis shows that primary productivity, rather than fluvial carbon flux, is the more important element in estimating the budget.
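
    The Monte Carlo combination of budget terms can be sketched as below: each flux is sampled from a distribution reflecting its uncertainty and the net sink is the difference between uptake and losses. The means and standard deviations are placeholders, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
uptake   = rng.normal(60.0, 15.0, n)   # net gaseous CO2 uptake, gC/m2/yr (assumed)
ch4      = rng.normal(5.0, 2.0, n)     # CH4 loss (assumed)
doc      = rng.normal(25.0, 8.0, n)    # fluvial DOC export (assumed)
poc      = rng.normal(10.0, 5.0, n)    # fluvial POC export (assumed)
co2_diss = rng.normal(5.0, 3.0, n)     # dissolved CO2 / DIC export (assumed)

net_sink = uptake - (ch4 + doc + poc + co2_diss)
lo, hi = np.percentile(net_sink, [2.5, 97.5])
print(f"net sink = {net_sink.mean():.1f} gC/m2/yr "
      f"(95% interval {lo:.1f} to {hi:.1f})")
```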

  11. Low-energy pion-nucleon scattering

    NASA Astrophysics Data System (ADS)

    Gibbs, W. R.; Ai, Li; Kaufmann, W. B.

    1998-02-01

    An analysis of low-energy charged pion-nucleon data from recent π±p experiments is presented. From the scattering lengths and the Goldberger-Miyazawa-Oehme (GMO) sum rule we find a value of the pion-nucleon coupling constant of f² = 0.0756 ± 0.0007. We also find, contrary to most previous analyses, that the scattering volumes for the P31 and P13 partial waves are equal within errors, corresponding to a symmetry found in the Hamiltonian of many theories. For the potential models used, the amplitudes are extrapolated into the subthreshold region to estimate the value of the Σ term. Off-shell amplitudes are also provided.

  12. Molecular insights into the colonization and chromosomal diversification of Madeiran house mice.

    PubMed

    Förster, D W; Gündüz, I; Nunes, A C; Gabriel, S; Ramalhinho, M G; Mathias, M L; Britton-Davidian, J; Searle, J B

    2009-11-01

    The colonization history of Madeiran house mice was investigated by analysing the complete mitochondrial (mt) D-loop sequences of 156 mice from the island of Madeira and mainland Portugal, extending previous studies. The numbers of mtDNA haplotypes from Madeira and mainland Portugal were substantially increased (17 and 14 new haplotypes, respectively), and phylogenetic analysis confirmed the previously reported link between the Madeiran archipelago and northern Europe. Sequence analysis revealed the presence of four mtDNA lineages in mainland Portugal, of which one was particularly common and widespread (termed the 'Portugal Main Clade'). There was no support for population bottlenecks during the formation of the six Robertsonian chromosome races on the island of Madeira, and D-loop sequence variation was not found to be structured according to karyotype. The colonization time of the Madeiran archipelago by Mus musculus domesticus was estimated using two molecular dating methods (mismatch distribution and Bayesian skyline plot). Time estimates based on D-loop sequence variation at mainland sites (including previously published data from France and Turkey) were evaluated in the context of the zooarchaeological record of M. m. domesticus. A range of values for the mutation rate (mu) and the number of mouse generations per year was considered in these analyses because of the uncertainty surrounding these two parameters. The colonization of Portugal and Madeira by house mice is discussed in the context of the best-supported parameter values. In keeping with recent studies, our results suggest that mutation rate estimates based on interspecific divergence lead to gross overestimates of the timing of recent within-species events.

  13. Filtration and transport of Bacillus subtilis spores and the F-RNA phage MS2 in a coarse alluvial gravel aquifer: implications in the estimation of setback distances.

    PubMed

    Pang, Liping; Close, Murray; Goltz, Mark; Noonan, Mike; Sinton, Lester

    2005-04-01

    Filtration of Bacillus subtilis spores and the F-RNA phage MS2 (MS2) on a field scale in a coarse alluvial gravel aquifer was evaluated from the authors' previously published data. An advection-dispersion model coupled with first-order attachment kinetics was used in this study to interpret microbial concentration vs. time breakthrough curves (BTC) at sampling wells. Based on attachment rates (katt) determined by applying the model to the breakthrough data, filter factors (f) were calculated and compared with f values estimated from the slopes of log(cmax/c0) vs. distance plots. These two independent approaches resulted in nearly identical filter factors, suggesting that both approaches are useful in determining reductions in microbial concentrations over transport distance. Applying the graphic approach to analyse spatial data, we have also estimated the f values for different aquifers using information provided by other published field studies. The results show that values of f, in units of log(cmax/c0) m⁻¹, are consistently of the order of 10⁻² for clean coarse gravel aquifers, 10⁻³ for contaminated coarse gravel aquifers, and generally 10⁻¹ for sandy fine gravel aquifers and river and coastal sand aquifers. For each aquifer category, the f values for bacteriophages and bacteria are of the same order of magnitude. The f values estimated in this study indicate that for every one-log reduction in microbial concentration in groundwater, a few tens of meters of travel are required in clean coarse gravel aquifers, but a few hundreds of meters in contaminated coarse gravel aquifers. In contrast, a one-log reduction generally requires only a few meters of travel in sandy fine gravel aquifers and sand aquifers. Considering that the highest concentrations in human effluent are of the order of 10⁴ pfu/l for enteroviruses and 10⁶ cfu/100 ml for faecal coliform bacteria, a 7-log reduction in microbial concentration would comply with the drinking water standards for downgradient wells under natural gradient conditions. Based on the results of this study, a 7-log reduction would require 125-280 m of travel in clean coarse gravel aquifers, 1.7-3.9 km in contaminated coarse gravel aquifers, 33-61 m in clean sandy fine gravel aquifers, 33-129 m in contaminated sandy fine gravel aquifers, and 37-44 m in contaminated river and coastal sand aquifers. These recommended setback distances are for a worst-case scenario, assuming direct discharge of raw effluent into the saturated zone of an aquifer. Filtration theory was applied to calculate collision efficiency (alpha) from the model-derived attachment rates (katt), and the results are compared with those reported in the literature. The calculated alpha values vary by two orders of magnitude, depending on whether collision efficiency is estimated from the effective particle size (d10) or the mean particle size (d50). Collision efficiency values for MS2 are similar to those previously reported in the literature (e.g., DeBorde et al., 1999) [DeBorde, D.C., Woessner, W.W., Kiley, Q.T., Ball, P., 1999. Rapid transport of viruses in a floodplain aquifer. Water Res. 33 (10), 2229-2238]. However, the collision efficiency values calculated for Bacillus subtilis spores were unrealistic, suggesting that filtration theory is not appropriate for theoretically estimating the filtration capacity of poorly sorted coarse gravel aquifer media. This is not surprising, as filtration theory was developed for uniform sand filters and does not consider particle size distribution. Thus, we do not recommend the use of filtration theory to estimate the filter factor or setback distances. Either of the methods applied in this work (BTC or concentration vs. distance analyses), which take into account aquifer heterogeneities and site-specific conditions, appears to be most useful in determining filter factors and setback distances.
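
    The setback arithmetic above follows directly from the filter factor: with f in log(cmax/c0) per metre, the travel distance for an n-log reduction is n/f. A small sketch with illustrative midpoints of the order-of-magnitude f values reported for each aquifer category:

```python
# Setback distance for a 7-log reduction: distance = 7 / f,
# where f is the filter factor in log(c_max/c_0) per metre.
# The f values below are illustrative midpoints, not measured values.
filter_factors = {
    "clean coarse gravel": 0.025,          # ~1e-2 log/m
    "contaminated coarse gravel": 0.0025,  # ~1e-3 log/m
    "sandy fine gravel / sand": 0.1,       # ~1e-1 log/m
}
for aquifer, f in filter_factors.items():
    print(f"{aquifer}: ~{7 / f:.0f} m for a 7-log reduction")
```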

  14. Estimating the value of a Country's built assets: investment-based exposure modelling for global risk assessment

    NASA Astrophysics Data System (ADS)

    Daniell, James; Pomonis, Antonios; Gunasekera, Rashmin; Ishizawa, Oscar; Gaspari, Maria; Lu, Xijie; Aubrecht, Christoph; Ungar, Joachim

    2017-04-01

    In order to quantify disaster risk, consistent and reliable economic values of the built assets exposed to natural hazards must be determined at the national or sub-national level. The value of the built stock of a city or a country is critical for risk modelling applications, as it establishes the upper bound on potential losses. Under the World Bank probabilistic disaster risk assessment Country Disaster Risk Profiles (CDRP) Program and rapid post-disaster loss analyses in CATDAT, key methodologies have been developed that quantify the asset exposure of a country. In this study, we assess two complementary methods of determining the value of the building stock: capital investment data vs. aggregated ground-up values based on built area and unit cost of construction. Different approaches to modelling exposure around the world have resulted in estimated values of the built assets of some countries differing by order(s) of magnitude. Using the aforementioned methodology of comparing investment-based capital stock with bottom-up unit-cost-of-construction values per square meter of assets, a suitable range of capital stock estimates for built assets has been created. A blind test was undertaken to compare the two types of approaches, top-down (investment) and bottom-up (construction cost per unit). In many cases, census, demographic, engineering and construction cost data from previous years are key for the bottom-up calculations; similarly, the top-down investment approach requires distributed GFCF (Gross Fixed Capital Formation) data. Over the past few years, numerous studies have been undertaken through the World Bank Caribbean and Central America disaster risk assessment program adopting this methodology, initially developed by Gunasekera et al. (2015). The range of building stock values is tested for around 15 countries. In addition, three types of costs are discussed and their methodological differences assessed: reconstruction cost (building back to the standard required by building codes), replacement cost (gross capital stock) and book value (net capital stock, the depreciated value of assets). We then examine historical costs (reconstruction and replacement) and losses (book value) of natural disasters against this upper bound of capital stock in various locations to examine the impact of a reasonable capital stock estimate. It is found that some historical loss estimates in publications are not reasonable given the value of assets at the time of the event. This has applications for quantitative disaster risk assessment and the development of country disaster risk profiles, economic analyses and the benchmarking of upper loss limits of built assets damaged by natural hazards.
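
    The two approaches can be sketched side by side. For the top-down route, a perpetual inventory accumulation of GFCF is a standard way to build a capital stock series; treating it as a stand-in for the paper's investment method is an assumption, and all numbers below are invented.

```python
def perpetual_inventory(gfcf_by_year, depreciation=0.02, k0=0.0):
    """Top-down: net capital stock accumulated from a GFCF series."""
    k = k0
    for invest in gfcf_by_year:
        k = k * (1.0 - depreciation) + invest   # depreciate, then add new investment
    return k

def bottom_up(built_area_m2, unit_cost_per_m2):
    """Bottom-up: replacement cost from floor area and unit construction cost."""
    return built_area_m2 * unit_cost_per_m2

gfcf = [2.0e9] * 40                      # 40 years of construction GFCF (assumed)
print(perpetual_inventory(gfcf))         # top-down net stock
print(bottom_up(150e6, 450.0))           # bottom-up replacement cost
```

    Comparing the two totals, as in the blind test described above, brackets a plausible range for the built asset value; a large discrepancy signals a problem in one of the input datasets.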

  15. Information content of slug tests for estimating hydraulic properties in realistic, high-conductivity aquifer scenarios

    NASA Astrophysics Data System (ADS)

    Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya

    2011-06-01

    A recently developed unified model for partially-penetrating slug tests in unconfined aquifers (Malama et al., in press) provides a semi-analytical solution for aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether its parameters can be well identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov chain Monte Carlo methods for gauging parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information, joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained a priori, if possible. Second, through analysis of field data consisting of over 2500 records from partially-penetrating slug tests in a heterogeneous, highly conductive aquifer, we present some general findings that have applicability to slug testing. In particular, we find that aquifer hydraulic conductivity estimates obtained from larger slug heights tend to be lower on average (presumably due to non-linear wellbore losses) and tend to be less variable (presumably due to averaging over larger support volumes), supporting the notion that using the smallest slug heights that produce measurable water level changes is an important strategy when mapping aquifer heterogeneity. Finally, we present results specific to characterization of the aquifer at the Boise Hydrogeophysical Research Site. Specifically, we note that (1) K estimates obtained using a range of different slug heights give similar results, generally within ±20%; (2) correlations between estimated K profiles with depth at closely-spaced wells suggest that K values obtained from slug tests are representative of actual aquifer heterogeneity and not overly affected by near-well media disturbance (i.e., "skin"); (3) geostatistical analysis of the K values obtained indicates reasonable correlation lengths for sediments of this type; and (4) overall, the K values obtained do not appear to correlate well with porosity data from previous studies.

  16. Tracking a convoy of multiple targets using acoustic sensor data

    NASA Astrophysics Data System (ADS)

    Damarla, T. R.

    2003-08-01

    In this paper we present an algorithm to track a convoy of several targets in a scene using acoustic sensor array data. The tracking algorithm is based on a template of the direction-of-arrival (DOA) angles for the leading target. Often the first target is the closest to the sensor array and hence the loudest, with a good signal-to-noise ratio. Several steps were used to generate the template of the DOA angle for the leading target, namely: (a) the angle at the present instant should be close to the angle at the previous instant, and (b) the angle at the present instant should be within error bounds of the value predicted from the previous values. Once the template of the DOA angles of the leading target is developed, it is used to predict the DOA angle tracks of the remaining targets. First, a track for a remaining target is established if its angles correspond to the initial track values of the first target. Second, the time delay between the first track and each remaining track is estimated at the point of highest correlation between them. As the vehicles move at different speeds, the tracks either compress or expand depending on whether a target is moving faster or slower than the first target. The expansion and compression ratios are estimated and used to compute the predicted DOA angle values of the remaining targets. Based on these predicted DOA angles, the angles obtained from MVDR or incoherent MUSIC are assigned to the proper tracks. Several other rules were developed to avoid mixing the tracks. The algorithm was tested on data collected at Aberdeen Proving Ground with convoys of 3, 4 and 5 vehicles, some tracked and some wheeled. The tracking algorithm results were found to be good and are presented in the paper.
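
    Two of the steps above, estimating the follower's time delay by maximum cross-correlation against the leader's DOA track and forming an expansion ratio from relative speed, can be sketched as follows. The synthetic DOA tracks stand in for MVDR/MUSIC output, and the geometry and speeds are invented.

```python
import numpy as np

def delay_by_xcorr(lead, follow):
    """Lag (in samples) at which the follower track best matches the leader's."""
    lead = lead - lead.mean()
    follow = follow - follow.mean()
    corr = np.correlate(follow, lead, mode="full")
    return corr.argmax() - (len(lead) - 1)   # > 0 means the follower lags

t = np.linspace(0, 60, 600)                  # 60 s at 10 Hz
# Targets drive past a sensor 20 m off the road; DOA from geometry
lead_doa = np.degrees(np.arctan2(20.0, 100.0 - 8.0 * t))    # leader at 8 m/s
follow_doa = np.degrees(np.arctan2(20.0, 130.0 - 6.4 * t))  # follower at 6.4 m/s

lag = delay_by_xcorr(lead_doa, follow_doa)
ratio = 8.0 / 6.4                            # track expansion for a slower follower
print(f"estimated delay: {lag / 10:.1f} s, expansion ratio: {ratio:.2f}")
```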

  17. Cataloging the 1811-1812 New Madrid, central U.S., earthquake sequence

    USGS Publications Warehouse

    Hough, S.E.

    2009-01-01

    The three principal New Madrid, central U.S., mainshocks of 1811-1812 were followed by extensive aftershock sequences that included numerous felt events. Although no instrumental data are available for the sequence, historical accounts provide information that can be used to estimate magnitudes and locations for the large aftershocks as well as the mainshocks. Several detailed eyewitness accounts of the sequence provide sufficient information to identify times and rough magnitude estimates for a number of aftershocks that have not been analyzed previously. I also use three extended compilations of felt events to explore the overall sequence productivity. Although one generally cannot estimate magnitudes or locations for individual events, the intensity distributions of recent, instrumentally recorded earthquakes in the region provide a basis for estimation of the magnitude distribution of 1811-1812 aftershocks. The distribution is consistent with a b-value distribution. I estimate Mw 6-6.3 for the three largest identifiable aftershocks, apart from the so-called dawn aftershock on 16 December 1811.

  18. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Debris flow velocities are commonly back-calculated from superelevation events, which require subjective estimates of the radii of curvature of bends in the debris flow channel, or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated the difficulties associated with these conventional velocity estimation methods. Radii-of-curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among the investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to channel slope and flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities.
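
    The superelevation back-calculation referred to above is commonly written as the forced-vortex relation v = sqrt(g · Rc · Δh / w), with Rc the bend radius of curvature, Δh the cross-channel flow-surface superelevation and w the flow width. The sketch below uses invented inputs and varies Rc to show the sensitivity that makes the radius estimate so consequential.

```python
import math

def superelevation_velocity(radius_m, dh_m, width_m, g=9.81):
    """Back-calculated velocity from the forced-vortex superelevation relation."""
    return math.sqrt(g * radius_m * dh_m / width_m)

# Plausible radii an analyst might read from maps of the same bend (assumed)
for rc in (20.0, 35.0, 50.0):
    v = superelevation_velocity(rc, dh_m=1.2, width_m=8.0)
    print(f"Rc = {rc:.0f} m  ->  v = {v:.1f} m/s")
```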

  19. An estimate of the number of tropical tree species.

    PubMed

    Slik, J W Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L; Bellingham, Peter J; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L M; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K; Chazdon, Robin L; Robin, Chazdon L; Clark, Connie; Clark, David B; Clark, Deborah A; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A O; Eisenlohr, Pedro V; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A; Joly, Carlos A; de Jong, Bernardus H J; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F; Lawes, Michael J; Amaral, Ieda Leao do; Letcher, Susan G; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H; Meilby, Henrik; Melo, Felipe P L; Metcalfe, Daniel J; Medjibe, Vincent P; Metzger, Jean Paul; Millet, Jerome; Mohandass, D; Montero, Juan C; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T F; Pitman, Nigel C A; Poorter, Lourens; Poulsen, Axel D; Poulsen, John; Powers, Jennifer; Prasad, Rama C; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; Dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A; Santos, Fernanda; Sarker, Swapan K; Satdichanh, Manichanh; Schmitt, Christine B; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I-Fang; Sunderland, Terry; Sunderand, Terry; Suresh, H S; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L C H; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A; Webb, Campbell O; Whitfeld, Timothy; Wich, Serge A; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C Yves; Yap, Sandra L; Yoneda, Tsuyoshi; Zahawi, Rakan A; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L; Garcia Luize, Bruno; Venticinque, Eduardo M

    2015-06-16

    The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed-canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher's alpha and an approximate pantropical stem total to estimate that the minimum number of tropical forest tree species falls between ∼40,000 and ∼53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼19,000-25,000 tree species. Continental Africa is relatively depauperate, with a minimum of ∼4,500-6,000 tree species. Very few species are shared among the African, American, and Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa.
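
    The estimation logic rests on Fisher's log-series, which links species count S and stem count N through S = α ln(1 + N/α): fit α to the inventory totals, then extrapolate with a pantropical stem total. The sketch below uses the database totals quoted in the abstract; the stem total is an assumed round number, so the output is illustrative only.

```python
from math import log
from scipy.optimize import brentq

def fit_alpha(S, N):
    """Solve S = alpha * ln(1 + N / alpha) for Fisher's alpha."""
    return brentq(lambda a: a * log(1.0 + N / a) - S, 1e-3, 1e7)

alpha = fit_alpha(S=11_371, N=657_630)   # totals from the inventory database
N_pantropical = 3e11                     # assumed pantropical stem total
S_min = alpha * log(1.0 + N_pantropical / alpha)
print(f"alpha = {alpha:.0f}, estimated species richness ~ {S_min:,.0f}")
```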

  1. Usefulness of automatic measurement of contrast flow intensity: an innovative tool in contrast-enhanced ultrasound imaging of atherosclerotic carotid plaque neovascularization. A pilot study.

    PubMed

    Lisowska, A; Knapp, M; Tycinska, A; Sawicki, R; Kralisz, P; Lisowski, P; Sobkowicz, B; Musial, W I

    2014-02-01

    Contrast-enhanced ultrasound imaging of the carotid arteries (CECU) permits direct, real-time visualization of neovascularization in atherosclerotic plaques and is a confirmed predictor of unstable atheromatous lesions. The aim of the study was to assess a new, automatically measured index of intensity for the quantitative estimation of contrast flow through the carotid plaque (until now assessed only visually). Forty-four patients (mean age 70.4±11.4) with ultrasound-diagnosed significant stenosis of the internal carotid artery (ICA) after cerebrovascular or cardiovascular events, qualified for carotid artery stenting (CAS), were examined. Carotid ultrasound examinations with the contrast agent SonoVue were performed. Contrast flow through the atherosclerotic plaques was found visually in 22 patients (50%). In 17 patients (38.6%), massive, calcified atherosclerotic plaques were present. Patients with preserved contrast flow through the plaque more frequently had a history of cerebral stroke (P=0.04). Massive calcifications of atherosclerotic plaques correlated with a previous MI (P=0.03) and the degree of advancement of coronary artery disease (P=0.04), but not with a previous cerebral stroke. Contrast flow through the atherosclerotic plaque positively correlated with the values of the index of intensity (r=0.69, P<0.00001). In patients with preserved contrast flow, the mean value of the index of intensity was 22.24±3.55 dB, as compared with 12.37±7.67 dB in patients without preserved contrast flow. No significant relation between the degree of calcification and the value of the index of intensity was found. The assessment of the index of intensity is a novel, simple and automatic method to estimate the degree of contrast flow through the carotid plaque. The values of the index of intensity correlate with the contrast flow through the atherosclerotic plaque, but not with its calcification.

  2. Estimation of rate constants of PCB dechlorination reactions using an anaerobic dehalogenation model.

    PubMed

    Karakas, Filiz; Imamoglu, Ipek

    2017-02-15

    This study aims to estimate anaerobic dechlorination rate constants (k m ) of reactions of individual PCB congeners using data from four laboratory microcosms set up with sediment from Baltimore Harbor. Pathway k m values are estimated by modifying a previously developed model as the Anaerobic Dehalogenation Model (ADM), which can be applied to any halogenated hydrophobic organic compound (HOC). Improvements such as handling multiple dechlorination activities (DAs) and co-elution of congeners, incorporating constraints, and a new goodness-of-fit evaluation increase the accuracy, speed and flexibility of ADM. DAs published in the literature in terms of chlorine substitutions, as well as specific microorganisms and their combinations, are used for identification of pathways. The best fit explaining the congener pattern changes was found for the pathways of Phylotype DEH10, which has the ability to remove doubly flanked chlorines in meta and para positions and para-flanked chlorines in the meta position. The range of estimated k m values is 0.0001 to 0.133 d -1 , the median of which is comparable to the few available published biologically confirmed rate constants. Compound-specific modelling studies such as those performed with ADM enable monitoring and prediction of concentration changes, as well as toxicity, during bioremediation.

  3. Anisotropy estimation of compacted municipal solid waste using pressurized vertical well liquids injection.

    PubMed

    Singh, Karamjit; Kadambala, Ravi; Jain, Pradeep; Xu, Qiyong; Townsend, Timothy G

    2014-06-01

    Waste hydraulic conductivity and anisotropy are two important parameters controlling fluid movement in landfills, and are thus key inputs for design methods where predictions of moisture movement are necessary. Although municipal waste hydraulic conductivity has been estimated in multiple laboratory and field studies, measurements of anisotropy, particularly at full scale, are rare, even though landfilled municipal waste is generally understood to be anisotropic. Measurements from a buried liquids injection well surrounded by pressure transducers at a full-scale landfill in Florida were collected and examined to provide an estimate of in-situ waste anisotropy. Liquids injection was performed at a constant pressure and the resulting pore pressures in the surrounding waste were monitored. Numerical fluid flow modeling was employed to simulate the pore pressures expected under the operating conditions. Nine simulations were performed, covering three lateral hydraulic conductivity values and three anisotropy values. Measured flow rates and pore pressures collected under approximately steady-state conditions were compared with the simulation results to assess the range of anisotropies. The results support that compacted municipal waste in landfills is anisotropic, provide anisotropy estimates greater than previous measurements, and suggest that anisotropy decreases with landfill depth.

  4. Display gamma is an important factor in Web image viewing

    NASA Astrophysics Data System (ADS)

    Zhang, Xuemei; Lavin, Yingmei; Silverstein, D. Amnon

    2001-06-01

    We conducted a perceptual image preference experiment over the web to find out (1) whether typical computer users have significant variations in their display gamma settings, and (2) if so, whether the gamma settings have a significant perceptual effect on the appearance of images in their web browsers. The digital image renderings used were found to have preferred tone characteristics in a previous lab-controlled experiment. They were rendered with 4 different gamma settings. The subjects were asked to view the images over the web, with their own computer equipment and web browsers, and made pair-wise subjective preference judgements on which rendering they liked best for each image. Each subject's display gamma setting was estimated using a 'gamma estimator' tool, implemented as a Java applet. The results indicated that (1) the users' gamma settings, as estimated in the experiment, span a wide range from about 1.8 to about 3.0; and (2) the subjects preferred images rendered with a 'correct' gamma value matching their display setting and disliked images rendered with a gamma value not matching their display's. This indicates that display gamma estimation is a perceptually significant factor in web image optimization.
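
    The gamma mismatch effect at the heart of the experiment can be sketched numerically: an image pre-compensated for one display gamma, shown on a display with a different power-law response, has its tones shifted. Values are normalized [0, 1] intensities, and the gammas match the range reported in the abstract.

```python
import numpy as np

def encode_for_display(linear, display_gamma):
    """Pre-compensate linear intensities so a display with response
    out = in ** gamma reproduces them."""
    return linear ** (1.0 / display_gamma)

linear = np.linspace(0.0, 1.0, 5)            # a few test intensities
signal = encode_for_display(linear, display_gamma=2.2)
shown_on_2_2 = signal ** 2.2                 # matching display: tones preserved
shown_on_3_0 = signal ** 3.0                 # mismatched display: midtones darken
print(np.round(shown_on_2_2, 3))
print(np.round(shown_on_3_0, 3))
```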

  5. Estimating soil matric potential in Owens Valley, California

    USGS Publications Warehouse

    Sorenson, Stephen K.; Miller, R.F.; Welch, M.R.; Groeneveld, D.P.; Branson, F.A.

    1988-01-01

    Much of the floor of the Owens Valley, California, is covered with alkaline scrub and alkaline meadow plant communities, whose existence depends partly on precipitation and partly on water infiltrated into the rooting zone from the shallow water table. The extent to which these plant communities are capable of adapting to and surviving fluctuations in the water table depends on physiological adaptations of the plants and on the water content and matric potential characteristics of the soils. Two methods were used to estimate soil matric potential at test sites in Owens Valley. The first was the filter-paper method, which uses the water content of filter papers equilibrated to the water content of soil samples taken with a hand auger. The second was a modeling approach based on data from this and previous investigations. These data indicate that the base-10 logarithm of soil matric potential is a linear function of gravimetric soil water content for a particular soil. Estimates of soil water characteristic curves were made at two sites by averaging the gravimetric soil water content and soil matric potential values from multiple samples at 0.1-m depth intervals, obtained using the hand-auger and filter-paper method, and entering these values in the soil water model. The characteristic curves were then used to estimate soil matric potential from estimates of volumetric soil water content derived from neutron-probe readings. Evaluation of the modeling technique at two study sites indicated that estimates of soil matric potential within 0.5 pF units of the value derived from the filter-paper method could be obtained 90 to 95% of the time in soils where water content was less than field capacity. The greatest errors occurred at depths where there was a distinct transition between soils of different textures.
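
    The modeling approach reduces to fitting a straight line in pF (the base-10 logarithm of matric potential) against gravimetric water content, then inverting it for new water content estimates. The paired observations below are invented placeholders standing in for the filter-paper data.

```python
import numpy as np

w_obs  = np.array([0.05, 0.10, 0.15, 0.20, 0.25])  # gravimetric water content, g/g (assumed)
pf_obs = np.array([4.6, 4.0, 3.5, 2.9, 2.4])       # pF = log10(matric potential) (assumed)

slope, intercept = np.polyfit(w_obs, pf_obs, 1)    # linear characteristic curve

def matric_potential(w):
    """Matric potential predicted from water content via the fitted line."""
    return 10.0 ** (intercept + slope * w)

print(f"pF = {intercept:.2f} + ({slope:.2f}) * w")
print(f"w = 0.12 -> matric potential ~ {matric_potential(0.12):,.0f}")
```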

  6. Further Estimates of (T-T_{90}) Close to the Triple Point of Water

    NASA Astrophysics Data System (ADS)

    Underwood, R.; de Podesta, M.; Sutton, G.; Stanger, L.; Rusby, R.; Harris, P.; Morantz, P.; Machin, G.

    2017-03-01

    Recent advances in primary acoustic gas thermometry (AGT) have revealed significant differences between temperature measurements on the International Temperature Scale of 1990, T_{90}, and thermodynamic temperature, T. In 2015, we published estimates of the differences (T-T_{90}) from 118 K to 303 K, which showed interesting behavior in the region around the triple point of water, T_TPW = 273.16 K. In that work, the T_{90} measurements below T_TPW used a different ensemble of capsule standard platinum resistance thermometers (SPRTs) than the T_{90} measurements above T_TPW. In this work, we extend our earlier measurements using the same ensemble of SPRTs above and below T_TPW, enabling a deeper analysis of the slope d(T-T_{90})/dT around T_TPW. In this article, we present the results of seven AGT isotherms in the temperature range 258 K to 323 K. The derived values of (T-T_{90}) have exceptionally low uncertainties and are in good agreement with our previous data and other AGT results. We present the values of (T-T_{90}) alongside our previous estimates, together with the resistance ratios W(T) from two SPRTs which have been used across the full range 118 K to 323 K. Additionally, our measurements show discontinuities in d(T-T_{90})/dT at T_TPW which are consistent with the slope discontinuity in the SPRT deviation functions. Since this discontinuity is by definition non-unique, and can take a range of values including zero, we suggest that mathematical representations of (T-T_{90}), such as those in the mise en pratique for the kelvin (Fellmuth et al. in Philos Trans R Soc A 374:20150037, 2016. doi: 10.1098/rsta.2015.0037), should have continuity of d(T-T_{90})/dT at T_TPW.

  7. Structure in the secular variation of seawater ⁸⁷Sr/⁸⁶Sr for the Ivorian/Chadian (Osagean, Lower Carboniferous)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douthit, T.L.; Hanson, G.N.; Meyers, W.J.

    1990-05-01

    The secular variations of ⁸⁷Sr/⁸⁶Sr in seawater for the Ivorian/Chadian (equivalent to the Osagean, Lower Carboniferous) were determined through detailed analysis of well-preserved marine cements from the Waulsortian facies of Ireland. The results indicate that marine cements have utility in characterizing marine paleochemistries. Marine cements were judged pristine on the basis of nonluminescent character and stable isotopic composition comparable to previous estimates of Mississippian marine calcite. Analysis of the marine cements yielded ⁸⁷Sr/⁸⁶Sr ratios lower than previously reported values for the Ivorian/Chadian. Error resulting from chronostratigraphic correlation between different geographic areas was avoided by restricting the sample set to a single 1,406-ft-long core (core P-1). The P-1 core is estimated to represent a minimum of 8.7 m.y. of continuous Waulsortian Limestone deposition. The ⁸⁷Sr/⁸⁶Sr ratios of 11 nonluminescent cements document a non-monotonic variation in seawater ⁸⁷Sr/⁸⁶Sr along the length of the core. ⁸⁷Sr/⁸⁶Sr ranges from a high of 0.707908 in the early Ivorian to a low of about 0.707650 in the late Ivorian and middle Chadian, with an early Chadian maximum at 0.707800 (all data are adjusted to a value of 0.710140 for SRM 987). The indicated maximum rate of change in seawater ⁸⁷Sr/⁸⁶Sr is -0.00011/Ma, comparable in magnitude to Tertiary values. The secular variation curve of seawater ⁸⁷Sr/⁸⁶Sr for the Ivorian/Chadian has previously been thought to decrease monotonically with decreasing age. These data suggest that the seawater ⁸⁷Sr/⁸⁶Sr variation over this interval may be sinusoidal in nature and emphasize the importance of well-characterized intraformational isotopic base lines.

  8. Hurricane Harvey Rainfall, Did It Exceed PMP and What are the Implications?

    NASA Astrophysics Data System (ADS)

    Kappel, B.; Hultstrand, D.; Muhlestein, G.

    2017-12-01

    Rainfall from Hurricane Harvey reached historic levels over the coastal regions of Texas and Louisiana during the last week of August 2017. Although extreme rainfall from a landfalling tropical system is not uncommon in the region, Harvey was unique in that it persisted over the same general location for several days, producing volumes of rainfall not previously observed in the United States. The result was devastating flooding and severe stress to infrastructure in the region. Coincidentally, Applied Weather Associates had recently completed an updated statewide Probable Maximum Precipitation (PMP) study for Texas, and this storm proved to be a real-time test of the adequacy of those values. AWA calculates PMP following a storm-based approach, the same approach used in the Hydrometeorological Reports (HMRs). Therefore, the inclusion of all PMP-type storms is critically important to ensuring that appropriate PMP values are produced. This presentation will discuss the analysis of the Harvey rainfall using the Storm Precipitation Analysis System (SPAS) program used to analyze all storms used in PMP development, compare the results of the Harvey rainfall analysis against previous similar storms, and provide comparisons of the Harvey rainfall against previous and current PMP depths. Discussion will include the implications of the storm for previous and future PMP estimates, dam safety design, and infrastructure vulnerable to extreme flooding.

  9. Viscoelastic properties of soft gels: comparison of magnetic resonance elastography and dynamic shear testing in the shear wave regime

    NASA Astrophysics Data System (ADS)

    Okamoto, R. J.; Clayton, E. H.; Bayly, P. V.

    2011-10-01

    Magnetic resonance elastography (MRE) is used to quantify the viscoelastic shear modulus, G*, of human and animal tissues. Previously, values of G* determined by MRE have been compared to values from mechanical tests performed at lower frequencies. In this study, a novel dynamic shear test (DST) was used to measure G* of a tissue-mimicking material at higher frequencies for direct comparison to MRE. A closed-form solution, including inertial effects, was used to extract G* values from DST data obtained between 20 and 200 Hz. MRE was performed using cylindrical 'phantoms' of the same material in an overlapping frequency range of 100-400 Hz. Axial vibrations of a central rod caused radially propagating shear waves in the phantom. Displacement fields were fit to a viscoelastic form of Navier's equation using a total least-squares approach to obtain local estimates of G*. DST estimates of the storage G' (Re[G*]) and loss modulus G'' (Im[G*]) for the tissue-mimicking material increased with frequency from 0.86 to 0.97 kPa (20-200 Hz, n = 16), while MRE estimates of G' increased from 1.06 to 1.15 kPa (100-400 Hz, n = 6). The loss factor (Im[G*]/Re[G*]) also increased with frequency for both test methods: 0.06-0.14 (20-200 Hz, DST) and 0.11-0.23 (100-400 Hz, MRE). Close agreement between MRE and DST results at overlapping frequencies indicates that G* can be locally estimated with MRE over a wide frequency range. Low signal-to-noise ratio, long shear wavelengths and boundary effects were found to increase residual fitting error, reinforcing the use of an error metric to assess confidence in local parameter estimates obtained by MRE.

  10. Proxy Constraints on a Warm, Fresh Late Cretaceous Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Super, J. R.; Li, H.; Pagani, M.; Chin, K.

    2015-12-01

    The warm Late Cretaceous is thought to have been characterized by open Arctic Ocean temperatures upwards of 15°C (Jenkyns et al., 2004). The high temperatures and low equator-to-pole temperature gradient have proven difficult to reproduce in paleoclimate models, with the role of the atmospheric hydrologic cycle in heat transport being particularly uncertain. Here, sediments, coprolites and fish teeth of Santonian-Campanian age from two high-latitude mixed terrestrial and marine sections on Devon Island in the Canadian High Arctic (Chin et al., 2008) were analyzed using a suite of organic and inorganic proxies to evaluate the temperature and salinity of Arctic seawater. Surface temperature estimates were derived from TEX86 measurements of near-shore, shallow (~100 m depth) marine sediments (Witkowski et al., 2011) and MBT-CBT estimates from terrestrial intervals; both suggest mean annual temperatures of ~20°C, consistent with previous estimates given the more southerly location of Devon Island. The oxygen isotope composition of non-diagenetic phosphate from vertebrate coprolites and bony fish teeth was then measured, giving values ranging from +13‰ to +19‰. Assuming the TEX86 temperatures are valid and using the temperature calibration of Puceat (2010), the δ18O values of coprolites imply Arctic Ocean seawater δ18O values between -4‰ and -10‰, indicating very fresh conditions. Lastly, the δD of precipitation will be estimated from the hydrogen isotope composition of higher-plant leaf waxes (C-25, C-27, C-29 and C-31 n-alkanes) from both terrestrial and marine intervals. The data are used to model the salinity of seawater and the meteoric relationship between δD and δ18O, thereby helping to evaluate the northern high-latitude meteoric water line of the Late Cretaceous.

  11. A comparison of direct and indirect methods for the estimation of health utilities from clinical outcomes.

    PubMed

    Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb

    2014-10-01

    Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models, with expected tariff scores calculated mathematically. Linear regression is reported for comparison purposes. The impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approach improve fit over the entire range of EQ-5D: mean average error is 10% and 5% lower than for the linear model, respectively, and root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These differences lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model, which substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range.
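
    The "calculated mathematically" step of response mapping can be sketched as follows: given predicted probabilities for each level of the five EQ-5D dimensions (here a stand-in array rather than the fitted ordered probit models), the expected tariff is full health minus the probability-weighted decrements. The decrement values and constant are invented placeholders, not the UK tariff, and the any-problem term assumes independence across dimensions for illustration.

```python
import numpy as np

decrements = np.array([          # rows: dimensions, cols: levels 1-3 (hypothetical)
    [0.0, 0.07, 0.31],           # mobility
    [0.0, 0.10, 0.21],           # self-care
    [0.0, 0.04, 0.09],           # usual activities
    [0.0, 0.12, 0.39],           # pain/discomfort
    [0.0, 0.07, 0.24],           # anxiety/depression
])

def expected_tariff(level_probs, constant=0.081):
    """level_probs: (5, 3) per-dimension level probabilities for one patient."""
    # P(any dimension above level 1), assuming independence across dimensions
    any_problem = 1.0 - np.prod(level_probs[:, 0])
    loss = (level_probs * decrements).sum()   # expected sum of decrements
    return 1.0 - constant * any_problem - loss

probs = np.array([[0.6, 0.3, 0.1]] * 5)       # example predicted probabilities
print(f"expected EQ-5D score: {expected_tariff(probs):.3f}")
```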

  13. Semimajor Axis Estimation Strategies

    NASA Technical Reports Server (NTRS)

    How, Jonathan P.; Alfriend, Kyle T.; Breger, Louis; Mitchell, Megan

    2004-01-01

    This paper extends previous analysis on the impact of sensing noise for the navigation and control aspects of formation flying spacecraft. We analyze the use of Carrier-phase Differential GPS (CDGPS) in relative navigation filters, with a particular focus on the filter correlation coefficient. This work was motivated by previous publications which suggested that a "good" navigation filter would have a strong correlation (i.e., coefficient near -1) to reduce the semimajor axis (SMA) error, and therefore, the overall fuel use. However, practical experience with CDGPS-based filters has shown this strong correlation seldom occurs (typical correlations approx. -0.1), even when the estimation accuracies are very good. We derive an analytic estimate of the filter correlation coefficient and demonstrate that, for the process and sensor noises levels expected with CDGPS, the expected value will be very low. It is also demonstrated that this correlation can be improved by increasing the time step of the discrete Kalman filter, but since the balance condition is not satisfied, the SMA error also increases. These observations are verified with several linear simulations. The combination of these simulations and analysis provide new insights on the crucial role of the process noise in determining the semimajor axis knowledge.
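    The correlation coefficient's role can be seen with first-order error propagation through the vis-viva relation: the SMA error variance picks up a cross term 2ρ(∂a/∂r)(∂a/∂v)σ_rσ_v, which cancels error only when ρ is near -1. The sketch below uses illustrative LEO numbers of our choosing, not the paper's filter values.

```python
import numpy as np

# First-order propagation of radius/speed errors into semimajor-axis (SMA)
# error for a circular LEO, showing why a correlation coefficient near -1
# shrinks the SMA error. Numbers are illustrative, not from the paper.
mu = 3.986004418e14                 # Earth GM, m^3/s^2
r = 6.778e6                         # ~400 km altitude orbit radius, m
v = np.sqrt(mu / r)                 # circular orbital speed, m/s
a = 1.0 / (2.0 / r - v**2 / mu)     # vis-viva; equals r for a circular orbit

da_dr = 2.0 * a**2 / r**2           # partial of SMA w.r.t. radius
da_dv = 2.0 * a**2 * v / mu         # partial of SMA w.r.t. speed
sig_r, sig_v = 1.0, 0.001           # assumed 1 m position, 1 mm/s velocity noise

for rho in (0.0, -0.1, -0.9, -1.0):
    var_a = ((da_dr * sig_r)**2 + (da_dv * sig_v)**2
             + 2.0 * rho * da_dr * da_dv * sig_r * sig_v)
    print(f"rho = {rho:+.1f}  ->  SMA error = {np.sqrt(var_a):7.3f} m")
```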

  14. Earth-atmosphere system and surface reflectivities in arid regions from Landsat MSS data

    NASA Technical Reports Server (NTRS)

    Otterman, J.; Fraser, R. S.

    1976-01-01

    Previously developed programs for computing atmospheric transmission and scattering of the solar radiation are used to compute the ratios of the earth-atmosphere system (space) directional reflectivities in the nadir direction to the surface Lambertian reflectivity, for the four bands of the Landsat multispectral scanner (MSS). These ratios are presented as graphs for two water vapor levels, as a function of the surface reflectivity, for various sun elevation angles. Space directional reflectivities in the vertical direction are reported for selected arid regions in Asia, Africa, and Central America from the spectral radiance levels measured by the Landsat MSS. From these space reflectivities, surface reflectivities are computed by applying the pertinent graphs. These surface reflectivities are used to estimate the surface albedo for the entire solar spectrum. The estimated albedos are in the range 0.34-0.52, higher than the values reported by most previous researchers from space measurements, but are consistent with laboratory and in situ measurements.

  15. Adaptive power priors with empirical Bayes for clinical trials.

    PubMed

    Gravestock, Isaac; Held, Leonhard

    2017-09-01

    Incorporating historical information into the design and analysis of a new clinical trial has been the subject of much discussion as a way to increase the feasibility of trials in situations where patients are difficult to recruit. The best method to include this data is not yet clear, especially in the case when few historical studies are available. This paper looks at the power prior technique afresh in a binomial setting and examines some previously unexamined properties, such as Box P values, bias, and coverage. Additionally, it proposes an empirical Bayes-type approach to estimating the prior weight parameter by marginal likelihood. This estimate has advantages over previously criticised methods in that it varies commensurably with differences in the historical and current data and can choose weights near 1 when the data are similar enough. Fully Bayesian approaches are also considered. An analysis of the operating characteristics shows that the adaptive methods work well and that the various approaches have different strengths and weaknesses. Copyright © 2017 John Wiley & Sons, Ltd.
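    For the binomial setting the abstract describes, the empirical Bayes step has a closed form: with an initial Beta(a, b) prior and historical data (x0 of n0), the conditional power prior is Beta(a + δ·x0, b + δ·(n0 - x0)), and the weight δ is chosen to maximize the marginal likelihood of the current data. The grid-search sketch below illustrates the idea under those assumptions; the trial counts are invented and this is not the authors' exact implementation.

```python
import numpy as np
from scipy.special import betaln, comb

# Empirical Bayes choice of the power-prior weight d for binomial data:
# the marginal likelihood of the current data (x, n) under the conditional
# power prior Beta(a + d*x0, b + d*(n0 - x0)) is maximized over d in [0, 1].
def log_marginal(d, x, n, x0, n0, a=1.0, b=1.0):
    a1, b1 = a + d * x0, b + d * (n0 - x0)
    return (np.log(comb(n, x))
            + betaln(a1 + x, b1 + n - x) - betaln(a1, b1))

x0, n0 = 30, 100   # historical trial: 30/100 responders (illustrative)
x, n = 28, 90      # current trial: a similar response rate

grid = np.linspace(0.0, 1.0, 201)
ml = [log_marginal(d, x, n, x0, n0) for d in grid]
print("EB weight d-hat =", grid[int(np.argmax(ml))])  # near 1 for similar data
```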

  16. Models for estimating photosynthesis parameters from in situ production profiles

    NASA Astrophysics Data System (ADS)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series Station ALOHA. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function used to estimate the photosynthesis parameters affects the magnitudes of parameter values recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time-dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of photosynthesis irradiance functions and parameters for modeling in situ production profiles. In light of the results obtained in this work we argue that the choice of the primary production model should reflect the available data and that these models should be data driven regarding parameter estimation.
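    As an example of the parameter-recovery step, here is a least-squares fit of one commonly used member of the photosynthesis-irradiance family (the Webb/Platt exponential form) to synthetic data. The function choice, units and data points are our assumptions for illustration, not the Station ALOHA profiles.

```python
import numpy as np
from scipy.optimize import curve_fit

# Recover photosynthesis parameters (initial slope alpha, assimilation
# number Pm) by fitting an exponential photosynthesis-irradiance function
# to production-versus-irradiance data.
def pi_curve(E, alpha, Pm):
    """Normalized production as a function of irradiance E."""
    return Pm * (1.0 - np.exp(-alpha * E / Pm))

E = np.array([10, 25, 50, 100, 200, 400, 800, 1600], dtype=float)  # uE/m2/s
P_obs = pi_curve(E, alpha=0.05, Pm=5.0) \
        + np.random.default_rng(1).normal(0, 0.1, E.size)          # synthetic

(alpha_hat, Pm_hat), _ = curve_fit(pi_curve, E, P_obs, p0=[0.1, 1.0])
print(f"alpha = {alpha_hat:.3f}, Pm = {Pm_hat:.2f}")
```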

  17. Inferring the source of evaporated waters using stable H and O isotopes

    NASA Astrophysics Data System (ADS)

    Bowen, G. J.; Putman, A.; Brooks, J. R.; Bowling, D. R.; Oerter, E.; Good, S. P.

    2017-12-01

    Stable isotope ratios of H and O are widely used to identify the source of water, e.g., in aquifers, river runoff, soils, plant xylem, and plant-based beverages. In situations where the sampled water is partially evaporated, its isotope values will have evolved along an evaporation line (EL) in δ2H/δ18O space, and back-correction along the EL to its intersection with a meteoric water line (MWL) has been used to estimate the source water's isotope ratios. Several challenges and potential pitfalls exist with traditional approaches to this problem, including potential for bias from a commonly used regression-based approach for EL slope estimation and incomplete estimation of uncertainty in most studies. We suggest the value of a model-based approach to EL estimation, and introduce a mathematical framework that eliminates the need to explicitly estimate the EL-MWL intersection, simplifying analysis and facilitating more rigorous uncertainty estimation. We apply this analysis framework to data from 1,000 lakes sampled in EPA's 2007 National Lakes Assessment. We find that data for most lakes are consistent with a water source similar to annual runoff, estimated from monthly precipitation and evaporation within the lake basin. Strong evidence for both summer- and winter-biased sources exists, however, with winter bias pervasive in most snow-prone regions. The new analytical framework should improve the rigor of source-water inference from evaporated samples in ecohydrology and related sciences, and our initial results from U.S. lakes suggest that previous interpretations of lakes as unbiased isotope integrators may only be valid in certain climate regimes.
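    The traditional back-correction that the abstract contrasts with its new framework takes only a few lines: intersect the sample's evaporation line with a meteoric water line, here the global line δ2H = 8·δ18O + 10. The EL slope and sample values below are illustrative assumptions.

```python
# Traditional back-correction of an evaporated sample to its source:
# intersect the evaporation line (EL) through the sample with a meteoric
# water line (MWL), by default the global meteoric water line.
def source_isotopes(d18O_s, d2H_s, el_slope, mwl_slope=8.0, mwl_int=10.0):
    """Return (d18O, d2H) where the EL through the sample meets the MWL."""
    # EL: d2H = el_slope * (d18O - d18O_s) + d2H_s
    d18O = (d2H_s - el_slope * d18O_s - mwl_int) / (mwl_slope - el_slope)
    return d18O, mwl_slope * d18O + mwl_int

# Example: an evaporated lake sample at (-2, -30) permil with EL slope 5
# back-corrects to a source at (-10, -70) permil.
print(source_isotopes(d18O_s=-2.0, d2H_s=-30.0, el_slope=5.0))
```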

  18. Estimation of Saxophone Control Parameters by Convex Optimization.

    PubMed

    Wang, Cheng-I; Smyth, Tamara; Lipton, Zachary C

    2014-12-01

    In this work, an approach to jointly estimating the tone hole configuration (fingering) and reed model parameters of a saxophone is presented. The problem is not one of merely estimating pitch, as one applied fingering can be used to produce several different pitches by bugling or overblowing. Nor can a fingering be estimated solely by the spectral envelope of the produced sound (as it might for estimation of vocal tract shape in speech), since one fingering can produce markedly different spectral envelopes depending on the player's embouchure and control of the reed. The problem is therefore addressed by jointly estimating both the reed (source) parameters and the fingering (filter) of a saxophone model using convex optimization and 1) a bank of filter frequency responses derived from measurement of the saxophone configured with all possible fingerings and 2) sample recordings of notes produced using all possible fingerings, played with different overblowing, dynamics and timbre. The saxophone model couples one of several possible frequency response pairs (corresponding to the applied fingering), and a quasi-static reed model generating input pressure at the mouthpiece, with control parameters being blowing pressure and reed stiffness. Applied fingering and reed parameters are estimated for a given recording by formalizing a minimization problem, where the cost function is the error between the recording and the synthesized sound produced by the model having incremental parameter values for blowing pressure and reed stiffness. The minimization problem is nonlinear and not differentiable and is made solvable using convex optimization. The fingering identification achieves better accuracy than previously reported values.
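    A toy version of the joint source-filter search conveys the structure: for each candidate fingering (a stored frequency response) and each reed setting on a grid, synthesize a spectrum and keep the combination that minimizes the error against the recording. Everything here is a stand-in of our own making (random filter bank, crude source model, exhaustive search rather than convex optimization).

```python
import numpy as np

# Toy joint estimation: pick the (fingering, pressure, stiffness) triple
# whose synthesized spectrum best matches a "recording".
rng = np.random.default_rng(0)
n_bins, n_fingerings = 64, 12
filter_bank = rng.random((n_fingerings, n_bins))   # stand-in measured responses
pressures = np.linspace(0.2, 1.0, 9)               # reed parameter grid
stiffnesses = np.linspace(0.5, 2.0, 9)

def synthesize(fingering, p, k):
    """Stand-in synthesis: a crude reed source shaped by the fingering filter."""
    source = p / (1.0 + k * np.arange(1, n_bins + 1))
    return source * filter_bank[fingering]

recording = synthesize(7, 0.6, 1.25) + rng.normal(0, 1e-3, n_bins)

best = min(((np.sum((synthesize(f, p, k) - recording) ** 2), f, p, k)
            for f in range(n_fingerings)
            for p in pressures for k in stiffnesses))
print("fingering, pressure, stiffness:", best[1:])  # recovers (7, 0.6, 1.25)
```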

  19. Radiation-force-based Estimation of Acoustic Attenuation Using Harmonic Motion Imaging (HMI) in Phantoms and in vitro Livers Before and After HIFU Ablation

    PubMed Central

    Chen, Jiangang; Hou, Gary Y.; Marquet, Fabrice; Han, Yang; Camarena, Francisco

    2015-01-01

    Acoustic attenuation represents the energy loss of the propagating wave through biological tissues and plays a significant role in both therapeutic and diagnostic ultrasound applications. Estimation of acoustic attenuation remains challenging but critical for tissue characterization. In this study, an attenuation estimation approach was developed using the radiation-force-based method of Harmonic Motion Imaging (HMI). 2D tissue displacement maps were acquired by moving the transducer in a raster-scan format. A linear regression model was applied on the logarithm of the HMI displacements at different depths in order to estimate the acoustic attenuation. Commercially available phantoms with known attenuations (n=5) and in vitro canine livers (n=3) were tested, as well as HIFU lesions in in vitro canine livers (n=5). Results demonstrated that attenuations obtained from the phantoms showed a good correlation (R2=0.976) with the independently obtained values reported by the manufacturer with an estimation error (compared to the values independently measured) varying within the range of 15-35%. The estimated attenuation in the in vitro canine livers was equal to 0.32±0.03 dB/cm/MHz, which is in good agreement with the existing literature. The attenuation in HIFU lesions was found to be higher (0.58±0.06 dB/cm/MHz) than that in normal tissues, also in agreement with the results from previous publications. Future potential applications of the proposed method include estimation of attenuation in pathological tissues before and after thermal ablation. PMID:26371501
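    A sketch of the regression step described above, on idealized data: fit a line to the logarithm of displacement versus depth and convert the decay slope to dB/cm/MHz. The one-way decay geometry and the 4.5 MHz frequency are assumptions for illustration, not the paper's processing chain; the factor 8.686 converts nepers to decibels.

```python
import numpy as np

# Attenuation from the slope of log(HMI displacement) versus depth.
freq_MHz = 4.5
depth_cm = np.linspace(0.5, 4.0, 15)
alpha_true = 0.32 * freq_MHz / 8.686            # Np/cm for 0.32 dB/cm/MHz
disp = 1e-6 * np.exp(-alpha_true * depth_cm)    # idealized displacement decay

slope, _ = np.polyfit(depth_cm, np.log(disp), 1)
alpha_dB = -slope * 8.686 / freq_MHz            # back to dB/cm/MHz
print(f"estimated attenuation: {alpha_dB:.2f} dB/cm/MHz")
```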

  20. Methods for the quantitative comparison of molecular estimates of clade age and the fossil record.

    PubMed

    Clarke, Julia A; Boyd, Clint A

    2015-01-01

    Approaches quantifying the relative congruence, or incongruence, of molecular divergence estimates and the fossil record have been limited. Previously proposed methods are largely node specific, assessing incongruence at particular nodes for which both fossil data and molecular divergence estimates are available. These existing metrics, and other methods that quantify incongruence across topologies including entirely extinct clades, have so far not taken into account uncertainty surrounding both the divergence estimates and the ages of fossils. They have also treated molecular divergence estimates younger than previously assessed fossil minimum estimates of clade age as if they were the same as cases in which they were older. However, these cases are not the same. Recovered divergence dates younger than compared oldest known occurrences require prior hypotheses regarding the phylogenetic position of the compared fossil record and standard assumptions about the relative timing of morphological and molecular change to be incorrect. Older molecular dates, by contrast, are consistent with an incomplete fossil record and do not require prior assessments of the fossil record to be unreliable in some way. Here, we compare previous approaches and introduce two new descriptive metrics. Both metrics explicitly incorporate information on uncertainty by utilizing the 95% confidence intervals on estimated divergence dates and data on stratigraphic uncertainty concerning the age of the compared fossils. Metric scores are maximized when these ranges are overlapping. MDI (minimum divergence incongruence) discriminates between situations where molecular estimates are younger or older than known fossils reporting both absolute fit values and a number score for incompatible nodes. DIG range (divergence implied gap range) allows quantification of the minimum increase in implied missing fossil record induced by enforcing a given set of molecular-based estimates. These metrics are used together to describe the relationship between time trees and a set of fossil data, which we recommend be phylogenetically vetted and referred on the basis of apomorphy. Differences from previously proposed metrics and the utility of MDI and DIG range are illustrated in three empirical case studies from angiosperms, ostracods, and birds. These case studies also illustrate the ways in which MDI and DIG range may be used to assess time trees resultant from analyses varying in calibration regime, divergence dating approach or molecular sequence data analyzed. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
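    The interval logic behind such metrics can be illustrated in a few lines: a node scores as congruent when the 95% CI of its molecular date overlaps the stratigraphic age range of the compared fossil, and otherwise receives a signed gap, negative when the molecular estimate is younger than the fossil minimum. This is a simplified illustration of that logic, not the authors' exact MDI formula.

```python
# Compare a molecular divergence CI with a fossil's stratigraphic age range.
def incongruence(mol_ci, fossil_range):
    mol_lo, mol_hi = mol_ci            # Ma, younger..older
    fos_lo, fos_hi = fossil_range
    if mol_hi >= fos_lo and fos_hi >= mol_lo:
        return 0.0                     # ranges overlap: congruent
    if mol_hi < fos_lo:                # molecular estimate too young
        return mol_hi - fos_lo         # negative: requires fossil misplacement
    return mol_lo - fos_hi             # positive: consistent with missing record

print(incongruence((60.0, 72.0), (66.0, 68.0)))   # 0.0: compatible node
print(incongruence((50.0, 60.0), (66.0, 68.0)))   # -6.0: incompatible node
```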

  1. Estimating envelope thermal characteristics from single point in time thermal images

    NASA Astrophysics Data System (ADS)

    Alshatshati, Salahaldin Faraj

    Energy efficiency programs implemented nationally in the U.S. by utilities have rendered savings which have cost on average $0.03/kWh. This cost is still well below generation costs. However, as the lowest-cost energy efficiency measures are adopted, the cost effectiveness of further investment declines. Thus there is a need to find the greatest opportunities for savings regionally and nationally more effectively, so that the greatest cost effectiveness in implementing energy efficiency can be achieved. Integral to this process are at-scale energy audits. However, on-site building energy audits are expensive, in the range of US$1.29/m2-$5.37/m2, and there are an insufficient number of professionals to perform the audits. Energy audits that can be conducted at scale and at low cost are needed. Research is presented that addresses community-wide characterization of building envelope thermal characteristics via drive-by and fly-over GPS-linked thermal imaging. A central question drives this research: Can single point-in-time thermal images be used to infer U-values and thermal capacitances of walls and roofs? Previous efforts to use thermal images to estimate U-values have been limited to rare steady exterior weather conditions. The approaches posed here are based upon the development of two models. The first is a dynamic model of a building envelope component with unknown U-value and thermal capacitance. The weather conditions prior to the thermal image are used as inputs to the model, which is solved for the exterior surface temperature, ultimately predicting the temperature at the time of the thermal measurement. The model U-value and thermal capacitance are tuned in order to force the error between the predicted surface temperature and the surface temperature measured by thermal imaging to be near zero. This model is developed simply to show that such a model cannot be relied upon to accurately estimate the U-value. The second is a data-based methodology. This approach integrates the exterior surface temperature measurements, historical utility data, and easily accessible or potentially easily accessible housing data. A Random Forest model is developed from a training subset of residences for which the envelope U-value is known. This model is used to predict the envelope U-value for a validation set of houses with unknown U-value. Demonstrated is an ability to estimate wall and roof U-values with R-squared values of 0.97 and 0.96, respectively, using as few as 9 and 24 training houses for wall and ceiling U-value estimation, respectively. The implication of this research is significant, offering the possibility of auditing residences remotely at scale via aerial and drive-by thermal imaging.
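    A sketch of the data-based pathway on synthetic data, assuming a scikit-learn Random Forest: the feature set (surface temperature, utility use, construction year), the generating relationship, and the sample sizes are placeholders of our own, not the dissertation's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# Train on houses with known envelope U-values, predict the rest.
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.normal(8.0, 2.0, n),       # exterior surface temperature, C
    rng.normal(900.0, 150.0, n),   # monthly gas use, kWh
    rng.integers(1950, 2015, n),   # construction year
])
u_true = 0.2 + 0.05 * X[:, 0] + 0.0004 * X[:, 1] + rng.normal(0, 0.05, n)

train, test = slice(0, 24), slice(24, None)   # as few as ~24 training houses
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X[train], u_true[train])
print("R^2 on held-out houses:",
      round(r2_score(u_true[test], model.predict(X[test])), 2))
```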

  2. Estimating the returns to United Kingdom publicly funded musculoskeletal disease research in terms of net value of improved health outcomes.

    PubMed

    Glover, Matthew; Montague, Erin; Pollitt, Alexandra; Guthrie, Susan; Hanney, Stephen; Buxton, Martin; Grant, Jonathan

    2018-01-10

    Building on an approach applied to cardiovascular and cancer research, we estimated the economic returns from United Kingdom public- and charitable-funded musculoskeletal disease (MSD) research that arise from the net value of the improved health outcomes in the United Kingdom. To calculate the economic returns from MSD-related research in the United Kingdom, we estimated (1) the public and charitable expenditure on MSD-related research in the United Kingdom between 1970 and 2013; (2) the net monetary benefit (NMB), derived from the health benefit in quality adjusted life years (QALYs) valued in monetary terms (using a base-case value of a QALY of £25,000) minus the cost of delivering that benefit, for a prioritised list of interventions from 1994 to 2013; (3) the proportion of NMB attributable to United Kingdom research; and (4) the elapsed time between research funding and health gain. The data collected from these four key elements were used to estimate the internal rate of return (IRR) from MSD-related research investments on health benefits. We analysed the uncertainties in the IRR estimate using a one-way sensitivity analysis. Expressed in 2013 prices, total expenditure on MSD-related research from 1970 to 2013 was £3.5 billion, and for the period used to estimate the rate of return, 1978-1997, was £1.4 billion. Over the period 1994-2013 the key interventions analysed produced 871,000 QALYs with a NMB of £16 billion, allowing for the net NHS costs resulting from them and valuing a QALY at £25,000. The proportion of benefit attributable to United Kingdom research was 30% and the elapsed time between funding and impact of MSD treatments was 16 years. Our best estimate of the IRR from MSD-related research was 7%, which is similar to the 9% for CVD and 10% for cancer research. Our estimate of the IRR from the net health gain to public and charitable funding of MSD-related research in the United Kingdom is substantial, and justifies the research investments made between 1978 and 1997. We also demonstrated the applicability of the approach previously used in assessing the returns from cardiovascular and cancer research. Inevitably, with a study of this kind, there are a number of important assumptions and caveats that we highlight, and these can inform future research.
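    The IRR here is the discount rate at which the net present value of the combined stream (research spending as outflows, net monetary benefit as inflows) is zero. Below is a minimal bisection solver with an invented cash flow for illustration; the study's actual streams span 1978-1997 spending and 1994-2013 health gains.

```python
# Internal rate of return: the rate r at which NPV(r, cashflows) = 0.
def npv(rate, cashflows):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-8):
    # Assumes outflows come first, so NPV decreases as the rate rises.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

flows = [-100, -100, 30, 60, 90, 120]   # spend first, benefits later (toy)
print(f"IRR = {irr(flows):.1%}")
```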

  3. Weak-value amplification as an optimal metrological protocol

    NASA Astrophysics Data System (ADS)

    Alves, G. Bié; Escher, B. M.; de Matos Filho, R. L.; Zagury, N.; Davidovich, L.

    2015-06-01

    The implementation of weak-value amplification requires the pre- and postselection of states of a quantum system, followed by the observation of the response of the meter, which interacts weakly with the system. Data acquisition from the meter is conditioned to successful postselection events. Here we derive an optimal postselection procedure for estimating the coupling constant between system and meter and show that it leads both to weak-value amplification and to the saturation of the quantum Fisher information, under conditions fulfilled by all previously reported experiments on the amplification of weak signals. For most of the preselected states, full information on the coupling constant can be extracted from the meter data set alone, while for a small fraction of the space of preselected states, it must be obtained from the postselection statistics.

  4. Statistical Survey of Persistent Organic Pollutants: Risk Estimations to Humans and Wildlife through Consumption of Fish from U.S. Rivers.

    PubMed

    Batt, Angela L; Wathen, John B; Lazorchak, James M; Olsen, Anthony R; Kincaid, Thomas M

    2017-03-07

    U.S. EPA conducted a national statistical survey of fish tissue contamination at 540 river sites (representing 82 954 river km) in 2008-2009, and analyzed samples for 50 persistent organic pollutants (POPs), including 21 PCB congeners, 8 PBDE congeners, and 21 organochlorine pesticides. The survey results were used to provide national estimates of contamination for these POPs. PCBs were the most abundant, being measured in 93.5% of samples. Summed concentrations of the 21 PCB congeners had a national weighted mean of 32.7 μg/kg and a maximum concentration of 857 μg/kg, and exceeded the human health cancer screening value of 12 μg/kg in 48% of the national sampled population of river km, and in 70% of the urban sampled population. PBDEs (92.0%), chlordane (88.5%) and DDT (98.7%) were also detected frequently, although at lower concentrations. Results were examined by subpopulations of rivers, including urban or nonurban and three defined ecoregions. PCBs, PBDEs, and DDT occur at significantly higher concentrations in fish from urban rivers versus nonurban; however, the distribution varied more among the ecoregions. Wildlife screening values previously published for bird and mammalian species were converted from whole fish to fillet screening values, and used to estimate risk for wildlife through fish consumption.

  5. Estimation of Risk of Normal-tissue Toxicity Following Gastric Cancer Radiotherapy with Photon- or Scanned Proton-beams.

    PubMed

    Mondlane, Gracinda; Ureba, Ana; Gubanski, Michael; Lind, Pehr A; Siegbahn, Albert

    2018-05-01

    Gastric cancer (GC) radiotherapy involves irradiation of large tumour volumes located in the proximities of critical structures. The advantageous dose distributions produced by scanned-proton beams could reduce the irradiated volumes of the organs at risk (OARs). However, treatment-induced side-effects may still appear. The aim of this study was to estimate the normal tissue complication probability (NTCP) following proton therapy of GC, compared to photon radiotherapy. Eight GC patients, previously treated with volumetric-modulated arc therapy (VMAT), were retrospectively planned with scanned proton beams carried out with the single-field uniform-dose (SFUD) method. A beam-specific planning target volume was used for spot positioning and a clinical target volume (CTV) based robust optimisation was performed considering setup- and range-uncertainties. The dosimetric and NTCP values obtained with the VMAT and SFUD plans were compared. With SFUD, lower or similar dose-volume values were obtained for OARs, compared to VMAT. NTCP values of 0% were determined with the VMAT and SFUD plans for all OARs (p>0.05), except for the left kidney (p<0.05), for which lower toxicity was estimated with SFUD. The NTCP reduction, determined for the left kidney with SFUD, can be of clinical relevance for preserving renal function after radiotherapy of GC. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
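    The abstract does not state which NTCP model was used, so as a generic illustration here is the widely used Lyman-Kutcher-Burman (LKB) form, which maps a dose-volume histogram to a complication probability through the generalized equivalent uniform dose. The kidney-like parameter values and the toy DVH are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import norm

def gEUD(doses_Gy, volumes, n=0.7):
    """Generalized equivalent uniform dose from a differential DVH."""
    v = np.asarray(volumes) / np.sum(volumes)
    return np.sum(v * np.asarray(doses_Gy) ** (1.0 / n)) ** n

def ntcp_lkb(doses_Gy, volumes, TD50=28.0, m=0.3, n=0.7):
    """LKB model: NTCP = Phi((gEUD - TD50) / (m * TD50))."""
    t = (gEUD(doses_Gy, volumes, n) - TD50) / (m * TD50)
    return norm.cdf(t)

# Toy DVH: most of the organ at low dose, a small high-dose tail.
print(round(ntcp_lkb([5, 10, 20, 30], [0.5, 0.3, 0.15, 0.05]), 3))
```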

  6. Human body area factors for radiation exchange analysis: standing and walking postures

    NASA Astrophysics Data System (ADS)

    Park, Sookuk; Tuller, Stanton E.

    2011-09-01

    Effective radiation area factors (f_eff) and projected area factors (f_p) of unclothed Caucasians' standing and walking postures used in estimating human radiation exchange with the surrounding environment were determined from a sample of adults in Canada. Several three-dimensional (3D) computer body models were created for standing and walking postures. Only small differences in f_eff and f_p values for standing posture were found between gender (male or female) and body type (normal- or over-weight). Differences between this study and previous studies were much larger: ≤0.173 in f_p and ≤0.101 in f_eff. Directionless f_p values for walking posture also had only minor differences between genders and positions in a stride. However, the differences of mean directional f_p values of the positions dependent on azimuth angles were large enough, ≤0.072, to create important differences in modeled radiation receipt. Differences in f_eff values were small: 0.02 between the normal-weight male and female models and up to 0.033 between positions in a stride. Variations of directional f_p values depending on solar altitudes for walking posture were narrower than those for standing posture. When both standing and walking postures are considered, the mean f_eff value, 0.836, of standing (0.826) and walking (0.846) could be used. However, f_p values should be selected carefully because differences between directional and directionless f_p values were large enough that they could influence the estimated level of human thermal sensation.

  7. Geodesic regression for image time-series.

    PubMed

    Niethammer, Marc; Huang, Yang; Vialard, François-Xavier

    2011-01-01

    Registration of image time-series has so far been accomplished (i) by concatenating registrations between image pairs, (ii) by solving a joint estimation problem resulting in piecewise geodesic paths between image pairs, (iii) by kernel-based local averaging, or (iv) by augmenting the joint estimation with additional temporal irregularity penalties. Here, we propose a generative model extending least squares linear regression to the space of images by using a second-order dynamic formulation for image registration. Unlike previous approaches, the formulation allows for a compact representation of an approximation to the full spatio-temporal trajectory through its initial values. The method also opens up possibilities to design image-based approximation algorithms. The resulting optimization problem is solved using an adjoint method.

  8. Sulfur concentration of mare basalts at sulfide saturation at high pressures and temperatures-Implications for S in the lunar mantle

    NASA Astrophysics Data System (ADS)

    Ding, S.; Hough, T.; Dasgupta, R.

    2016-12-01

    Low estimates of S in the bulk silicate moon (BSM) [e.g., 1] suggest that sulfide in the lunar mantle is likely exhausted during melting. This agrees with estimates of HSE depletion in the BSM [2], but challenges the S-rich core proposed by previous studies [e.g., 3]. A key parameter to constrain the fate of sulfide during mantle melting is the sulfur carrying capacity of the mantle melts (SCSS). However, the SCSS of variably high-Ti lunar basalts at high P-T is unknown. Basalt-sulfide melt equilibria experiments were run in graphite capsules using a piston cylinder at 1.0-2.5 GPa and 1400-1600 °C, on high-Ti (Apollo 11, 11.1 wt.%; [4]) and intermediate-Ti (Luna 16, 5 wt.%; [5]) mare basalts. At 1.5 GPa, the SCSS of Apollo 11 increases from 3940 ppm S to 5860 ppm as temperature increases from 1400 °C to 1600 °C; at 1500 °C, the SCSS decreases from 5350 ppm S to 3830 ppm as pressure increases from 1 to 2.5 GPa. The SCSS of Luna 16 shows a similar P-T dependence. Previous models [e.g., 6] tend to overestimate the SCSS values determined in our study, with the model overprediction increasing with increasing melt TiO2. Consequently, we derive a new SCSS parameterization for high-FeO* silicate melts of variable TiO2 content. At multiple saturation points [e.g., 7], the SCSS of primary lunar melts is 3500-5500 ppm. With these values, 0.02-0.05 wt.% sulfide (70-200 ppm S) in the mantle can be consumed by 2-6% melting. In order to generate primary lunar basalts with S of 800-1000 ppm [1], sulfide in the mantle must be exhausted, and the mode of sulfide cannot exceed 0.025 wt.% (100 ppm S). This estimate corresponds with lower-end values in the terrestrial mantle and further agrees with previous calculations of HSE depletion in the BSM [2]. [1] Hauri et al., 2015, EPSL; [2] Day et al., 2007, Science; [3] Jing et al., 2014, EPSL; [4] Snyder et al., 1992, GCA; [5] Warren & Taylor, 2014, Treatise on Geochemistry; [6] Li & Ripley, 2009, Econ. Geol.; [7] Krawczynski & Grove, 2012, GCA.

  9. Carbon isotope and abundance systematics of Icelandic geothermal gases, fluids and subglacial basalts with implications for mantle plume-related CO2 fluxes

    NASA Astrophysics Data System (ADS)

    Barry, P. H.; Hilton, D. R.; Füri, E.; Halldórsson, S. A.; Grönvold, K.

    2014-06-01

    We report new carbon dioxide (CO2) abundance and isotope data for 71 geothermal gases and fluids from both high-temperature (HT > 150 °C at 1 km depth) and low-temperature (LT < 150 °C at 1 km depth) geothermal systems located within neovolcanic zones and older segments of the Icelandic crust, respectively. These data are supplemented by CO2 data obtained by stepped heating of 47 subglacial basaltic glasses collected from the neovolcanic zones. The sample suite has been characterized previously for He-Ne (geothermal) and He-Ne-Ar (basalt) systematics (Füri et al., 2010), allowing elemental ratios to be calculated for individual samples. Geothermal fluids are characterized by a wide range in carbon isotope ratios (δ13C), from -18.8‰ to +4.6‰ (vs. VPDB), and CO2/3He values that span eight orders of magnitude, from 1 × 10^4 to 2 × 10^12. Extreme geothermal values suggest that original source compositions have been extensively modified by hydrothermal processes such as degassing and/or calcite precipitation. Basaltic glasses are also characterized by a wide range in δ13C values, from -27.2‰ to -3.6‰, whereas CO2/3He values span a narrower range, from 1 × 10^8 to 1 × 10^12. The combination of both low δ13C values and low CO2 contents in basalts indicates that magmas are extensively and variably degassed. Using an equilibrium degassing model, we estimate that pre-eruptive basaltic melts beneath Iceland contain ∼531 ± 64 ppm CO2 with δ13C values of -2.5 ± 1.1‰, in good agreement with estimates from olivine-hosted melt inclusions (Metrich et al., 1991) and depleted MORB mantle (DMM) CO2 source estimates (Marty, 2012). In addition, pre-eruptive CO2 compositions are estimated for individual segments of the Icelandic axial rift zones, and show a marked decrease from north to south (Northern Rift Zone = 550 ± 66 ppm; Eastern Rift Zone = 371 ± 45 ppm; Western Rift Zone = 206 ± 24 ppm). Notably, these results are model dependent, and selection of a lower δ13C fractionation factor will result in lower source estimates and larger uncertainties associated with the initial δ13C estimate. Degassing can adequately explain low CO2 contents in basalts; however, degassing alone is unlikely to generate the entire spectrum of observed δ13C variations, and we suggest that melt-crust interaction, involving a low δ13C component, may also contribute to observed signatures. Using representative samples, the CO2 flux from Iceland is estimated using three independent methods: (1) combining measured CO2/3He values (in gases and basalts) with 3He flux estimates (Hilton et al., 1990), (2) merging basaltic emplacement rates of Iceland with pre-eruptive magma source estimates of ∼531 ± 64 ppm CO2, and (3) combining fluid CO2 contents with estimated regional fluid discharge rates. These methods yield CO2 flux estimates of 0.2-23 × 10^10 mol a-1, which represent ∼0.1-10% of the estimated global ridge flux (2.2 × 10^12 mol a-1; Marty and Tolstikhin, 1998).

  10. Estimating mineral abundances of clay and gypsum mixtures using radiative transfer models applied to visible-near infrared reflectance spectra

    NASA Astrophysics Data System (ADS)

    Robertson, K. M.; Milliken, R. E.; Li, S.

    2016-10-01

    Quantitative mineral abundances of lab derived clay-gypsum mixtures were estimated using a revised Hapke VIS-NIR and Shkuratov radiative transfer model. Montmorillonite-gypsum mixtures were used to test the effectiveness of the model in distinguishing between subtle differences in minor absorption features that are diagnostic of mineralogy in the presence of strong H2O absorptions that are not always diagnostic of distinct phases or mineral abundance. The optical constants (k-values) for both endmembers were determined from bi-directional reflectance spectra measured in RELAB as well as on an ASD FieldSpec3 in a controlled laboratory setting. Multiple size fractions were measured in order to derive a single k-value from optimization of the optical path length in the radiative transfer models. It is shown that with careful experimental conditions, optical constants can be accurately determined from powdered samples using a field spectrometer, consistent with previous studies. Variability in the montmorillonite hydration level increased the uncertainties in the derived k-values, but estimated modal abundances for the mixtures were still within 5% of the measured values. Results suggest that the Hapke model works well in distinguishing between hydrated phases that have overlapping H2O absorptions and it is able to detect gypsum and montmorillonite in these simple mixtures where they are present at levels of ∼10%. Care must be taken however to derive k-values from a sample with appropriate H2O content relative to the modeled spectra. These initial results are promising for the potential quantitative analysis of orbital remote sensing data of hydrated minerals, including more complex clay and sulfate assemblages such as mudstones examined by the Curiosity rover in Gale crater.

  11. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  12. Characterization of classical static noise via qubit as probe

    NASA Astrophysics Data System (ADS)

    Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif

    2018-03-01

    The dynamics of quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for the QFI fixes the optimal initial state of the qubit that maximizes it. An approximate limit for the coupling time that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for the QFI, the qubit is used as a probe to precisely estimate the disorder parameter of the environment. A relation for the optimal interaction time with the environment is obtained, and a condition for the optimal measurement of the noise parameter is given. It is shown that all values of the noise parameter in the mentioned range are estimable with equal precision. A comparison of our results with previous studies in different classical environments is made.

  13. New constraints on Mars rotation determined from radiometric tracking of the Opportunity Mars Exploration Rover

    NASA Astrophysics Data System (ADS)

    Kuchynka, Petr; Folkner, William M.; Konopliv, Alex S.; Parker, Timothy J.; Park, Ryan S.; Le Maistre, Sebastien; Dehant, Veronique

    2014-02-01

    The Opportunity Mars Exploration Rover remained stationary between January and May 2012 in order to conserve solar energy for running its survival heaters during martian winter. While stationary, extra Doppler tracking was performed in order to allow an improved estimate of the martian precession rate. In this study, we determine Mars rotation by combining the new Opportunity tracking data with historic tracking data from the Viking and Pathfinder landers and tracking data from Mars orbiters (Mars Global Surveyor, Mars Odyssey and Mars Reconnaissance Orbiter). The estimated rotation parameters are stable in cross-validation tests and compare well with previously published values. In particular, the Mars precession rate is estimated to be -7606.1 ± 3.5 mas/yr. A representation of Mars rotation as a series expansion based on the determined rotation parameters is provided.

  14. Revised budget for the oceanic uptake of anthropogenic carbon dioxide

    USGS Publications Warehouse

    Sarmiento, J.L.; Sundquist, E.T.

    1992-01-01

    Tracer-calibrated models of the total uptake of anthropogenic CO2 by the world's oceans give estimates of about 2 gigatonnes of carbon per year [1], significantly larger than a recent estimate [2] of 0.3-0.8 Gt C yr-1 for the synoptic air-to-sea CO2 influx. Although both estimates require that the global CO2 budget must be balanced by a large unknown terrestrial sink, the latter estimate implies a much larger terrestrial sink, and challenges the ocean model calculations on which previous CO2 budgets were based. The discrepancy is due in part to the net flux of carbon to the ocean by rivers and rain, which must be added to the synoptic air-to-sea CO2 flux to obtain the total oceanic uptake of anthropogenic CO2. Here we estimate the magnitude of this correction and of several other recently proposed adjustments to the synoptic air-sea CO2 exchange. These combined adjustments minimize the apparent inconsistency, and restore estimates of the terrestrial sink to values implied by the modelled oceanic uptake.

  15. Estimation of glycaemic control in the past month using ratio of glycated albumin to HbA1c.

    PubMed

    Musha, I; Mochizuki, M; Kikuchi, T; Akatsuka, J; Ohtake, A; Kobayashi, K; Kikuchi, N; Kawamura, T; Yokota, I; Urakami, T; Sugihara, S; Amemiya, S

    2018-04-13

    To evaluate comprehensively the use of the glycated albumin to HbA1c ratio for estimation of glycaemic control in the previous month. A total of 306 children with Type 1 diabetes mellitus underwent ≥10 simultaneous measurements of glycated albumin and HbA1c. Correlation and concordance rates were examined between the change in HbA1c measurements taken 1 month apart (ΔHbA1c) and fluctuations in the glycated albumin/HbA1c ratio, calculated either as Z-scores against the cohort value at enrolment (method A) or as the percent difference from the individual mean over time (method B). Fluctuations in the glycated albumin/HbA1c ratio (using both methods) were weakly but significantly correlated with ΔHbA1c, whereas concordance rates were significant for glycaemic deterioration but not for glycaemic improvement. Concordance rates were higher using method B than method A. The glycated albumin/HbA1c ratio was able to estimate glycaemic deterioration in the previous month, while estimation of glycaemic improvement in the preceding month was limited. Because method B provided a better estimate of recent glycaemic control than method A, the individual mean of several measurements of the glycated albumin/HbA1c ratio over time may also identify individuals with high or low haemoglobin glycation phenotypes in a given population, such as Japanese children with Type 1 diabetes, thereby allowing more effective diabetes management. © 2018 Diabetes UK.
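    The two ways the abstract quantifies ratio fluctuation can be sketched side by side: method A expresses a visit's ratio as a Z-score against the cohort at enrolment, method B as the percent difference from that child's own long-term mean. The numbers below are invented for illustration.

```python
import numpy as np

# Method A: Z-score against the cohort at enrolment.
# Method B: percent difference from the individual's own mean over time.
cohort_ratios_at_enrolment = np.array([2.8, 3.1, 2.9, 3.3, 3.0, 2.7, 3.2])
child_history = np.array([3.0, 3.1, 2.9, 3.2, 3.0])   # one child's past visits
ratio_today = 3.4

z_a = ((ratio_today - cohort_ratios_at_enrolment.mean())
       / cohort_ratios_at_enrolment.std(ddof=1))
pct_b = 100.0 * (ratio_today - child_history.mean()) / child_history.mean()
print(f"method A: Z = {z_a:+.2f};  method B: {pct_b:+.1f}% vs individual mean")
```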

  16. On Estimating End-to-End Network Path Properties

    NASA Technical Reports Server (NTRS)

    Allman, Mark; Paxson, Vern

    1999-01-01

    The more information about current network conditions available to a transport protocol, the more efficiently it can use the network to transfer its data. In networks such as the Internet, the transport protocol must often form its own estimates of network properties based on measurements performed by the connection endpoints. We consider two basic transport estimation problems: determining the setting of the retransmission timer (RTO) for a reliable protocol, and estimating the bandwidth available to a connection as it begins. We look at both of these problems in the context of TCP, using a large TCP measurement set [Pax97b] for trace-driven simulations. For RTO estimation, we evaluate a number of different algorithms, finding that the performance of the estimators is dominated by their minimum values, and to a lesser extent, the timer granularity, while being virtually unaffected by how often round-trip time measurements are made or the settings of the parameters in the exponentially-weighted moving average estimators commonly used. For bandwidth estimation, we explore techniques previously sketched in the literature [Hoe96, AD98] and find that in practice they perform less well than anticipated. We then develop a receiver-side algorithm that performs significantly better.
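    The EWMA estimator family evaluated here is the classic Jacobson form later codified in RFC 6298: a smoothed RTT plus four times the smoothed deviation, floored at a minimum value. In the sketch below (conventional parameter values, not the paper's full evaluation grid), the floor dominating the output is exactly the behavior the study reports.

```python
# EWMA-based RTO estimation: RTO = max(min_rto, SRTT + max(G, 4 * RTTVAR)).
def make_rto_estimator(alpha=0.125, beta=0.25, min_rto=1.0, granularity=0.1):
    srtt, rttvar = None, None
    def update(rtt_sample):
        nonlocal srtt, rttvar
        if srtt is None:                       # first measurement
            srtt, rttvar = rtt_sample, rtt_sample / 2.0
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt_sample)
            srtt = (1 - alpha) * srtt + alpha * rtt_sample
        return max(min_rto, srtt + max(granularity, 4.0 * rttvar))
    return update

rto = make_rto_estimator()
for sample in (0.20, 0.22, 0.19, 0.35, 0.21):
    # With sub-second RTTs the 1 s floor dominates every returned RTO.
    print(f"RTT {sample:.2f}s -> RTO {rto(sample):.3f}s")
```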

  17. Infant mortality in the Marshall Islands.

    PubMed

    Levy, S J; Booth, H

    1988-12-01

    Levy and Booth present previously unpublished infant mortality rates for the Marshall Islands. They use an indirect method to estimate infant mortality from the 1973 and 1980 censuses, then apply indirect and direct methods of estimation to data from the Marshall Islands Women's Health Survey of 1985. Comparing the results with estimates of infant mortality obtained from vital registration data enables them to estimate the extent of underregistration of infant deaths. The authors conclude that the 1973 census appears to be the most valid information source. Direct estimates from the Women's Health Survey data suggest that infant mortality has increased since 1970-1974, whereas the indirect estimates indicate a decreasing trend in infant mortality rates, converging with the direct estimates in more recent years. In view of increased efforts to improve maternal and child health in the mid-1970s, the decreasing trend is plausible. It is impossible to estimate accurately infant mortality in the Marshall Islands during 1980-1984 from the available data. Estimates based on registration data for 1975-1979 are at least 40% too low. The authors speculate that the estimate of 33 deaths per 1000 live births obtained from registration data for 1984 is 40-50% too low. In round figures, a value of 60 deaths per 1000 may be taken as the final estimate for 1980-1984.

  18. Reduced carbon emission estimates from fossil fuel combustion and cement production in China.

    PubMed

    Liu, Zhu; Guan, Dabo; Wei, Wei; Davis, Steven J; Ciais, Philippe; Bai, Jin; Peng, Shushi; Zhang, Qiang; Hubacek, Klaus; Marland, Gregg; Andres, Robert J; Crawford-Brown, Douglas; Lin, Jintai; Zhao, Hongyan; Hong, Chaopeng; Boden, Thomas A; Feng, Kuishuang; Peters, Glen P; Xi, Fengming; Liu, Junguo; Li, Yuan; Zhao, Yu; Zeng, Ning; He, Kebin

    2015-08-20

    Nearly three-quarters of the growth in global carbon emissions from the burning of fossil fuels and cement production between 2010 and 2012 occurred in China. Yet estimates of Chinese emissions remain subject to large uncertainty; inventories of China's total fossil fuel carbon emissions in 2008 differ by 0.3 gigatonnes of carbon, or 15 per cent. The primary sources of this uncertainty are conflicting estimates of energy consumption and emission factors, the latter being uncertain because of very few actual measurements representative of the mix of Chinese fuels. Here we re-evaluate China's carbon emissions using updated and harmonized energy consumption and clinker production data and two new and comprehensive sets of measured emission factors for Chinese coal. We find that total energy consumption in China was 10 per cent higher in 2000-2012 than the value reported by China's national statistics, that emission factors for Chinese coal are on average 40 per cent lower than the default values recommended by the Intergovernmental Panel on Climate Change, and that emissions from China's cement production are 45 per cent less than recent estimates. Altogether, our revised estimate of China's CO2 emissions from fossil fuel combustion and cement production is 2.49 gigatonnes of carbon (2 standard deviations = ±7.3 per cent) in 2013, which is 14 per cent lower than the emissions reported by other prominent inventories. Over the full period 2000 to 2013, our revised estimates are 2.9 gigatonnes of carbon less than previous estimates of China's cumulative carbon emissions. Our findings suggest that overestimation of China's emissions in 2000-2013 may be larger than China's estimated total forest sink in 1990-2007 (2.66 gigatonnes of carbon) or China's land carbon sink in 2000-2009 (2.6 gigatonnes of carbon).

  19. Reduced carbon emission estimates from fossil fuel combustion and cement production in China

    NASA Astrophysics Data System (ADS)

    Liu, Zhu; Guan, Dabo; Wei, Wei; Davis, Steven J.; Ciais, Philippe; Bai, Jin; Peng, Shushi; Zhang, Qiang; Hubacek, Klaus; Marland, Gregg; Andres, Robert J.; Crawford-Brown, Douglas; Lin, Jintai; Zhao, Hongyan; Hong, Chaopeng; Boden, Thomas A.; Feng, Kuishuang; Peters, Glen P.; Xi, Fengming; Liu, Junguo; Li, Yuan; Zhao, Yu; Zeng, Ning; He, Kebin

    2015-08-01

    Nearly three-quarters of the growth in global carbon emissions from the burning of fossil fuels and cement production between 2010 and 2012 occurred in China. Yet estimates of Chinese emissions remain subject to large uncertainty; inventories of China's total fossil fuel carbon emissions in 2008 differ by 0.3 gigatonnes of carbon, or 15 per cent. The primary sources of this uncertainty are conflicting estimates of energy consumption and emission factors, the latter being uncertain because of very few actual measurements representative of the mix of Chinese fuels. Here we re-evaluate China's carbon emissions using updated and harmonized energy consumption and clinker production data and two new and comprehensive sets of measured emission factors for Chinese coal. We find that total energy consumption in China was 10 per cent higher in 2000-2012 than the value reported by China's national statistics, that emission factors for Chinese coal are on average 40 per cent lower than the default values recommended by the Intergovernmental Panel on Climate Change, and that emissions from China's cement production are 45 per cent less than recent estimates. Altogether, our revised estimate of China's CO2 emissions from fossil fuel combustion and cement production is 2.49 gigatonnes of carbon (2 standard deviations = +/-7.3 per cent) in 2013, which is 14 per cent lower than the emissions reported by other prominent inventories. Over the full period 2000 to 2013, our revised estimates are 2.9 gigatonnes of carbon less than previous estimates of China's cumulative carbon emissions. Our findings suggest that overestimation of China's emissions in 2000-2013 may be larger than China's estimated total forest sink in 1990-2007 (2.66 gigatonnes of carbon) or China's land carbon sink in 2000-2009 (2.6 gigatonnes of carbon).

  20. An extended stochastic method for seismic hazard estimation

    NASA Astrophysics Data System (ADS)

    Abd el-aal, A. K.; El-Eraki, M. A.; Mostafa, S. I.

    2015-12-01

    In this contribution, we developed an extended stochastic technique for seismic hazard assessment purposes. The technique builds on the stochastic method of Boore (2003, "Simulation of ground motion using the stochastic method", Pure Appl. Geophys. 160:635-676). The essential aim of the extended stochastic technique is to obtain and simulate ground motion in order to minimize future earthquake consequences. The first step of this technique is defining the seismic sources which most affect the study area. Then, the maximum expected magnitude is defined for each of these seismic sources. This is followed by estimating the ground motion using an empirical attenuation relationship. Finally, the site amplification is implemented in calculating the peak ground acceleration (PGA) at each site of interest. We tested and applied this developed technique at Cairo, Suez, Port Said, Ismailia, Zagazig and Damietta cities to predict the ground motion. It is also applied at Cairo, Zagazig and Damietta cities to estimate the maximum peak ground acceleration at actual soil conditions. In addition, 0.5, 1, 5, 10 and 20 % damping median response spectra are estimated using the extended stochastic simulation technique. The calculated highest acceleration values at bedrock conditions are found at Suez city, with a value of 44 cm s-2. However, these acceleration values decrease towards the north of the study area to reach 14.1 cm s-2 at Damietta city. This agrees with, and is comparable to, the results of previous seismic hazard studies in northern Egypt. This work can be used for seismic risk mitigation and earthquake engineering purposes.

  1. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V

    PubMed Central

    Zamba, Gideon K. D.; Artes, Paul H.

    2018-01-01

    Purpose It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). Methods In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Results Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Conclusions Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher. PMID:29356822
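    The censoring analysis reduces to one line, followed by whichever progression statistic is of interest; below, a simple linear regression of MD over time stands in for the third technique. The field data and the crude MD surrogate are synthetic assumptions, not the Iowa study data.

```python
import numpy as np

# Censor thresholds below the criterion, then re-run the progression fit.
rng = np.random.default_rng(3)
years = np.arange(0, 4.5, 0.5)                 # visits every 6 months, 4 years
field = rng.normal(25, 6, (len(years), 54))    # 54 test locations, dB
field -= 0.8 * years[:, None]                  # a progressing eye

censored = np.maximum(field, 20.0)             # set values < 20 dB to 20 dB
for label, data in (("raw", field), ("censored", censored)):
    md = data.mean(axis=1)                     # crude MD surrogate
    slope = np.polyfit(years, md, 1)[0]
    print(f"{label:9s} MD slope: {slope:+.2f} dB/yr")
```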

  2. FPGA Accelerated Discrete-SURF for Real-Time Homography Estimation

    DTIC Science & Technology

    2015-03-26

    allows for the sum of a group of pixels to be found with only four memory accesses and a single calculation... of pixels are retrieved from memory and their Hessian determinant values are compared. If the center pixel of the 3x3 block is greater than the other... processing on the FPGA [5][24][31]. Third, previous approaches rely heavily on external memory and other components external to the FPGA, while a logic...
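    The four-access box sum the first fragment refers to is the standard integral-image (summed-area table) trick used by SURF. Since the report's design is hardware, the sketch below shows only the arithmetic, in Python for illustration.

```python
import numpy as np

# With an integral image ii, the sum over any rectangle needs only its four
# corner values: ii[y2,x2] - ii[y1-1,x2] - ii[y2,x1-1] + ii[y1-1,x1-1].
def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y1, x1, y2, x2):
    """Inclusive-corner rectangle sum via four lookups."""
    total = ii[y2, x2]
    if y1 > 0: total -= ii[y1 - 1, x2]
    if x1 > 0: total -= ii[y2, x1 - 1]
    if y1 > 0 and x1 > 0: total += ii[y1 - 1, x1 - 1]
    return total

img = np.arange(25).reshape(5, 5)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 3) == img[1:4, 1:4].sum()
print(box_sum(ii, 1, 1, 3, 3))
```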

  3. Satellite estimates of precipitation susceptibility in low-level marine stratiform clouds

    DOE PAGES

    Terai, C. R.; Wood, R.; Kubar, T. L.

    2015-09-05

    Quantifying the sensitivity of warm rain to aerosols is important for constraining climate model estimates of aerosol indirect effects. In this study, the precipitation sensitivity to cloud droplet number concentration (Nd) in satellite retrievals is quantified by applying the precipitation susceptibility metric to a combined CloudSat/Moderate Resolution Imaging Spectroradiometer data set of stratus and stratocumulus clouds that cover the tropical and subtropical Pacific Ocean and Gulf of Mexico. We note that consistent with previous observational studies of marine stratocumulus, precipitation susceptibility decreases with increasing liquid water path (LWP), and the susceptibility of the mean precipitation rate R is nearly equal to the sum of the susceptibilities of precipitation intensity and of probability of precipitation. Consistent with previous modeling studies, the satellite retrievals reveal that precipitation susceptibility varies not only with LWP but also with Nd. Puzzlingly, negative values of precipitation susceptibility are found at low LWP and high Nd. There is marked regional variation in precipitation susceptibility values that cannot simply be explained by regional variations in LWP and Nd. This suggests other controls on precipitation apart from LWP and Nd and that precipitation susceptibility will need to be quantified and understood at the regional scale when relating to its role in controlling possible aerosol-induced cloud lifetime effects.
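    The susceptibility metric itself is usually defined as S = -d ln R / d ln Nd evaluated within a fixed LWP bin, which can be estimated by least squares. A sketch on synthetic data (the toy power law and noise are ours, not the CloudSat/MODIS retrievals):

```python
import numpy as np

# Precipitation susceptibility: negative slope of ln(rain rate) against
# ln(droplet number) within one liquid-water-path bin.
rng = np.random.default_rng(7)
Nd = rng.uniform(20, 300, 500)                             # droplets, cm^-3
R = 0.1 * Nd**-0.8 * np.exp(rng.normal(0, 0.2, Nd.size))   # toy rain rate law

slope = np.polyfit(np.log(Nd), np.log(R), 1)[0]
print(f"S_R = -dlnR/dlnNd = {-slope:.2f}")                 # recovers ~0.8
```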

  4. Dynamics of Black Band Disease in a Diploria strigosa population subjected to annual upwelling on the northeastern coast of Venezuela

    NASA Astrophysics Data System (ADS)

    Rodríguez, S.; Cróquer, A.

    2008-06-01

    Temporal variability of Black Band Disease (BBD) prevalence, incidence, recurrence, recovery and virulence was estimated in a Diploria strigosa population from an upwelling zone of Venezuela, for 1 year between August 2004 and August 2005. The sampling spanned both upwelling and non-upwelling seasons, and included three samplings, roughly 60 days apart, within each season. The negative effects of BBD epizootiology in the sampling population (El Mercado reef) were positively correlated with sea surface temperature (taken as an upwelling estimator). Disease prevalence, incidence and recurrence decreased significantly during upwelling, and the recovery rate increased. Contrary to expectations, tissue mortality did not decrease significantly during the upwelling season, remaining at 1.2 ± 0.7 mm day-1. BBD prevalence, and the ensuing rates of tissue mortality were higher than values previously reported for other Caribbean reefs, even during upwelling episodes, suggesting that nutrient enrichment of the local waters by upwelling counteracts the expected reductions of the disease prevalence and virulence due to the lower temperature. Colonies which had previously been infected with BBD were up to six times more susceptible to new infections than those which were not infected during the preceding 7 months, suggesting that the infected colonies never healed completely. The high variability between tissue mortality values among coral colonies also suggests that overall host health-status may alter susceptibility to BBD infections.

  5. Extending unified-theory-of-reinforcement neural networks to steady-state operant behavior.

    PubMed

    Calvin, Olivia L; McDowell, J J

    2016-06-01

    The unified theory of reinforcement has been used to develop models of behavior over the last 20 years (Donahoe et al., 1993). Previous research has focused on the theory's concordance with the respondent behavior of humans and animals. In this experiment, neural networks were developed from the theory to extend the unified theory of reinforcement to operant behavior on single-alternative variable-interval schedules. This area of operant research was selected because previously developed neural networks could be applied to it without significant alteration. Previous research with humans and animals indicates that the pattern of their steady-state behavior is hyperbolic when plotted against the obtained rate of reinforcement (Herrnstein, 1970). A genetic algorithm was used in the first part of the experiment to determine parameter values for the neural networks, because values that were used in previous research did not result in a hyperbolic pattern of behavior. After finding these parameters, hyperbolic and other similar functions were fitted to the behavior produced by the neural networks. The form of the neural network's behavior was best described by an exponentiated hyperbola (McDowell, 1986; McLean and White, 1983; Wearden, 1981), which was derived from the generalized matching law (Baum, 1974). In post-hoc analyses the addition of a baseline rate of behavior significantly improved the fit of the exponentiated hyperbola and removed systematic residuals. The form of this function was consistent with human and animal behavior, but the estimated parameter values were not. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Predicting gaseous emissions from small-scale combustion of agricultural biomass fuels.

    PubMed

    Fournel, S; Marcos, B; Godbout, S; Heitz, M

    2015-03-01

    A prediction model of gaseous emissions (CO, CO2, NOx, SO2 and HCl) from small-scale combustion of agricultural biomass fuels was developed in order to rapidly assess their potential to be burned in accordance with current environmental threshold values. The model was established based on calculation of thermodynamic equilibrium of reactive multicomponent systems using Gibbs free energy minimization. Since this method has been widely used to estimate the composition of the syngas from wood gasification, the model was first validated by comparing its prediction results with those of similar models from the literature. The model was then used to evaluate the main gas emissions from the combustion of four dedicated energy crops (short-rotation willow, reed canary grass, switchgrass and miscanthus) previously burned in a 29-kW boiler. The prediction values revealed good agreement with the experimental results. The model was particularly effective in estimating the influence of harvest season on SO2 emissions. Copyright © 2014 Elsevier Ltd. All rights reserved.
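
    As a sketch of the Gibbs free energy minimization step, the toy example below finds the equilibrium composition of a three-species ideal-gas system under element-balance constraints. The species set and the dimensionless standard chemical potentials are placeholders, not the thermodynamic data used in the study:

        import numpy as np
        from scipy.optimize import minimize

        # Toy equilibrium of a C/O ideal-gas mixture of CO, CO2 and O2 at
        # fixed temperature and 1 atm. mu0_RT holds dimensionless standard
        # chemical potentials mu0/(RT); the values are placeholders only.
        species = ["CO", "CO2", "O2"]
        mu0_RT = np.array([-60.0, -95.0, -27.0])     # placeholder values
        A = np.array([[1, 1, 0],                     # C atoms per molecule
                      [1, 2, 2]])                    # O atoms per molecule
        b = np.array([1.0, 2.5])                     # total moles of C and O

        def gibbs(n):
            n = np.clip(n, 1e-12, None)
            return float(np.sum(n * (mu0_RT + np.log(n / n.sum()))))

        res = minimize(gibbs, x0=np.array([0.5, 0.5, 0.5]), method="SLSQP",
                       bounds=[(1e-12, None)] * 3,
                       constraints={"type": "eq", "fun": lambda n: A @ n - b})
        print(dict(zip(species, np.round(res.x, 4))))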

  7. Dielectric and modulus analysis of the photoabsorber Cu2SnS3

    NASA Astrophysics Data System (ADS)

    Lahlali, S.; Essaleh, L.; Belaqziz, M.; Chehouani, H.; Alimoussa, A.; Djessas, K.; Viallet, B.; Gauffier, J. L.; Cayez, S.

    2017-12-01

    The dielectric properties of the ternary semiconductor compound Cu2SnS3 are studied for the first time in the high-temperature range from 300 °C to 440 °C over the frequency range 1 kHz to 1 MHz. The dielectric constant ε′ and dielectric loss tan(δ) were observed to increase with temperature and to decrease rapidly with frequency before remaining constant at high frequencies. The variation of the dielectric loss ln(ε″) with ln(ω) was found to follow the empirical law ε″ = Bω^m(T). The dielectric data were analyzed using the complex electrical modulus M* at various temperatures. The activation energy responsible for the relaxation is estimated from the analysis of the modulus spectra. The value of the hopping barrier potential is estimated from the dielectric loss and compared with the value previously obtained from ac conductivity. These results are critical for understanding the behavior of the Cu2SnS3-based polycrystalline family as absorber materials in solar cells.
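
    The complex electrical modulus analysis mentioned here uses M* = 1/ε*, so M′ = ε′/(ε′² + ε″²) and M″ = ε″/(ε′² + ε″²). A small sketch of that conversion (variable names and test values are illustrative):

        import numpy as np

        def electrical_modulus(eps_real, eps_imag):
            # M* = 1 / eps*  =>  M' + iM'' with the usual sign convention.
            denom = eps_real**2 + eps_imag**2
            return eps_real / denom, eps_imag / denom

        M_real, M_imag = electrical_modulus(np.array([50.0, 20.0]),
                                            np.array([5.0, 2.0]))
        print(M_real, M_imag)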

  8. Experimental study of influence characteristics of flue gas fly ash on acid dew point

    NASA Astrophysics Data System (ADS)

    Song, Jinhui; Li, Jiahu; Wang, Shuai; Yuan, Hui; Ren, Zhongqiang

    2017-12-01

    Long-term operating experience with a large number of utility boilers shows that the measured acid dew point is generally lower than the estimated value, because the influence of the CaO and MgO in flue gas fly ash is not considered in the acid dew point estimation formula. On the basis of previous studies, an experimental device for acid dew point measurement was designed and constructed, and the acid dew point was measured under different flue gas conditions. The results show that the CaO and MgO in the flue gas fly ash have an obvious influence on the acid dew point: their content in the fly ash is negatively correlated with the acid dew point temperature. At the same time, the acid dew point varies with, and is positively correlated with, the H2SO4 concentration in the flue gas.

  9. Nonstationary multivariate modeling of cerebral autoregulation during hypercapnia.

    PubMed

    Kostoglou, Kyriaki; Debert, Chantel T; Poulin, Marc J; Mitsis, Georgios D

    2014-05-01

    We examined the time-varying characteristics of cerebral autoregulation and hemodynamics during a step hypercapnic stimulus by using recursively estimated multivariate (two-input) models which quantify the dynamic effects of mean arterial blood pressure (ABP) and end-tidal CO2 tension (PETCO2) on middle cerebral artery blood flow velocity (CBFV). Beat-to-beat values of ABP and CBFV, as well as breath-to-breath values of PETCO2 during baseline and sustained euoxic hypercapnia, were obtained in 8 female subjects. The multiple-input, single-output models used were based on the Laguerre expansion technique, and their parameters were updated using recursive least squares with multiple forgetting factors. The results reveal the presence of nonstationarities that confirm previously reported effects of hypercapnia on autoregulation, i.e. a decrease in the ABP phase lead, and suggest that the incorporation of PETCO2 as an additional model input yields less time-varying estimates of dynamic pressure autoregulation than those obtained from single-input (ABP-CBFV) models. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
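
    A minimal sketch of a recursive least squares update with a forgetting factor, the estimator family this record refers to (this single-factor version is a simplification of the study's multiple-forgetting-factor scheme; the test signal is illustrative):

        import numpy as np

        def rls_step(theta, P, x, y, lam=0.98):
            # One recursive least squares update; lam < 1 discounts old data,
            # which is what lets the estimates track nonstationary dynamics.
            Px = P @ x
            k = Px / (lam + x @ Px)                # gain vector
            theta = theta + k * (y - x @ theta)    # parameter update
            P = (P - np.outer(k, Px)) / lam        # covariance update
            return theta, P

        theta, P = np.zeros(2), 100.0 * np.eye(2)
        for t in range(200):
            x = np.array([1.0, np.sin(0.1 * t)])
            theta, P = rls_step(theta, P, x, y=2.0 + 0.5 * x[1])
        print(theta)   # converges toward [2.0, 0.5]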

  10. A Global Emission Inventory of Black Carbon and Primary Organic Carbon from Fossil-Fuel and Biofuel Combustion

    NASA Astrophysics Data System (ADS)

    Bond, T. C.; Streets, D. G.; Nelson, S. M.

    2001-12-01

    Regional and global climate models rely on emission inventories of black carbon and organic carbon to determine the climatic effects of primary particulate matter (PM) from combustion. The emission of primary carbonaceous particles is highly dependent on fuel type and combustion practice. Therefore, simple categories such as "domestic" or "industrial" combustion are not sufficient to quantify emissions, and the black-carbon and organic-carbon fractions of PM vary with combustion type. We present a global inventory of primary carbonaceous particles that improves on previous "bottom-up" tabulations (e.g. Cooke et al., 1999) by considering approximately 100 technologies, each representing one combination of fuel, combustion type, and emission controls. For fossil-fuel combustion, we include several categories not found in previous inventories, including "superemitting" and two-stroke vehicles and steel-making. We also include emissions from waste burning and biofuels used for heating and cooking. Open biomass burning is not included. Fuel use, drawn from International Energy Agency (IEA) and United Nations (UN) data, is divided into technologies on a regional basis. We suggest that emissions in developing countries are better characterized by including high-emitting technologies than by invoking emission multipliers. Due to lack of information on emission factors and technologies in use, uncertainties are high. We estimate central values and uncertainties by combining the range of emission factors found in the literature with reasonable estimates of technology divisions. We provide regional totals of central, low and high estimates, identify the sources of greatest uncertainty to be targeted for future work, and compare our results with previous emission inventories. Both central estimates and uncertainties are given on a 1° x 1° grid. As we have reported previously for the case of China (Streets et al., 2001), low-technology combustion contributes greatly to the emissions and to the uncertainties.
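
    The bottom-up structure described here reduces, per technology, to emissions = fuel use x emission factor, summed over technologies. A toy sketch with invented numbers (the real inventory spans roughly 100 technologies with regional fuel splits):

        # Bottom-up inventory: emissions = sum over technologies of
        # (fuel use) x (emission factor). All numbers are invented.
        fuel_use = {"diesel_onroad": 120.0, "coal_industry": 300.0}  # PJ/yr
        ef_bc = {"diesel_onroad": 0.18, "coal_industry": 0.05}       # Gg BC/PJ
        bc_total = sum(fuel_use[t] * ef_bc[t] for t in fuel_use)
        print(f"BC emissions: {bc_total:.1f} Gg/yr")                 # 36.6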

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carle, S F

    Compositional data are represented as vector variables with individual vector components ranging between zero and a positive maximum value representing a constant sum constraint, usually unity (or 100 percent). The earth sciences are flooded with spatial distributions of compositional data, such as concentrations of major ion constituents in natural waters (e.g. mole, mass, or volume fractions), mineral percentages, ore grades, or proportions of mutually exclusive categories (e.g. a water-oil-rock system). While geostatistical techniques have become popular in earth science applications since the 1970s, very little attention has been paid to the unique mathematical properties of geostatistical formulations involving compositional variables. The book 'Geostatistical Analysis of Compositional Data' by Vera Pawlowsky-Glahn and Ricardo Olea (Oxford University Press, 2004), unlike any previous book on geostatistics, directly confronts the mathematical difficulties inherent to applying geostatistics to compositional variables. The book righteously justifies itself with prodigious referencing to previous work addressing nonsensical ranges of estimated values and error, spurious correlation, and singular cross-covariance matrices.

  12. An approach to get thermodynamic properties from speed of sound

    NASA Astrophysics Data System (ADS)

    Núñez, M. A.; Medina, L. A.

    2017-01-01

    An approach for estimating thermodynamic properties of gases from the speed of sound u is proposed. The square u², the compression factor Z and the molar heat capacity at constant volume CV are connected by two coupled nonlinear partial differential equations. Previous approaches to solving this system differ in the conditions used on the range of temperature values [Tmin, Tmax]. In this work we propose the use of Dirichlet boundary conditions at Tmin and Tmax. The virial series of the compression factor, Z = 1 + Bρ + Cρ² + …, and of other properties reduces the problem to the solution of a recursive set of linear ordinary differential equations for the virial coefficients B, C. Analytic solutions of the B equation for argon are used to study the stability of our approach and of previous ones under perturbation errors of the input data. The results show that the approach yields B with a relative error bounded basically by that of the boundary values, whereas the error of other approaches can be some orders of magnitude larger.

  13. Nuclear DNA Amounts in Angiosperms: Progress, Problems and Prospects

    PubMed Central

    BENNETT, M. D.; LEITCH, I. J.

    2005-01-01

    CONTENTS: Introduction; Progress — Improved systematic representation (species and families): (i) First estimates for species, (ii) First estimates for families; Problems — Geographical representation and distribution; Plant life form; Obsolescence time bomb; Errors and inexactitudes; Genome size, ‘complete’ genome sequencing, and the euchromatic genome; The completely sequenced genome; Weeding out erroneous data; What is the smallest reliable C-value for an angiosperm?; What is the minimum C-value for a free-living angiosperm and other free-living organisms?; Prospects for the next ten years — Holistic genomics; Literature cited; Appendix — Notes to the Appendix; Original references for DNA values. • Background The nuclear DNA amount in an unreplicated haploid chromosome complement (1C-value) is a key diversity character with many uses. Angiosperm C-values have been listed for reference purposes since 1976, and pooled in an electronic database since 1997 (http://www.kew.org/cval/homepage). Such lists are cited frequently and provide data for many comparative studies. The last compilation was published in 2000, so a further supplementary list is timely to monitor progress against targets set at the first plant genome size workshop in 1997 and to facilitate new goal setting. • Scope The present work lists DNA C-values for 804 species including first values for 628 species from 88 original sources, not included in any previous compilation, plus additional values for 176 species included in a previous compilation. • Conclusions 1998–2002 saw striking progress in our knowledge of angiosperm C-values. At least 1700 first values for species were measured (the most in any five-year period) and familial representation rose from 30 % to 50 %. The loss of many densitometers used to measure DNA C-values proved less serious than feared, owing to the development of relatively inexpensive flow cytometers and computer-based image analysis systems. New uses of the term genome (e.g. in ‘complete’ genome sequencing) can cause confusion. The Arabidopsis Genome Initiative C-value for Arabidopsis thaliana (125 Mb) was a gross underestimate, and an exact C-value based on genome sequencing alone is unlikely to be obtained soon for any angiosperm. Lack of this expected benchmark poses a quandary as to what to use as the basal calibration standard for angiosperms. The next decade offers exciting prospects for angiosperm genome size research. The database (http://www.kew.org/cval/homepage) should become sufficiently representative of the global flora to answer most questions without needing new estimations. DNA amount variation will remain a key interest as an integrated strand of holistic genomics. PMID:15596457

  14. Updated folate data in the Dutch Food Composition Database and implications for intake estimates

    PubMed Central

    Westenbrink, Susanne; Jansen-van der Vliet, Martine; van Rossum, Caroline

    2012-01-01

    Background and objective Nutrient values are influenced by the analytical method used. Food folate measured by high performance liquid chromatography (HPLC) or by microbiological assay (MA) yields different results, with, in general, higher results from MA than from HPLC. This raises the question of how to deal with different analytical methods in compiling standardised and internationally comparable food composition databases. A recent inventory of folate in European food composition databases indicated that MA is currently more widely used than HPLC. Since older Dutch values were produced by HPLC and newer values by MA, the analytical methods and procedures for compiling folate data in the Dutch Food Composition Database (NEVO) were reconsidered and the folate values were updated. This article describes the impact of this revision of folate values in the NEVO database as well as the expected impact on the folate intake assessment in the Dutch National Food Consumption Survey (DNFCS). Design The folate values were revised by replacing HPLC with MA values from recent Dutch analyses. Previously, MA folate values taken from foreign food composition tables had been recalculated to the HPLC level, assuming a 27% lower value from HPLC analyses. These recalculated values were replaced by the original MA values. Dutch HPLC and MA values were compared to each other. Folate intake was assessed for a subgroup within the DNFCS to estimate the impact of the update. Results In the updated NEVO database nearly all folate values were produced by MA or derived from MA values, which resulted in an average increase of 24%. The median habitual folate intake in young children increased by 11–15% using the updated folate values. Conclusion The current approach for folate in NEVO resulted in more transparency in data production and documentation and higher comparability among European databases. Results of food consumption surveys are expected to show higher folate intakes when using the updated values. PMID:22481900

  15. High population density of black-handed spider monkeys (Ateles geoffroyi) in Costa Rican lowland wet forest.

    PubMed

    Weghorst, Jennifer A

    2007-04-01

    The main objective of this study was to estimate the population density and demographic structure of spider monkeys living in wet forest in the vicinity of Sirena Biological Station, Corcovado National Park, Costa Rica. Results of a 14-month line-transect survey showed that spider monkeys of Sirena have one of the highest population densities ever recorded for this genus. Density estimates varied, however, depending on the method chosen to estimate transect width. Data from behavioral monitoring were available to compare density estimates derived from the survey, providing a check of the survey's accuracy. A combination of factors has most probably contributed to the high density of Ateles, including habitat protection within a national park and high diversity of trees of the fig family, Moraceae. Although natural densities of spider monkeys at Sirena are substantially higher than those recorded at most other sites and in previous studies at this site, mean subgroup size and age ratios were similar to those determined in previous studies. Sex ratios were similar to those of other sites with high productivity. Although high densities of preferred fruit trees in the wet, productive forests of Sirena may support a dense population of spider monkeys, other demographic traits recorded at Sirena fall well within the range of values recorded elsewhere for the species.
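
    Line-transect surveys of this kind conventionally estimate density as D = n / (2wL), where n is the number of detections, L the total transect length and w the effective strip half-width; the abstract's point that estimates vary with the chosen transect width can be seen directly (numbers below are invented):

        # Conventional line-transect estimator: D = n / (2 * w * L), with n
        # detections over total transect length L and effective half-width w.
        # Numbers are invented to show the sensitivity to the choice of w.
        n, L = 180, 25.0                      # detections, km of transect
        for w in (0.015, 0.025):              # two plausible half-widths, km
            print(f"w = {w * 1000:.0f} m -> D = {n / (2 * w * L):.0f} ind./km^2")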

  16. The dynamic and indirect spatial effects of neighborhood conditions on land value, spatial panel dynamic econometrics model

    NASA Astrophysics Data System (ADS)

    Fitriani, Rahma; Sumarminingsih, Eni; Astutik, Suci

    2017-05-01

    Land value is the product of past decisions about land use, as well as of the value of the surrounding land. It is also affected by local characteristics and by the spillover development demand of the previous time period. The effect of each factor on land value has dynamic and spatial dimensions, so a spatial panel dynamic model is used to estimate these effects. The model is useful for predicting future land value or the effect of an implemented policy on land value. The objective of this paper is to derive the dynamic and indirect spatial marginal effects of local characteristics and spillover development demand on land value. Each effect is the partial derivative of the expected land value, based on the spatial dynamic model, with respect to the corresponding variable, considering different time periods and locations. The results indicate that an instantaneous change in local or neighborhood characteristics affects the local and immediate-neighborhood land values; the longer the change persists, the further its effect spreads beyond the immediate neighborhood.

  17. Refining new-physics searches in B→Dτν with lattice QCD.

    PubMed

    Bailey, Jon A; Bazavov, A; Bernard, C; Bouchard, C M; Detar, C; Du, Daping; El-Khadra, A X; Foley, J; Freeland, E D; Gámiz, E; Gottlieb, Steven; Heller, U M; Kim, Jongjeong; Kronfeld, A S; Laiho, J; Levkova, L; Mackenzie, P B; Meurice, Y; Neil, E T; Oktay, M B; Qiu, Si-Wei; Simone, J N; Sugar, R; Toussaint, D; Van de Water, R S; Zhou, Ran

    2012-08-17

    The semileptonic decay channel B→Dτν is sensitive to the presence of a scalar current, such as that mediated by a charged-Higgs boson. Recently, the BABAR experiment reported the first observation of the exclusive semileptonic decay B→Dτ(-)ν, finding an approximately 2σ disagreement with the standard-model prediction for the ratio R(D)=BR(B→Dτν)/BR(B→Dℓν), where ℓ = e,μ. We compute this ratio of branching fractions using hadronic form factors computed in unquenched lattice QCD and obtain R(D)=0.316(12)(7), where the errors are statistical and total systematic, respectively. This result is the first standard-model calculation of R(D) from ab initio full QCD. Its error is smaller than that of previous estimates, primarily due to the reduced uncertainty in the scalar form factor f(0)(q(2)). Our determination of R(D) is approximately 1σ higher than previous estimates and, thus, reduces the tension with experiment. We also compute R(D) in models with electrically charged scalar exchange, such as the type-II two-Higgs-doublet model. Once again, our result is consistent with, but approximately 1σ higher than, previous estimates for phenomenologically relevant values of the scalar coupling in the type-II model. As a by-product of our calculation, we also present the standard-model prediction for the longitudinal-polarization ratio P(L)(D)=0.325(4)(3).

  18. Physicians' prescribing preferences were a potential instrument for patients' actual prescriptions of antidepressants☆

    PubMed Central

    Davies, Neil M.; Gunnell, David; Thomas, Kyla H.; Metcalfe, Chris; Windmeijer, Frank; Martin, Richard M.

    2013-01-01

    Objectives To investigate whether physicians' prescribing preferences were valid instrumental variables for the antidepressant prescriptions they issued to their patients. Study Design and Setting We investigated whether physicians' previous prescriptions of (1) tricyclic antidepressants (TCAs) vs. selective serotonin reuptake inhibitors (SSRIs) and (2) paroxetine vs. other SSRIs were valid instruments. We investigated whether the instrumental variable assumptions are likely to hold and whether TCAs (vs. SSRIs) were associated with hospital admission for self-harm or death by suicide using both conventional and instrumental variable regressions. The setting for the study was general practices in the United Kingdom. Results Prior prescriptions were strongly associated with actual prescriptions: physicians who previously prescribed TCAs were 14.9 percentage points (95% confidence interval [CI], 14.4, 15.4) more likely to prescribe TCAs, and those who previously prescribed paroxetine were 27.7 percentage points (95% CI, 26.7, 28.8) more likely to prescribe paroxetine, to their next patient. Physicians' previous prescriptions were less strongly associated with patients' baseline characteristics than actual prescriptions. We found no evidence that the estimated association of TCAs with self-harm/suicide using instrumental variable regression differed from conventional regression estimates (P-value = 0.45). Conclusion The main instrumental variable assumptions held, suggesting that physicians' prescribing preferences are valid instruments for evaluating the short-term effects of antidepressants. PMID:24075596
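
    A minimal numerical sketch of the instrumental-variable logic, with the physician's previous prescription as a binary instrument for the actual prescription; with a single binary instrument this two-stage least squares estimate coincides with the Wald estimator. The data and effect sizes are simulated, not the study's:

        import numpy as np

        def two_stage_least_squares(y, d, z):
            # Stage 1: regress exposure d on instrument z; Stage 2: regress
            # the outcome y on the fitted exposure.
            Z = np.column_stack([np.ones_like(z), z])
            d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
            X = np.column_stack([np.ones_like(d_hat), d_hat])
            return np.linalg.lstsq(X, y, rcond=None)[0][1]

        rng = np.random.default_rng(0)
        z = rng.integers(0, 2, 5000).astype(float)  # physician previously prescribed TCA
        d = (0.3 * z + rng.uniform(size=5000) > 0.6).astype(float)
        y = 0.0 * d + rng.normal(size=5000)         # simulated null treatment effect
        print(two_stage_least_squares(y, d, z))     # ~0, as simulated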

  19. Evaluation of questionnaire-based information on previous physical work loads. Stockholm MUSIC 1 Study Group. Musculoskeletal Intervention Center.

    PubMed

    Torgén, M; Winkel, J; Alfredsson, L; Kilbom, A

    1999-06-01

    The principal aim of the present study was to evaluate questionnaire-based information on past physical work loads (6-year recall). Effects of memory difficulties on reproducibility were evaluated for 82 subjects by comparing previously reported results on current work loads (test-retest procedure) with the same items recalled 6 years later. Validity was assessed by comparing self-reports in 1995, regarding work loads in 1989, with worksite measurements performed in 1989. Six-year reproducibility, calculated as weighted kappa coefficients (k(w)), varied between 0.36 and 0.86, with the highest values for the proportion of the workday spent sitting and for perceived general exertion, and the lowest values for trunk and neck flexion. The six-year reproducibility results were similar to previously reported test-retest results for these items; this finding indicates that memory difficulties were a minor problem. The validity of the questionnaire responses, expressed as rank correlations (r(s)) between the questionnaire responses and workplace measurements, varied between -0.16 and 0.78. The highest values were obtained for the items sitting and repetitive work, and the lowest and "unacceptable" values were for head rotation and neck flexion. Misclassification of exposure did not appear to be differential with regard to musculoskeletal symptom status, as judged by the calculated risk estimates. The validity of some of these self-administered questionnaire items appears sufficient for a crude assessment of physical work loads in the past in epidemiologic studies of the general population with predominantly low levels of exposure.

  20. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met. Previously published data are used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, it appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions: To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226
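
    A toy arithmetic reading of the estimation step (see the caveat in the code: it assumes amplification acts as a multiplicative factor A on the bias, so the shift between nested models is (A - 1) x bias; the paper's exact amplification definition and the outcome-association adjustment are omitted):

        # Toy reading of the ACCE estimation step, assuming amplification
        # acts as a multiplicative factor A on the residual confounding
        # bias B, so the estimate shifts by delta = (A - 1) * B between the
        # nested propensity score models. All numbers are hypothetical.
        est_base, est_amplified = 1.30, 1.42  # effect estimates, nested PS models
        A = 1.6                               # assumed amplification factor
        B = (est_amplified - est_base) / (A - 1.0)
        print(f"Estimated residual confounding: {B:.2f}")   # 0.20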

  1. Uplift rates of marine terraces as a constraint on fault-propagation fold kinematics: Examples from the Hawkswood and Kate anticlines, North Canterbury, New Zealand

    NASA Astrophysics Data System (ADS)

    Oakley, David O. S.; Fisher, Donald M.; Gardner, Thomas W.; Stewart, Mary Kate

    2018-01-01

    Marine terraces on growing fault-propagation folds provide valuable insight into the relationship between fold kinematics and uplift rates, providing a means to distinguish among otherwise non-unique kinematic model solutions. Here, we investigate this relationship at two locations in North Canterbury, New Zealand: the Kate anticline and Haumuri Bluff, at the northern end of the Hawkswood anticline. At both locations, we calculate uplift rates of previously dated marine terraces, using DGPS surveys to estimate terrace inner edge elevations. We then use Markov chain Monte Carlo methods to fit fault-propagation fold kinematic models to structural geologic data, and we incorporate marine terrace uplift into the models as an additional constraint. At Haumuri Bluff, we find that marine terraces, when restored to originally horizontal surfaces, can help to eliminate certain trishear models that would fit the geologic data alone. At Kate anticline, we compare uplift rates at different structural positions and find that the spatial pattern of uplift rates is more consistent with trishear than with a parallel fault-propagation fold kink-band model. Finally, we use our model results to compute new estimates for fault slip rates (~1-2 m/ka at Kate anticline and ~1-4 m/ka at Haumuri Bluff) and ages of the folds (~1 Ma), which are consistent with previous estimates for the onset of folding in this region. These results provide revised estimates of the fault slip rates needed to understand the seismic hazard posed by these faults and demonstrate the value of incorporating marine terraces in inverse fold kinematic models as a means to distinguish among non-unique solutions.

  2. Dimensionless Numbers Expressed in Terms of Common CVD Process Parameters

    NASA Technical Reports Server (NTRS)

    Kuczmarski, Maria A.

    1999-01-01

    A variety of dimensionless numbers related to momentum and heat transfer are useful in Chemical Vapor Deposition (CVD) analysis. These numbers are not traditionally calculated by directly using reactor operating parameters, such as temperature and pressure. In this paper, these numbers have been expressed in a form that explicitly shows their dependence upon the carrier gas, reactor geometry, and reactor operation conditions. These expressions were derived for both monatomic and diatomic gases using estimation techniques for viscosity, thermal conductivity, and heat capacity. Values calculated from these expressions compared well to previously published values. These expressions provide a relatively quick method for predicting changes in the flow patterns resulting from changes in the reactor operating conditions.
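
    As an illustration of computing such a number directly from operating conditions, the sketch below evaluates a Reynolds number using the ideal-gas law for density and a Sutherland-type viscosity estimate (the constants shown are typical tabulated values for N2, a common carrier gas; all inputs are illustrative):

        def reynolds_from_conditions(T, P, v, d, M=28.01e-3,
                                     mu_ref=1.663e-5, T_ref=273.15, S=107.0):
            # Re = rho*v*d/mu, with rho from the ideal-gas law and mu(T)
            # from Sutherland's correlation (constants here are for N2).
            R = 8.314                                  # J/(mol K)
            rho = P * M / (R * T)                      # kg/m^3
            mu = mu_ref * (T / T_ref)**1.5 * (T_ref + S) / (T + S)
            return rho * v * d / mu

        # e.g. 900 K, 0.1 atm, 0.1 m/s gas velocity, 5 cm channel height
        print(reynolds_from_conditions(T=900.0, P=101325 / 10, v=0.1, d=0.05))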

  3. Influence of Iterative Reconstruction Algorithms on PET Image Resolution

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package; reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on an aluminum (Al) foil substrate, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE)-OSMAPOSL, ordered subsets separable paraboloidal surrogate (OSSPS), median root prior (MRP) and OSMAPOSL-with-quadratic-prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves when lower beta values are used. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
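
    MTF estimation from a plane (line) source typically reduces to Fourier-transforming the measured line spread function and normalizing at zero frequency. A minimal sketch under that standard assumption (the pixel size and the Gaussian test profile are illustrative):

        import numpy as np

        def mtf_from_lsf(lsf, pixel_mm):
            # MTF = normalized magnitude of the Fourier transform of the LSF.
            lsf = lsf / lsf.sum()
            mtf = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)   # cycles/mm
            return freqs, mtf / mtf[0]

        lsf = np.exp(-0.5 * (np.arange(-32, 32) / 3.0)**2)  # synthetic Gaussian LSF
        freqs, mtf = mtf_from_lsf(lsf, pixel_mm=2.3)
        print(freqs[:4], mtf[:4])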

  4. Estimating material viscoelastic properties based on surface wave measurements: A comparison of techniques and modeling assumptions

    PubMed Central

    Royston, Thomas J.; Dai, Zoujun; Chaunsali, Rajesh; Liu, Yifei; Peng, Ying; Magin, Richard L.

    2011-01-01

    Previous studies of the first author and others have focused on low audible frequency (<1 kHz) shear and surface wave motion in and on a viscoelastic material comprised of or representative of soft biological tissue. A specific case considered has been surface (Rayleigh) wave motion caused by a circular disk located on the surface and oscillating normal to it. Different approaches to identifying the type and coefficients of a viscoelastic model of the material based on these measurements have been proposed. One approach has been to optimize coefficients in an assumed viscoelastic model type to match measurements of the frequency-dependent Rayleigh wave speed. Another approach has been to optimize coefficients in an assumed viscoelastic model type to match the complex-valued frequency response function (FRF) between the excitation location and points at known radial distances from it. In the present article, the relative merits of these approaches are explored theoretically, computationally, and experimentally. It is concluded that matching the complex-valued FRF may provide a better estimate of the viscoelastic model type and parameter values; though, as the studies herein show, there are inherent limitations to identifying viscoelastic properties based on surface wave measurements. PMID:22225067

  5. Quantitative T1 and T2* carotid atherosclerotic plaque imaging using a three-dimensional multi-echo phase-sensitive inversion recovery sequence: a feasibility study.

    PubMed

    Fujiwara, Yasuhiro; Maruyama, Hirotoshi; Toyomaru, Kanako; Nishizaka, Yuri; Fukamatsu, Masahiro

    2018-06-01

    Magnetic resonance imaging (MRI) is widely used to detect carotid atherosclerotic plaques. Although it is important to evaluate vulnerable carotid plaques containing lipids and intra-plaque hemorrhages (IPHs) using T1-weighted images, the image contrast changes depending on the imaging settings. Moreover, to distinguish between a thrombus and a hemorrhage, it is useful to evaluate the iron content of the plaque using both T1-weighted and T2*-weighted images. Therefore, a quantitative evaluation of carotid atherosclerotic plaques using T1 and T2* values may be necessary for the accurate evaluation of plaque components. The purpose of this study was to determine whether the multi-echo phase-sensitive inversion recovery (mPSIR) sequence can improve T1 contrast while simultaneously providing accurate T1 and T2* values of an IPH. T1 and T2* values measured using mPSIR were compared to values from conventional methods in phantom and in vivo studies. In the phantom study, the T1 and T2* values estimated using mPSIR were linearly correlated with those of conventional methods. In the in vivo study, mPSIR demonstrated higher T1 contrast between the IPH phantom and sternocleidomastoid muscle than the conventional method. Moreover, the T1 and T2* values of the blood vessel wall and sternocleidomastoid muscle estimated using mPSIR were correlated with values measured by conventional methods and with values reported previously. The mPSIR sequence improved T1 contrast while simultaneously providing accurate T1 and T2* values of the neck region. Although further study is required to evaluate the clinical utility, mPSIR may improve carotid atherosclerotic plaque detection and provide detailed information about plaque components.

  6. Remote estimation of crown size and tree density in snowy areas

    NASA Astrophysics Data System (ADS)

    Kishi, R.; Ito, A.; Kamada, K.; Fukita, T.; Lahrita, L.; Kawase, Y.; Murahashi, K.; Kawamata, H.; Naruse, N.; Takahashi, Y.

    2017-12-01

    Precise estimation of tree density in forests helps us understand the amount of carbon dioxide fixed by plants. Aerial photographs have been used to count trees; aircraft campaigns, however, are expensive (~$50,000 per campaign flight), and the area a drone can survey is limited. In addition, previous studies estimating tree density from aerial photographs were performed in summer, so overlapping leaves introduced a gap of about 15% in the estimates. Here, we propose a method to accurately estimate the number of forest trees from satellite images of snow-covered deciduous forest, using the ratio of branches to snow. The advantages of our method are as follows: 1) snow areas can be excluded easily owing to their high reflectance; 2) tree branches overlap little compared to leaves. Although our method can be used only in regions with snowfall, the snow-covered area of the world exceeds 12,800,000 km², so our proposal should play an important role in discussing global warming. As a test area, we chose the forest near Mt. Amano in Iwate prefecture, Japan. First, we constructed a new index, (Band1-Band5)/(Band1+Band5), suitable for distinguishing between snow and tree trunks using the corresponding spectral reflectance data. Next, the index values were listed while changing the ratio in 1% increments. From the satellite image analysis at four points, the ratio of snow to tree trunk was I: 61%, II: 65%, III: 66% and IV: 65%. To check the estimation, we used aerial photographs from Google Earth; the rates were I: 42.05%, II: 48.89%, III: 50.64% and IV: 49.05%, respectively. The two sets of values are correlated but differ; we discuss this point in detail, focusing on the effect of shadows.

  7. Partial volume correction of brain perfusion estimates using the inherent signal data of time-resolved arterial spin labeling.

    PubMed

    Ahlgren, André; Wirestam, Ronnie; Petersen, Esben Thade; Ståhlberg, Freddy; Knutsson, Linda

    2014-09-01

    Quantitative perfusion MRI based on arterial spin labeling (ASL) is hampered by partial volume effects (PVEs), arising due to voxel signal cross-contamination between different compartments. To address this issue, several partial volume correction (PVC) methods have been presented. Most previous methods rely on segmentation of a high-resolution T1 -weighted morphological image volume that is coregistered to the low-resolution ASL data, making the result sensitive to errors in the segmentation and coregistration. In this work, we present a methodology for partial volume estimation and correction, using only low-resolution ASL data acquired with the QUASAR sequence. The methodology consists of a T1 -based segmentation method, with no spatial priors, and a modified PVC method based on linear regression. The presented approach thus avoids prior assumptions about the spatial distribution of brain compartments, while also avoiding coregistration between different image volumes. Simulations based on a digital phantom as well as in vivo measurements in 10 volunteers were used to assess the performance of the proposed segmentation approach. The simulation results indicated that QUASAR data can be used for robust partial volume estimation, and this was confirmed by the in vivo experiments. The proposed PVC method yielded probable perfusion maps, comparable to a reference method based on segmentation of a high-resolution morphological scan. Corrected gray matter (GM) perfusion was 47% higher than uncorrected values, suggesting a significant amount of PVEs in the data. Whereas the reference method failed to completely eliminate the dependence of perfusion estimates on the volume fraction, the novel approach produced GM perfusion values independent of GM volume fraction. The intra-subject coefficient of variation of corrected perfusion values was lowest for the proposed PVC method. As shown in this work, low-resolution partial volume estimation in connection with ASL perfusion estimation is feasible, and provides a promising tool for decoupling perfusion and tissue volume. Copyright © 2014 John Wiley & Sons, Ltd.
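
    A minimal sketch of the linear-regression partial volume correction idea this record builds on: within a small kernel, the measured ASL signal is modeled as the partial-volume-weighted sum of pure gray matter and white matter perfusion and solved by least squares. Array names, kernel size and the toy data are illustrative assumptions:

        import numpy as np

        def pvc_linear_regression(asl, p_gm, p_wm, half=2):
            # For each voxel, solve  asl = p_gm * f_gm + p_wm * f_wm  by least
            # squares over a (2*half+1)^2 neighborhood; returns the GM map.
            f_gm = np.zeros_like(asl)
            ny, nx = asl.shape
            for i in range(ny):
                for j in range(nx):
                    s = (slice(max(i - half, 0), i + half + 1),
                         slice(max(j - half, 0), j + half + 1))
                    A = np.column_stack([p_gm[s].ravel(), p_wm[s].ravel()])
                    coef, *_ = np.linalg.lstsq(A, asl[s].ravel(), rcond=None)
                    f_gm[i, j] = coef[0]
            return f_gm

        p_gm = np.random.default_rng(0).uniform(0.0, 1.0, (8, 8))
        p_wm = 1.0 - p_gm
        asl = 60.0 * p_gm + 20.0 * p_wm          # toy data: GM perfusion = 60
        print(pvc_linear_regression(asl, p_gm, p_wm)[4, 4])   # ~60.0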

  8. Ozone Climate Penalty and Mortality in a Changing World

    NASA Astrophysics Data System (ADS)

    Hakami, A.; Zhao, S.; Pappin, A.; Mesbah, M.

    2013-12-01

    The expected increase in ozone concentrations with temperature is referred to as the climate penalty factor (CPF). Observed ozone trends have resulted in estimates of regional CPFs in the range of 1-3 ppb/K in the Eastern US, and larger values around the globe. We use the adjoint of a regional model (CMAQ) to attribute changes in ozone mortality and attainment metrics to increased temperature levels at each location in North America during the summer of 2007. Unlike previous forward sensitivity analysis studies, we estimate how changes in temperatures at various locations influence such policy-relevant metrics. Our analysis accounts for separate temperature impact pathways through gas-phase chemistry, moisture abundance, and biogenic emissions. We find that the water vapor impact, while mostly negative, is positive and large for temperature changes in urban areas. We also find that increased biogenic emissions play an important role in the overall temperature influence. Our simulations show a wide range of spatial variability in CPFs, between -0.4 and 6.2 ppb/K, with the largest values in urban areas. We also estimate mortality-based CPFs of up to 4 deaths/K for each grid cell, again with large localization in urban areas. This amounts to an estimated 370 deaths/K for the 3-month period of the simulation. We find that this number is almost equivalent to a 5% reduction in anthropogenic NOx emissions for each degree increase in temperature. We show how the CPF will change as the result of progressive NOx emission controls from various anthropogenic sectors and sources at different locations. Our findings suggest that urban NOx control can be regarded as an adaptation strategy with regard to ozone air quality. Also, the strong temperature dependence in urban environments suggests that the health and attainment burden of the urban heat island may be more substantial than previously thought. [Figures: spatial distribution of average adjoint-based CPFs; adjoint-based CPF and mortality CPF (domain-wide).]

  9. Comparison of amyloid plaque contrast generated by T2-, T2*-, and susceptibility-weighted imaging methods in transgenic mouse models of Alzheimer’s disease

    PubMed Central

    Chamberlain, Ryan; Reyes, Denise; Curran, Geoffrey L.; Marjanska, Malgorzata; Wengenack, Thomas M.; Poduslo, Joseph F.; Garwood, Michael; Jack, Clifford R.

    2009-01-01

    One of the hallmark pathologies of Alzheimer’s disease (AD) is amyloid plaque deposition. Plaques appear hypointense on T2- and T2*-weighted MR images probably due to the presence of endogenous iron, but no quantitative comparison of various imaging techniques has been reported. We estimated the T1, T2, T2*, and proton density values of cortical plaques and normal cortical tissue and analyzed the plaque contrast generated by a collection of T2-, T2*-, and susceptibility-weighted imaging (SWI) methods in ex vivo transgenic mouse specimens. The proton density and T1 values were similar for both cortical plaques and normal cortical tissue. The T2 and T2* values were similar in cortical plaques, which indicates that the iron content of cortical plaques may not be as large as previously thought. Ex vivo plaque contrast was increased compared to a previously reported spin echo sequence by summing multiple echoes and by performing SWI; however, gradient echo and susceptibility-weighted imaging were found to be impractical for in vivo imaging due to susceptibility interface-related signal loss in the cortex. PMID:19253386

  10. Multivariate models for prediction of rheological characteristics of filamentous fermentation broth from the size distribution.

    PubMed

    Petersen, Nanna; Stocks, Stuart; Gernaey, Krist V

    2008-05-01

    The main purpose of this article is to demonstrate that principal component analysis (PCA) and partial least squares regression (PLSR) can be used to extract information from particle size distribution data and predict rheological properties. Samples from commercially relevant Aspergillus oryzae fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield stress (τy), consistency index (K), and flow behavior index (n)) resulted in a large standard deviation of the parameter estimates. The flow behavior index was not found to be correlated with any of the other measured variables, and previous studies have suggested a constant value of the flow behavior index in filamentous fermentations. It was therefore chosen to fix this parameter to the average value, thereby decreasing the standard deviation of the estimates of the remaining rheological parameters significantly. Using a PLSR model, a reasonable prediction of apparent viscosity (μapp), yield stress (τy), and consistency index (K) could be made from the size distributions, biomass concentration, and process information. This provides a predictive method with a high predictive power for the rheology of fermentation broth, with the advantage over previous models that τy and K can be predicted as well as μapp. Validation on an independent test set yielded a root mean square error of 1.21 Pa for τy, 0.209 Pa s^n for K, and 0.0288 Pa s for μapp, corresponding to R² = 0.95, R² = 0.94, and R² = 0.95, respectively. Copyright 2007 Wiley Periodicals, Inc.

  11. Use of A-Train Aerosol Observations to Constrain Direct Aerosol Radiative Effects (DARE) Comparisons with Aerocom Models and Uncertainty Assessments

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Shinozuka, Y.; Kacenelenbogen, M.; Segal-Rozenhaimer, M.; LeBlanc, S.; Vaughan, M.; Stier, P.; Schutgens, N.

    2017-01-01

    We describe a technique for combining multiple A-Train aerosol data sets, namely MODIS spectral AOD (aerosol optical depth), OMI AAOD (absorption aerosol optical depth) and CALIOP aerosol backscatter retrievals (hereafter referred to as MOC retrievals), to estimate full spectral sets of aerosol radiative properties, and ultimately to calculate the 3-D distribution of direct aerosol radiative effects (DARE). We present MOC results using almost two years of data collected in 2007 and 2008, and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the MODIS Collection 6 AOD data derived with the dark target and deep blue algorithms has extended the coverage of the MOC retrievals towards higher latitudes. The MOC aerosol retrievals agree better with AERONET in terms of the single scattering albedo (ssa) at 441 nm than ssa calculated from OMI and MODIS data alone, indicating that CALIOP aerosol backscatter data contains information on aerosol absorption. We compare the spatio-temporal distribution of the MOC retrievals and MOC-based calculations of seasonal clear-sky DARE to values derived from four models that participated in the Phase II AeroCom model intercomparison initiative. Overall, the MOC-based calculations of clear-sky DARE at TOA over land are smaller (less negative) than previous model or observational estimates due to the inclusion of more absorbing aerosol retrievals over brighter surfaces, not previously available for observationally-based estimates of DARE. MOC-based DARE estimates at the surface over land and total (land and ocean) DARE estimates at TOA are in between previous model and observational results. Comparisons of seasonal aerosol properties to AeroCom Phase II results show generally good agreement; the best agreement with forcing results at TOA is found with GMI-MerraV3. We discuss sampling issues that affect the comparisons and the major challenges in extending our clear-sky DARE results to all-sky conditions. We present estimates of clear-sky and all-sky DARE and show uncertainties that stem from the assumptions in the spatial extrapolation and accuracy of aerosol and cloud properties, in the diurnal evolution of these properties, and in the radiative transfer calculations.

  12. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level). Whether the same observations apply on a lower spatial scale should be further investigated.
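
    The area-weighted aggregation step described above amounts to weighting each sampled unit's density by its area before averaging to the higher administrative level. A toy sketch (units and numbers invented):

        # Area-weighted aggregation of parish-level cattle densities to
        # districts. Structures and numbers are illustrative only.
        parishes = [
            {"district": "A", "area_km2": 40.0, "cattle_per_km2": 55.0},
            {"district": "A", "area_km2": 60.0, "cattle_per_km2": 30.0},
            {"district": "B", "area_km2": 80.0, "cattle_per_km2": 12.0},
        ]
        districts = {}
        for p in parishes:
            d = districts.setdefault(p["district"], {"area": 0.0, "cattle": 0.0})
            d["area"] += p["area_km2"]
            d["cattle"] += p["area_km2"] * p["cattle_per_km2"]
        for name, d in districts.items():
            print(name, d["cattle"] / d["area"], "cattle/km2")   # A: 40.0, B: 12.0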

  13. Possibility of Predicting Serotonin Transporter Occupancy From the In Vitro Inhibition Constant for Serotonin Transporter, the Clinically Relevant Plasma Concentration of Unbound Drugs, and Their Profiles for Substrates of Transporters.

    PubMed

    Yahata, Masahiro; Chiba, Koji; Watanabe, Takao; Sugiyama, Yuichi

    2017-09-01

    Accurate prediction of target occupancy facilitates central nervous system drug development. In this review, we discuss the predictability of serotonin transporter (SERT) occupancy in human brain estimated from in vitro Ki values for human SERT and plasma concentrations of unbound drug (Cu,plasma), as well as the impact of drug transporters in the blood-brain barrier. First, the geometric means of in vitro Ki values were compared with the means of in vivo Ki values (Ki,u,plasma), which were calculated as Cu,plasma values at 50% occupancy of SERT obtained from previous clinical positron emission tomography/single photon emission computed tomography imaging studies for 6 selective serotonin reuptake inhibitors and 3 serotonin norepinephrine reuptake inhibitors. The in vitro Ki values for 7 drugs were comparable to their in vivo Ki,u,plasma values within a 3-fold difference. SERT occupancy was overestimated for 5 drugs (P-glycoprotein substrates) and underestimated for 2 drugs (presumably uptake transporter substrates, although no evidence exists as yet). In conclusion, prediction of human SERT occupancy from in vitro Ki values and Cu,plasma was successful for drugs that are not transporter substrates, and will become possible in future even for transporter substrates once transporter activities can be accurately estimated from in vitro experiments. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  14. Fetal heart rate intermittency

    NASA Astrophysics Data System (ADS)

    Yum, Myung-Kul; Kim, Jong-Hwa; Kim, Kyungsik

    2003-03-01

    We noticed that the fetal heart rates (FHR) of immature fetuses intermittently show unstable falls below the baseline FHR that do not occur in mature fetuses. We aim to investigate the nature and maturational changes of intermittency of the FHR in normal fetuses, and to present intermittency values of normal fetuses according to gestational week. FHR data of 450 normal fetuses between 23 and 40 weeks of gestation were studied. We performed multifractal analysis and calculated an intermittency measure (C1). The C1 values exhibited a strong negative linear correlation (P=0.0001) with gestational week. At 27-28, 29-30, 33-34, and 37-38 gestational weeks, the C1 values were significantly lower than those of the previous two or four gestational weeks. The maturation of normal fetuses is related to a decreasing severity of the unstable falls in FHR, as measured by C1, the intermittency. The C1 values according to gestational week presented here can be used as credible reference values when estimating the degree of fetal maturity from FHR.

  15. Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks.

    PubMed

    Yang, Shuai; Yu, Juan; Hu, Cheng; Jiang, Haijun

    2018-08-01

    In this paper, without separating the complex-valued neural networks into two real-valued systems, the quasi-projective synchronization of fractional-order complex-valued neural networks is investigated. First, two new fractional-order inequalities are established by using the theory of complex functions, the Laplace transform and Mittag-Leffler functions, which generalize traditional inequalities with the first-order derivative in the real domain. Additionally, different from the hybrid control schemes given in previous work concerning projective synchronization, a simple and linear control strategy is designed in this paper, and several criteria are derived to ensure quasi-projective synchronization of the fractional-order complex-valued neural networks, based on the established fractional-order inequalities and the theory of complex functions. Moreover, the error bounds of quasi-projective synchronization are estimated. Especially, some conditions are also presented for the Mittag-Leffler synchronization of the addressed neural networks. Finally, some numerical examples with simulations are provided to show the effectiveness of the derived theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Thermochemistry of the gaseous fluorides of samarium, europium, and thulium

    NASA Astrophysics Data System (ADS)

    Kleinschmidt, P. D.; Lau, K. H.; Hildenbrand, D. L.

    1981-01-01

    The gaseous mono-, di-, and trifluorides of the lanthanide metals samarium, europium, and thulium were characterized thermochemically from high temperature equilibrium studies carried out by mass spectrometry. Reaction enthalpies and entropies were derived using second-law analysis throughout, and the results were used to evaluate the enthalpies of formation and bond dissociation energies (BDE) of the gaseous fluorides, and to obtain approximate values for the electronic entropies of the MF and MF2 species. The dissociation energies of the monofluorides D°0(SmF)=134 kcal/mole, D°0(EuF)=129 kcal/mole, and D°0(TmF)=121 kcal/mole, all ±2 kcal/mole, are in good agreement with values predicted by the Rittner electrostatic model, whereas values in the polyatomic fluorides show considerable variation and do not seem to follow any clear trends. Although the BDE values in some instances differ from previous estimates, their sums yield trifluoride heats of atomization that are in close accord with values derived from the vaporization thermodynamics of the solid trifluorides.

  17. Intracellular pH in rat isolated superior cervical ganglia in relation to nicotine-depolarization and nicotine-uptake

    PubMed Central

    Brown, D. A.; Halliwell, J. V.

    1972-01-01

    1. The intracellular pH (pHi) of rat isolated superior cervical ganglia incubated in normal Krebs solution (pHo=7·37) was estimated to be 7·33 from the uptake of a weak acid, 14C-5,5-dimethyloxazolidine-2,4-dione (DMO). Addition of 30 μM nicotine for 30 min reduced the DMO-estimated pHi by 0·15 units to 7·18. This effect was prevented by hexamethonium (2·5 mM) or by depolarizing the ganglion with K+ (124 mM). 2. 3H-Nicotine (30 μM) was concentrated within the ganglia to an intracellular/extracellular concentration ratio (Ci/Co) of 5·54 in normal Krebs solution and 4·61 in 2·5 mM hexamethonium. This would suggest an intracellular pH of 6·54 and 6·63 respectively. In ganglia previously depolarized by K+ the corresponding values for Ci/Co were 4·02 (minus hexamethonium, estimated pHi 6·95) and 4·17 (plus hexamethonium, estimated pHi 6·94). 3. A multicompartment cell interior comprising an acid cytoplasm (pH∼6·6) and more alkaline nucleus and mitochondria is proposed to explain the difference between the values of pHi estimated from the uptake of DMO and nicotine. It is suggested that the fall in pHi during nicotine-depolarization results from metabolic stimulation following Na+ entry. PMID:5048652

  18. Estimating snow depth of alpine snowpack via airborne multifrequency passive microwave radiance observations: Colorado, USA

    NASA Astrophysics Data System (ADS)

    Kim, R. S.; Durand, M. T.; Li, D.; Baldo, E.; Margulis, S. A.; Dumont, M.; Morin, S.

    2017-12-01

    This paper presents a newly proposed snow depth retrieval approach for deep mountain snow using airborne multifrequency passive microwave (PM) radiance observations. In contrast to previous snow depth estimations using satellite PM radiance assimilation, the proposed method uses observations from a single flight and deploys snow hydrologic models. This is promising because satellite-based retrieval methods have difficulty estimating snow depth owing to their coarse resolution and computational cost. The approach combines a particle filter, using combinations of multiple PM frequencies, with a multi-layer snow physical model (i.e., Crocus) to resolve melt-refreeze crusts. The method was applied over the NASA Cold Land Processes Experiment (CLPX) area in Colorado during 2002 and 2003. Results showed a significant improvement over the prior snow depth estimates and a capability to reduce the prior snow depth biases. When applying our snow depth retrieval algorithm with a combination of four PM frequencies (10.7, 18.7, 37.0, and 89.0 GHz), the RMSE values were reduced by 48% at the snow depth transect sites where forest density was less than 5%, despite deep snow conditions. The method was sensitive to different combinations of frequencies, model stratigraphy (i.e., different numbers of layers in the snow physical model), and estimation methods (particle filter versus Kalman filter). The prior RMSE values at forest-covered areas were reduced by 37-42% even in the presence of forest cover.
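
    For illustration, the heart of such a retrieval is the particle-filter update, which reweights a prior snow-depth ensemble by how well modeled brightness temperatures match the multifrequency PM observations. The following minimal sketch shows that step only; the Gaussian observation model, the prior ensemble, and the simulate_tb forward operator are hypothetical stand-ins, not the authors' code.

      import numpy as np

      def particle_filter_update(prior_depths, tb_obs, simulate_tb, obs_sigma=2.0):
          """Reweight prior snow-depth particles by brightness-temperature fit.

          prior_depths : (N,) prior snow-depth ensemble [m]
          tb_obs       : (F,) observed brightness temperatures [K]
          simulate_tb  : forward operator mapping a depth to (F,) modeled Tb
          obs_sigma    : assumed observation error [K]
          """
          # Gaussian log-likelihood of each particle given the multifrequency obs
          log_w = np.array([
              -0.5 * np.sum(((simulate_tb(d) - tb_obs) / obs_sigma) ** 2)
              for d in prior_depths
          ])
          w = np.exp(log_w - log_w.max())
          w /= w.sum()
          # Posterior mean plus a simple multinomial resampling step
          posterior_mean = np.sum(w * prior_depths)
          resampled = np.random.choice(prior_depths, size=prior_depths.size, p=w)
          return posterior_mean, resampled

      # Toy usage with a linear stand-in forward operator (hypothetical)
      prior = np.random.uniform(0.5, 3.0, size=500)            # depths [m]
      toy_tb = lambda d: np.array([260.0, 250.0]) - 15.0 * d   # Tb model [K]
      mean_depth, _ = particle_filter_update(prior, toy_tb(1.8), toy_tb)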

  19. Analyses and estimates of hydraulic conductivity from slug tests in alluvial aquifer underlying Air Force Plant 4 and Naval Air Station-Joint Reserve Base Carswell Field, Fort Worth, Texas

    USGS Publications Warehouse

    Houston, Natalie A.; Braun, Christopher L.

    2004-01-01

    This report describes the collection, analyses, and distribution of hydraulic-conductivity data obtained from slug tests completed in the alluvial aquifer underlying Air Force Plant 4 and Naval Air Station-Joint Reserve Base Carswell Field, Fort Worth, Texas, during October 2002 and August 2003 and summarizes previously available hydraulic-conductivity data. The U.S. Geological Survey, in cooperation with the U.S. Air Force, completed 30 slug tests in October 2002 and August 2003 to obtain estimates of horizontal hydraulic conductivity to use as initial values in a ground-water-flow model for the site. The tests were done by placing a polyvinyl-chloride slug of known volume beneath the water level in selected wells, removing the slug, and measuring the resulting water-level recovery over time. The water levels were measured with a pressure transducer and recorded with a data logger. Hydraulic-conductivity values were estimated from an analytical relation between the instantaneous displacement of water in a well bore and the resulting rate of head change. Although nearly two-thirds of the tested wells recovered 90 percent of their slug-induced head change in less than 2 minutes, 90-percent recovery times ranged from 3 seconds to 35 minutes. The estimates of hydraulic conductivity range from 0.2 to 200 feet per day. Eighty-three percent of the estimates are between 1 and 100 feet per day.
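
    For context, slug-test analyses of this kind commonly fit the exponential decay of the normalized head displacement and convert the fitted basic time lag to hydraulic conductivity, for instance with the Hvorslev relation. The sketch below assumes that particular method and an illustrative well geometry; it is not necessarily the analytical relation used in the report.

      import numpy as np

      def hvorslev_k(t, h, r_c, L_e, R_w):
          """Hydraulic conductivity K from slug-test recovery (Hvorslev).

          t   : times since slug removal [d]
          h   : normalized head displacement H(t)/H0
          r_c : casing radius [ft]; L_e : screen length [ft]; R_w : well radius [ft]
          """
          # ln(h) decays linearly with time; slope = -1/T0 (basic time lag)
          slope, _ = np.polyfit(t, np.log(h), 1)
          T0 = -1.0 / slope
          # Hvorslev solution for L_e / R_w > 8
          return r_c ** 2 * np.log(L_e / R_w) / (2.0 * L_e * T0)

      # Synthetic recovery with T0 = 0.5 min, illustrative well geometry
      t = np.linspace(1e-4, 2e-3, 10)
      h = np.exp(-t / 3.47e-4)
      print(hvorslev_k(t, h, r_c=0.083, L_e=10.0, R_w=0.25))  # ft/d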

  20. Correction method for influence of tissue scattering for sidestream dark-field oximetry using multicolor LEDs

    NASA Astrophysics Data System (ADS)

    Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki

    2016-12-01

    We have previously proposed a method for estimating intravascular oxygen saturation (SO2) from images obtained by sidestream dark-field (SDF) imaging (which we call SDF oximetry) and investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for scattering by the tissue and performed experiments with turbid phantoms, as well as Monte Carlo simulation experiments, to investigate the influence of tissue scattering on SDF imaging. In the estimation method, we used modified extinction coefficients of hemoglobin, called average extinction coefficients (AECs), to correct for the influence of the bandwidth of the illumination sources, the imaging camera characteristics, and tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of the pixel-value profile along a line perpendicular to the direction of the blood vessel in an SDF image, and correct the AECs using this scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe that obtains three-band images by switching multicolor light-emitting diodes, and imaged turbid phantoms comprising agar powder, fat emulsion, and bovine-blood-filled glass tubes. We found that increased scattering by the phantom body decreased the AECs. The experimental results showed that using suitable AEC values led to more accurate SO2 estimation, and confirmed the validity of the proposed correction method for improving the accuracy of SO2 estimation.

  1. Estimating the returns to UK publicly funded cancer-related research in terms of the net value of improved health outcomes

    PubMed Central

    2014-01-01

    Background Building on an approach developed to assess the economic returns to cardiovascular research, we estimated the economic returns from UK public and charitable funded cancer-related research that arise from the net value of the improved health outcomes. Methods To assess these economic returns from cancer-related research in the UK we estimated: 1) public and charitable expenditure on cancer-related research in the UK from 1970 to 2009; 2) net monetary benefit (NMB), that is, the health benefit measured in quality adjusted life years (QALYs) valued in monetary terms (using a base-case value of a QALY of GB£25,000) minus the cost of delivering that benefit, for a prioritised list of interventions from 1991 to 2010; 3) the proportion of NMB attributable to UK research; 4) the elapsed time between research funding and health gain; and 5) the internal rate of return (IRR) from cancer-related research investments on health benefits. We analysed the uncertainties in the IRR estimate using sensitivity analyses to illustrate the effect of some key parameters. Results In 2011/12 prices, total expenditure on cancer-related research from 1970 to 2009 was £15 billion. The NMB of the 5.9 million QALYs gained from the prioritised interventions from 1991 to 2010 was £124 billion. Calculation of the IRR incorporated an estimated elapsed time of 15 years. We related 17% of the annual NMB estimated to be attributable to UK research (for each of the 20 years 1991 to 2010) to 20 years of research investment 15 years earlier (that is, for 1976 to 1995). This produced a best-estimate IRR of 10%, compared with 9% previously estimated for cardiovascular disease research. The sensitivity analysis demonstrated the importance of smoking reduction as a major source of improved cancer-related health outcomes. Conclusions We have demonstrated a substantive IRR from net health gain to public and charitable funding of cancer-related research in the UK, and further validated the approach that we originally used in assessing the returns from cardiovascular research. In doing so, we have highlighted a number of weaknesses and key assumptions that need strengthening in further investigations. Nevertheless, these cautious estimates demonstrate that the returns from past cancer research have been substantial, and justify the investments made during the period 1976 to 1995. PMID:24930803
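
    For illustration, the IRR is the discount rate at which the stream of research costs is balanced by the lagged stream of attributable NMB. A minimal sketch of that root-finding step follows, with purely illustrative cash-flow figures rather than the study's data:

      def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
          """Internal rate of return via bisection on the NPV sign change."""
          def npv(rate):
              return sum(cf / (1.0 + rate) ** i for i, cf in enumerate(cash_flows))
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if npv(mid) > 0:      # NPV falls as the rate rises
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      # Illustrative flows (GBP millions): 20 years of research outlays,
      # with attributable NMB arriving after a 15-year elapsed time.
      flows = [0.0] * 35
      for y in range(20):
          flows[y] -= 375.0         # annual spend (hypothetical figure)
      for y in range(15, 35):
          flows[y] += 1054.0        # annual lagged benefit (hypothetical figure)
      print(f"IRR = {irr(flows):.1%}")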

  2. Dosimetry of 64Cu-DOTA-AE105, a PET tracer for uPAR imaging.

    PubMed

    Persson, Morten; El Ali, Henrik H; Binderup, Tina; Pfeifer, Andreas; Madsen, Jacob; Rasmussen, Palle; Kjaer, Andreas

    2014-03-01

    (64)Cu-DOTA-AE105 is a novel positron emission tomography (PET) tracer specific to the human urokinase-type plasminogen activator receptor (uPAR). In preparation for using this tracer in humans, as a new promising method to distinguish between indolent and aggressive cancers, we performed PET studies in mice to evaluate the in vivo biodistribution and estimate human dosimetry of (64)Cu-DOTA-AE105. Five mice received an iv tail injection of (64)Cu-DOTA-AE105 and were PET/CT scanned 1, 4.5 and 22 h post injection. Volumes of interest (VOIs) were manually drawn on the following organs: heart, lung, liver, kidney, spleen, intestine, muscle, bone and bladder. The activity concentrations in these organs [%ID/g] were used for the dosimetry calculation. The %ID/g of each organ at 1, 4.5 and 22 h was scaled to human values based on the differences between organ and body weights. The scaled values were then exported to the OLINDA software for computation of the human absorbed doses. The residence times as well as the effective dose equivalents for males and females were obtained for each organ. To validate this approach of projecting human dosimetry from mouse data, five mice received an iv tail injection of another (64)Cu-DOTA peptide-based tracer, (64)Cu-DOTA-TATE, and underwent the same procedure as described above. The human dosimetry estimates were then compared with the observed human dosimetry estimates recently reported in a first-in-man study using (64)Cu-DOTA-TATE. Human estimates of (64)Cu-DOTA-AE105 revealed the heart wall to receive the highest dose (0.0918 mSv/MBq) followed by the liver (0.0815 mSv/MBq); all other organs/tissues were estimated to receive doses in the range of 0.02-0.04 mSv/MBq. The mean effective whole-body dose of (64)Cu-DOTA-AE105 was estimated to be 0.0317 mSv/MBq. Relatively good correlation between predicted and observed human dosimetry estimates for (64)Cu-DOTA-TATE was found. Importantly, the effective whole-body dose was predicted with very high precision (predicted: 0.0252 mSv/MBq; observed: 0.0315 mSv/MBq), thus validating our approach to human dosimetry estimation. Favorable dosimetry estimates together with previously reported uPAR PET data fully support human testing of (64)Cu-DOTA-AE105.

  3. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900, through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer-month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
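
    For illustration, each published equation takes the standard logistic form, mapping winter flows to the probability that summer flow falls below a drought threshold. A minimal sketch of fitting one such equation, with hypothetical data and a large-C setting to approximate unregularized maximum likelihood:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical data: mean winter flow (cfs) per year, and a 0/1 flag for
      # whether the following summer's flow fell below the drought threshold.
      winter_flow = np.array([120, 85, 240, 60, 310, 95, 150, 45, 210, 130.0])
      summer_drought = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 0])

      # Large C approximates unregularized maximum likelihood estimation
      model = LogisticRegression(C=1e6).fit(
          np.log(winter_flow).reshape(-1, 1), summer_drought)

      # Probability of a summer drought given a winter mean flow of 100 cfs
      p = model.predict_proba(np.log([[100.0]]))[0, 1]
      print(f"P(summer drought) = {p:.2f}")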

  4. Estimating Precipitation Susceptibility in Warm Marine Clouds Using Multi-sensor Aerosol and Cloud Products from A-Train Satellites

    NASA Astrophysics Data System (ADS)

    Bai, H.; Gong, C.; Wang, M.; Zhang, Z.

    2017-12-01

    Precipitation susceptibility to aerosol perturbation plays a key role in understanding aerosol-cloud interactions and constraining aerosol indirect effects. However, large discrepancies exist among previous satellite estimates of precipitation susceptibility. In this paper, multi-sensor aerosol and cloud products, including those from CALIPSO, CloudSat, MODIS, and AMSR-E from June 2006 to April 2011, are analyzed to estimate precipitation susceptibility (including precipitation frequency susceptibility SPOP, precipitation intensity susceptibility SI, and precipitation rate susceptibility SR) in warm marine clouds. Our results show that SPOP is relatively robust across independent LWP products and diverse rain products. In contrast, the behavior of SI is more sensitive to the choice of LWP or rain products. Our results further show that SPOP depends strongly on atmospheric stability, with larger values under more stable environments. Precipitation susceptibility calculated with respect to cloud droplet number concentration (CDNC) is generally much larger than that estimated with respect to aerosol index (AI), which results from the weak dependence of CDNC on AI.

  5. Comprehensive analysis of proton range uncertainties related to patient stopping-power-ratio estimation using the stoichiometric calibration

    PubMed Central

    Yang, M; Zhu, X R; Park, PC; Titt, Uwe; Mohan, R; Virshup, G; Clayton, J; Dong, L

    2012-01-01

    The purpose of this study was to analyze factors affecting proton stopping-power-ratio (SPR) estimations and range uncertainties in proton therapy planning using the standard stoichiometric calibration. The SPR uncertainties were grouped into five categories according to their origins and then estimated based on previously published reports or measurements. For the first time, the impact of tissue composition variations on SPR estimation was assessed and the uncertainty estimates of each category were determined for low-density (lung), soft, and high-density (bone) tissues. A composite, 95th percentile water-equivalent-thickness uncertainty was calculated from multiple beam directions in 15 patients with various types of cancer undergoing proton therapy. The SPR uncertainties (1σ) were quite different (ranging from 1.6% to 5.0%) in different tissue groups, although the final combined uncertainty (95th percentile) for different treatment sites was fairly consistent at 3.0–3.4%, primarily because soft tissue is the dominant tissue type in the human body. The dominant contributing factor for uncertainties in soft tissues was the degeneracy of Hounsfield Numbers in the presence of tissue composition variations. To reduce the overall uncertainties in SPR estimation, the use of dual-energy computed tomography is suggested. The values recommended in this study based on typical treatment sites and a small group of patients roughly agree with the commonly referenced value (3.5%) used for margin design. By using tissue-specific range uncertainties, one could estimate the beam-specific range margin by accounting for different types and amounts of tissues along a beam, which may allow for customization of range uncertainty for each beam direction. PMID:22678123

  6. Radiation-force-based estimation of acoustic attenuation using harmonic motion imaging (HMI) in phantoms and in vitro livers before and after HIFU ablation

    NASA Astrophysics Data System (ADS)

    Chen, Jiangang; Hou, Gary Y.; Marquet, Fabrice; Han, Yang; Camarena, Francisco; Konofagou, Elisa

    2015-10-01

    Acoustic attenuation represents the energy loss of the propagating wave through biological tissues and plays a significant role in both therapeutic and diagnostic ultrasound applications. Estimation of acoustic attenuation remains challenging but critical for tissue characterization. In this study, an attenuation estimation approach was developed using the radiation-force-based method of harmonic motion imaging (HMI). 2D tissue displacement maps were acquired by moving the transducer in a raster-scan format. A linear regression model was applied on the logarithm of the HMI displacements at different depths in order to estimate the acoustic attenuation. Commercially available phantoms with known attenuations (n = 5) and in vitro canine livers (n = 3) were tested, as well as HIFU lesions in in vitro canine livers (n = 5). Results demonstrated that attenuations obtained from the phantoms showed a good correlation (R² = 0.976) with the independently obtained values reported by the manufacturer with an estimation error (compared to the values independently measured) varying within the range of 15-35%. The estimated attenuation in the in vitro canine livers was equal to 0.32 ± 0.03 dB cm⁻¹ MHz⁻¹, which is in good agreement with the existing literature. The attenuation in HIFU lesions was found to be higher (0.58 ± 0.06 dB cm⁻¹ MHz⁻¹) than that in normal tissues, also in agreement with the results from previous publications. Future potential applications of the proposed method include estimation of attenuation in pathological tissues before and after thermal ablation.
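
    For illustration, the attenuation estimate reduces to a linear regression of log-displacement against depth, with the slope converted from nepers to dB and normalized by frequency. A minimal sketch under those assumptions (the amplitude dB convention and all numbers are illustrative, not the authors' processing code):

      import numpy as np

      def estimate_attenuation(depth_cm, hmi_disp, freq_mhz):
          """Fit log(displacement) vs depth; return attenuation in dB/cm/MHz."""
          slope, _ = np.polyfit(depth_cm, np.log(hmi_disp), 1)
          alpha_np = -slope                        # nepers per cm
          alpha_db = alpha_np * 20.0 / np.log(10)  # 1 Np = 8.686 dB (amplitude)
          return alpha_db / freq_mhz

      # Synthetic check: a 0.35 dB/cm/MHz medium probed at 4.5 MHz
      z = np.linspace(1.0, 5.0, 20)
      disp = np.exp(-0.35 * 4.5 * np.log(10) / 20.0 * z)
      print(estimate_attenuation(z, disp, 4.5))    # ~0.35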

  7. Estimations of One Repetition Maximum and Isometric Peak Torque in Knee Extension Based on the Relationship Between Force and Velocity.

    PubMed

    Sugiura, Yoshito; Hatanaka, Yasuhiko; Arai, Tomoaki; Sakurai, Hiroaki; Kanada, Yoshikiyo

    2016-04-01

    We aimed to investigate whether a linear regression formula based on the relationship between joint torque and angular velocity measured using a high-speed video camera and image measurement software is effective for estimating 1 repetition maximum (1RM) and isometric peak torque in knee extension. Subjects comprised 20 healthy men (mean ± SD; age, 27.4 ± 4.9 years; height, 170.3 ± 4.4 cm; and body weight, 66.1 ± 10.9 kg). The exercise load ranged from 40% to 150% 1RM. Peak angular velocity (PAV) and peak torque were used to estimate 1RM and isometric peak torque. To elucidate the relationship between force and velocity in knee extension, the relationship between the relative proportion of 1RM (% 1RM) and PAV was examined using simple regression analysis. The concordance rate between the estimated value and actual measurement of 1RM and isometric peak torque was examined using intraclass correlation coefficients (ICCs). Reliability of the regression line of PAV and % 1RM was 0.95. The concordance rate between the actual measurement and estimated value of 1RM resulted in an ICC(2,1) of 0.93 and that of isometric peak torque had an ICC(2,1) of 0.87 and 0.86 for 6 and 3 levels of load, respectively. Our method for estimating 1RM was effective for decreasing the measurement time and reducing patients' burden. Additionally, isometric peak torque can be estimated using 3 levels of load, as we obtained the same results as those reported previously. We plan to expand the range of subjects and examine the generalizability of our results.
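
    For context, estimates of this kind can be sketched by fitting the load-velocity line from submaximal trials and extrapolating it to an assumed velocity at 1RM. The snippet below illustrates that idea with a hypothetical velocity threshold; it is a stand-in for, not a reproduction of, the authors' %1RM-PAV regression:

      import numpy as np

      def estimate_1rm(loads_kg, peak_velocities, v_at_1rm=1.0):
          """Extrapolate the load-velocity line to an assumed 1RM velocity.

          v_at_1rm is a hypothetical minimum-velocity threshold, not a value
          from the study.
          """
          a, b = np.polyfit(peak_velocities, loads_kg, 1)  # load falls ~linearly
          return a * v_at_1rm + b

      # Three submaximal trials (illustrative loads and peak angular velocities)
      print(estimate_1rm([20.0, 30.0, 40.0], [5.2, 3.9, 2.6]))  # ~52 kg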

  8. Radiation-force-based estimation of acoustic attenuation using harmonic motion imaging (HMI) in phantoms and in vitro livers before and after HIFU ablation.

    PubMed

    Chen, Jiangang; Hou, Gary Y; Marquet, Fabrice; Han, Yang; Camarena, Francisco; Konofagou, Elisa

    2015-10-07

    Acoustic attenuation represents the energy loss of the propagating wave through biological tissues and plays a significant role in both therapeutic and diagnostic ultrasound applications. Estimation of acoustic attenuation remains challenging but critical for tissue characterization. In this study, an attenuation estimation approach was developed using the radiation-force-based method of harmonic motion imaging (HMI). 2D tissue displacement maps were acquired by moving the transducer in a raster-scan format. A linear regression model was applied on the logarithm of the HMI displacements at different depths in order to estimate the acoustic attenuation. Commercially available phantoms with known attenuations (n = 5) and in vitro canine livers (n = 3) were tested, as well as HIFU lesions in in vitro canine livers (n = 5). Results demonstrated that attenuations obtained from the phantoms showed a good correlation (R² = 0.976) with the independently obtained values reported by the manufacturer with an estimation error (compared to the values independently measured) varying within the range of 15-35%. The estimated attenuation in the in vitro canine livers was equal to 0.32 ± 0.03 dB cm⁻¹ MHz⁻¹, which is in good agreement with the existing literature. The attenuation in HIFU lesions was found to be higher (0.58 ± 0.06 dB cm⁻¹ MHz⁻¹) than that in normal tissues, also in agreement with the results from previous publications. Future potential applications of the proposed method include estimation of attenuation in pathological tissues before and after thermal ablation.

  9. Research and development for a ground-based hydrogen-maser system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The results of a joint experiment aimed primarily at the determination of the frequency of the hydrogen (H1) hyperfine transition are reported. The transition frequency is determined relative to the Cs-133 hyperfine standard. The result is the mean of two independent evaluations against the cesium reference, which differ by 0.002 Hz. The one-sigma uncertainty of the value ν(H) is also estimated to be 0.002 Hz. One evaluation is based on wall-shift experiments at Harvard University; the other results from new wall-shift measurements using many storage bulbs of different sizes at the National Bureau of Standards. The experimental procedures and the applied corrections are described. Results for the wall shift and for the hydrogen frequency are compared with previously published values, and the error limits of the experiments are discussed.

  10. Pharmacokinetic design optimization in children and estimation of maturation parameters: example of cytochrome P450 3A4.

    PubMed

    Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel

    2011-02-01

    The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, in accordance with the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for k(a)), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. However, it requires that very early ages be included in studies, which may present an obstacle to the use of this approach. First-pass effects, alternative elimination routes, and combined elimination pathways should also be investigated.
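
    For reference, the simulations rest on the standard one-compartment model with first-order absorption; a minimal sketch of its concentration-time curve follows, with illustrative parameter values rather than the published "true values":

      import numpy as np

      def conc_1cpt_oral(t, dose, F, ka, CL, V):
          """One-compartment model with first-order absorption and elimination.

          C(t) = F*dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), ke = CL/V
          """
          ke = CL / V
          return (F * dose * ka / (V * (ka - ke))
                  * (np.exp(-ke * t) - np.exp(-ka * t)))

      # Illustrative parameters: 100 mg dose, F = 1, ka = 1.5 /h,
      # CL = 20 L/h, V = 50 L (not the study's published "true values")
      t = np.linspace(0.0, 12.0, 49)
      c = conc_1cpt_oral(t, 100.0, 1.0, 1.5, 20.0, 50.0)
      print(f"Cmax ~ {c.max():.2f} mg/L at t = {t[c.argmax()]:.2f} h")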

  11. Actinometric measurement of j(O3-O(1D)) using a luminol detector

    NASA Technical Reports Server (NTRS)

    Bairai, Solomon T.; Stedman, Donald H.

    1992-01-01

    The photolysis frequency of ozone to singlet-D oxygen atoms has been measured by means of a chemical actinometer using a luminol-based detector. The instrument measures j(O3-O(1D)) with a precision of 10 percent. The data collected in the winter and spring of 1991 are in agreement with model predictions and previously measured values. Data from a global solar radiometer can be used to estimate the effects of local cloudiness on j(O3-O(1D)).

  12. A Species-Level Phylogeny of Extant Snakes with Description of a New Colubrid Subfamily and Genus.

    PubMed

    Figueroa, Alex; McKelvy, Alexander D; Grismer, L Lee; Bell, Charles D; Lailvaux, Simon P

    2016-01-01

    With over 3,500 species encompassing a diverse range of morphologies and ecologies, snakes make up 36% of squamate diversity. Despite several attempts at estimating higher-level snake relationships and numerous assessments of generic- or species-level phylogenies, a large-scale species-level phylogeny solely focusing on snakes has not been completed. Here, we provide the largest-yet estimate of the snake tree of life using maximum likelihood on a supermatrix of 1745 taxa (1652 snake species + 7 outgroup taxa) and 9,523 base pairs from 10 loci (5 nuclear, 5 mitochondrial), including previously unsequenced genera (2) and species (61). Increased taxon sampling resulted in a phylogeny with a new higher-level topology and corroborated many lower-level relationships, strengthened by high nodal support values (> 85%) down to the species level (73.69% of nodes). Although the majority of families and subfamilies were strongly supported as monophyletic with > 88% support values, some families and numerous genera were paraphyletic, primarily due to limited taxon and loci sampling leading to a sparse supermatrix and minimal sequence overlap between some closely-related taxa. With all rogue taxa and incertae sedis species eliminated, higher-level relationships and support values remained relatively unchanged, except in five problematic clades. Our analyses resulted in new topologies at higher- and lower-levels; resolved several previous topological issues; established novel paraphyletic affiliations; designated a new subfamily, Ahaetuliinae, for the genera Ahaetulla, Chrysopelea, Dendrelaphis, and Dryophiops; and appointed Hemerophis (Coluber) zebrinus to a new genus, Mopanveldophis. Although we provide insight into some distinguished problematic nodes, at the deeper phylogenetic scale, resolution of these nodes may require sampling of more slowly-evolving nuclear genes.

  13. A Species-Level Phylogeny of Extant Snakes with Description of a New Colubrid Subfamily and Genus

    PubMed Central

    McKelvy, Alexander D.; Grismer, L. Lee; Bell, Charles D.; Lailvaux, Simon P.

    2016-01-01

    Background With over 3,500 species encompassing a diverse range of morphologies and ecologies, snakes make up 36% of squamate diversity. Despite several attempts at estimating higher-level snake relationships and numerous assessments of generic- or species-level phylogenies, a large-scale species-level phylogeny solely focusing on snakes has not been completed. Here, we provide the largest-yet estimate of the snake tree of life using maximum likelihood on a supermatrix of 1745 taxa (1652 snake species + 7 outgroup taxa) and 9,523 base pairs from 10 loci (5 nuclear, 5 mitochondrial), including previously unsequenced genera (2) and species (61). Results Increased taxon sampling resulted in a phylogeny with a new higher-level topology and corroborated many lower-level relationships, strengthened by high nodal support values (> 85%) down to the species level (73.69% of nodes). Although the majority of families and subfamilies were strongly supported as monophyletic with > 88% support values, some families and numerous genera were paraphyletic, primarily due to limited taxon and loci sampling leading to a sparse supermatrix and minimal sequence overlap between some closely-related taxa. With all rogue taxa and incertae sedis species eliminated, higher-level relationships and support values remained relatively unchanged, except in five problematic clades. Conclusion Our analyses resulted in new topologies at higher- and lower-levels; resolved several previous topological issues; established novel paraphyletic affiliations; designated a new subfamily, Ahaetuliinae, for the genera Ahaetulla, Chrysopelea, Dendrelaphis, and Dryophiops; and appointed Hemerophis (Coluber) zebrinus to a new genus, Mopanveldophis. Although we provide insight into some distinguished problematic nodes, at the deeper phylogenetic scale, resolution of these nodes may require sampling of more slowly-evolving nuclear genes. PMID:27603205

  14. Using remote sensing and GIS techniques to estimate discharge and recharge fluxes for the Death Valley regional groundwater flow system, USA

    USGS Publications Warehouse

    D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.; ,

    1996-01-01

    The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.

  15. The Model Human Processor and the Older Adult: Parameter Estimation and Validation Within a Mobile Phone Task

    PubMed Central

    Jastrzembski, Tiffany S.; Charness, Neil

    2009-01-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048

  16. Fine-granularity inference and estimations to network traffic for SDN.

    PubMed

    Jiang, Dingde; Huo, Liuwei; Li, Ya

    2018-01-01

    An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end traffic matrix is a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix at fine time granularity from sampled traffic traces, which is a hard inverse problem. Unlike previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, cubic spline interpolation is used to obtain smooth reconstruction values. To attain accurate end-to-end network traffic at fine time granularity, we take a weighted geometric average of the two interpolation results. The simulation results show that our approaches are feasible and effective.
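
    A minimal sketch of the latter two stages, cubic-spline reconstruction and the weighted geometric average, is given below using SciPy; the fractal-interpolation stage is replaced by a hypothetical second estimate, and the weight is illustrative:

      import numpy as np
      from scipy.interpolate import CubicSpline

      # Coarse samples of one origin-destination flow (illustrative values)
      t_coarse = np.arange(0, 60, 10)                            # times [s]
      x_coarse = np.array([12.0, 15.5, 11.2, 18.3, 16.1, 13.7])  # [Mb/s]
      t_fine = np.arange(0, 51)                                  # 1-s grid

      # Smooth cubic-spline reconstruction of the fine-grained series
      spline_est = CubicSpline(t_coarse, x_coarse)(t_fine)

      # Stand-in for the fractal-interpolation estimate (hypothetical)
      fractal_est = np.interp(t_fine, t_coarse, x_coarse)

      # Weighted geometric average of the two reconstructions
      w = 0.6                                                    # illustrative
      combined = spline_est ** w * fractal_est ** (1.0 - w)
      print(combined[:5])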

  17. The Model Human Processor and the older adult: parameter estimation and validation within a mobile phone task.

    PubMed

    Jastrzembski, Tiffany S; Charness, Neil

    2007-12-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies.

  18. The contribution of glacier melt to streamflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaner, Neil; Voisin, Nathalie; Nijssen, Bart

    2012-09-13

    Ongoing and projected future changes in glacier extent and water storage globally have led to concerns about the implications for water supplies. However, the current magnitude of glacier contributions to river runoff is not well known, nor is the population at risk from future glacier changes. We estimate an upper bound on the glacier melt contribution to seasonal streamflow by computing the energy balance of glaciers globally. Melt water quantities are computed as a fraction of total streamflow simulated using a hydrology model, and the melt fraction is tracked down the stream network. In general, our estimates of the glacier melt contribution to streamflow are lower than previously published values. Nonetheless, we find that globally an estimated 225 (36) million people live in river basins where maximum seasonal glacier melt contributes at least 10% (25%) of streamflow, mostly in the High Asia region.

  19. Terrestrial reference frame solution with the Vienna VLBI Software VieVS and implication of tropospheric gradient estimation

    NASA Astrophysics Data System (ADS)

    Spicakova, H.; Plank, L.; Nilsson, T.; Böhm, J.; Schuh, H.

    2011-07-01

    The Vienna VLBI Software (VieVS) has been developed at the Institute of Geodesy and Geophysics at TU Vienna since 2008. In this presentation, we present the module Vie_glob, the part of VieVS that allows parameter estimation from multiple VLBI sessions in a so-called global solution. We focus on the determination of the terrestrial reference frame (TRF) using all suitable VLBI sessions since 1984. We compare different analysis options, such as the choice of loading corrections or of the model for the tropospheric delays, and show the effect on station heights when atmospheric loading corrections are neglected at the observation level. Time series of station positions (using a previously determined TRF as a priori values) are presented and compared to other estimates of site positions from individual IVS (International VLBI Service for Geodesy and Astrometry) Analysis Centers.

  20. Fine-granularity inference and estimations to network traffic for SDN

    PubMed Central

    Huo, Liuwei; Li, Ya

    2018-01-01

    An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end traffic matrix is a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix at fine time granularity from sampled traffic traces, which is a hard inverse problem. Unlike previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, cubic spline interpolation is used to obtain smooth reconstruction values. To attain accurate end-to-end network traffic at fine time granularity, we take a weighted geometric average of the two interpolation results. The simulation results show that our approaches are feasible and effective. PMID:29718913

  1. Consumption and diffusion of dissolved oxygen in sedimentary rocks.

    PubMed

    Manaka, M; Takeda, M

    2016-10-01

    Fe(II)-bearing minerals (e.g., biotite, chlorite, and pyrite) are a promising reducing agent for the consumption of atmospheric oxygen in repositories for the geological disposal of high-level radioactive waste. To estimate effective diffusion coefficients (De, in m² s⁻¹) for dissolved oxygen (DO) and the reaction rates for the oxidation of Fe(II)-bearing minerals in a repository environment, we conducted diffusion-chemical reaction experiments using intact rock samples of Mizunami sedimentary rock. In addition, we conducted batch experiments on the oxidation of crushed sedimentary rock by DO in a closed system. From the results of the diffusion-chemical reaction experiments, we estimated the values of De for DO to lie within the range 2.69×10⁻¹¹

  2. Development of a preference-based index from the National Eye Institute Visual Function Questionnaire-25.

    PubMed

    Rentz, Anne M; Kowalski, Jonathan W; Walt, John G; Hays, Ron D; Brazier, John E; Yu, Ren; Lee, Paul; Bressler, Neil; Revicki, Dennis A

    2014-03-01

    Understanding how individuals value health states is central to patient-centered care and to health policy decision making. Generic preference-based measures of health may not effectively capture the impact of ocular diseases. Recently, 6 items from the National Eye Institute Visual Function Questionnaire-25 were used to develop the Visual Function Questionnaire-Utility Index health state classification, which defines visual function health states. To describe elicitation of preferences for health states generated from the Visual Function Questionnaire-Utility Index health state classification and development of an algorithm to estimate health preference scores for any health state. Nonintervention, cross-sectional study of the general community in 4 countries (Australia, Canada, United Kingdom, and United States). A total of 607 adult participants were recruited from local newspaper advertisements. In the United Kingdom, an existing database of participants from previous studies was used for recruitment. Eight of 15,625 possible health states from the Visual Function Questionnaire-Utility Index were valued using time trade-off technique. A θ severity score was calculated for Visual Function Questionnaire-Utility Index-defined health states using item response theory analysis. Regression models were then used to develop an algorithm to assign health state preference values for all potential health states defined by the Visual Function Questionnaire-Utility Index. Health state preference values for the 8 states ranged from a mean (SD) of 0.343 (0.395) to 0.956 (0.124). As expected, preference values declined with worsening visual function. Results indicate that the Visual Function Questionnaire-Utility Index describes states that participants view as spanning most of the continuum from full health to dead. Visual Function Questionnaire-Utility Index health state classification produces health preference scores that can be estimated in vision-related studies that include the National Eye Institute Visual Function Questionnaire-25. These preference scores may be of value for estimating utilities in economic and health policy analyses.

  3. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator that takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of the anterior and posterior eye segments as well as in skin imaging. The new estimator shows superior performance and clearer image contrast.

  4. Comparison between GSTAR and GSTAR-Kalman Filter models on inflation rate forecasting in East Java

    NASA Astrophysics Data System (ADS)

    Rahma Prillantika, Jessica; Apriliani, Erna; Wahyuningsih, Nuri

    2018-03-01

    Data that are correlated in both time and location, often called spatial data, are frequently encountered. The inflation rate is one such example, because it is related not only to events at previous times but also to conditions at other locations. In this research, we compare the GSTAR model and the GSTAR-Kalman Filter model to obtain predictions with a small error rate. The Kalman filter is an estimator that tracks state changes in the presence of white noise. The final results show that the Kalman filter improves the GSTAR forecasts, as shown by the simulation graphs and the smaller RMSE values.
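
    For illustration, a scalar Kalman-filter sketch of the correction idea follows: the GSTAR forecast serves as the state prediction and each observed inflation value updates it. The noise variances are assumed, not the paper's estimates.

      def kalman_correct(gstar_forecasts, observations, q=0.05, r=0.20):
          """Scalar Kalman filter over a GSTAR forecast series.

          q and r are assumed process- and measurement-noise variances.
          """
          x, p = gstar_forecasts[0], 1.0   # initial state estimate and variance
          corrected = []
          for f, z in zip(gstar_forecasts, observations):
              x, p = f, p + q              # predict: follow the GSTAR forecast
              k = p / (p + r)              # Kalman gain
              x = x + k * (z - x)          # update with the observed inflation
              p = (1.0 - k) * p
              corrected.append(x)
          return corrected

      print(kalman_correct([3.1, 3.4, 3.0], [3.3, 3.2, 3.1]))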

  5. Single point estimation of phenytoin dosing: a reappraisal.

    PubMed

    Koup, J R; Gibaldi, M; Godolphin, W

    1981-11-01

    A previously proposed method for estimating the phenytoin dosing requirement from a single serum sample obtained 24 hours after an intravenous loading dose (18 mg/kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/kg), simulations indicate that the proposed method will fail to consistently predict dosage requirements. Additional simulations indicate that two samples obtained during the 24-hour interval following the iv loading dose could be used to more reliably predict the phenytoin dose requirement. Because of the nonlinear relationship between the phenytoin dose administration rate (RO) and the mean steady-state serum concentration (CSS), small errors in the prediction of the required RO result in much larger errors in CSS.
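
    For context, phenytoin's nonlinearity follows Michaelis-Menten elimination, CSS = Km*RO/(Vmax - RO); because the denominator shrinks as RO approaches Vmax, a small error in the predicted RO is amplified in CSS. A short numerical illustration with typical literature-style parameter values (illustrative, not from the paper):

      def css(ro, vmax=500.0, km=4.0):
          """Steady-state concentration [mg/L] for dose rate ro [mg/day].

          Michaelis-Menten elimination: CSS = Km * RO / (Vmax - RO).
          vmax and km are illustrative adult values, not the paper's.
          """
          return km * ro / (vmax - ro)

      print(css(400.0))   # 16.0 mg/L
      print(css(420.0))   # 21.0 mg/L: a 5% dose error gives ~31% higher CSS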

  6. Robust Flutter Margin Analysis that Incorporates Flight Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Martin J.

    1998-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  7. An estimate of the number of tropical tree species

    PubMed Central

    Slik, J. W. Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F.; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L.; Bellingham, Peter J.; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q.; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L. M.; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K.; Chazdon, Robin L.; Clark, Connie; Clark, David B.; Clark, Deborah A.; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S.; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J.; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A. O.; Eisenlohr, Pedro V.; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J.; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T.; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M.; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A.; Joly, Carlos A.; de Jong, Bernardus H. J.; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L.; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F.; Lawes, Michael J.; do Amaral, Ieda Leao; Letcher, Susan G.; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H.; Meilby, Henrik; Melo, Felipe P. L.; Metcalfe, Daniel J.; Medjibe, Vincent P.; Metzger, Jean Paul; Millet, Jerome; Mohandass, D.; Montero, Juan C.; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T. F.; Pitman, Nigel C. A.; Poorter, Lourens; Poulsen, Axel D.; Poulsen, John; Powers, Jennifer; Prasad, Rama C.; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A.; Santos, Fernanda; Sarker, Swapan K.; Satdichanh, Manichanh; Schmitt, Christine B.; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S.; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I.-Fang; Sunderland, Terry; Suresh, H. S.; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W.; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L. C. H.; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A.; Webb, Campbell O.; Whitfeld, Timothy; Wich, Serge A.; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C. Yves; Yap, Sandra L.; Yoneda, Tsuyoshi; Zahawi, Rakan A.; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L.; Garcia Luize, Bruno; Venticinque, Eduardo M.

    2015-01-01

    The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher’s alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼40,000 and ∼53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼19,000–25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼4,500–6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa. PMID:26034279
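
    For reference, Fisher's log-series links species and individuals through S = alpha*ln(1 + N/alpha); fitting alpha to the inventory and applying it to a pantropical stem total gives the extrapolated richness. A sketch of both steps, using the inventory totals quoted above and an assumed stem total:

      import numpy as np
      from scipy.optimize import brentq

      def fisher_alpha(S, N):
          """Solve Fisher's log-series S = alpha * ln(1 + N/alpha) for alpha."""
          return brentq(lambda a: a * np.log1p(N / a) - S, 1e-3, 1e6)

      # Fit alpha on the inventory totals quoted above
      alpha = fisher_alpha(S=11371, N=657630)

      # Extrapolate to an assumed pantropical stem total (illustrative value)
      S_est = alpha * np.log1p(3.0e11 / alpha)
      print(f"alpha ~ {alpha:.0f}, extrapolated S ~ {S_est:.0f}")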

  8. Multiunit Activity-Based Real-Time Limb-State Estimation from Dorsal Root Ganglion Recordings

    PubMed Central

    Han, Sungmin; Chu, Jun-Uk; Kim, Hyungmin; Park, Jong Woong; Youn, Inchan

    2017-01-01

    Proprioceptive afferent activities could be useful for providing sensory feedback signals for closed-loop control during functional electrical stimulation (FES). However, most previous studies have used the single-unit activity of individual neurons to extract sensory information from proprioceptive afferents. This study proposes a new decoding method to estimate ankle and knee joint angles using multiunit activity data. Proprioceptive afferent signals were recorded from a dorsal root ganglion with a single-shank microelectrode during passive movements of the ankle and knee joints, and joint angles were measured as kinematic data. The mean absolute value (MAV) was extracted from the multiunit activity data, and a dynamically driven recurrent neural network (DDRNN) was used to estimate ankle and knee joint angles. The multiunit activity-based MAV feature was sufficiently informative to estimate limb states, and the DDRNN showed a better decoding performance than conventional linear estimators. In addition, processing time delay satisfied real-time constraints. These results demonstrated that the proposed method could be applicable for providing real-time sensory feedback signals in closed-loop FES systems. PMID:28276474
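
    For illustration, the MAV feature is simply the windowed mean of the rectified multiunit signal; a minimal sketch of extracting it for such a decoder, with an assumed window length:

      import numpy as np

      def mav_features(signal, fs, win_ms=100.0):
          """Mean absolute value of a multiunit recording in successive windows."""
          n = int(fs * win_ms / 1000.0)      # samples per window (assumed 100 ms)
          n_win = len(signal) // n
          windows = np.abs(signal[:n_win * n]).reshape(n_win, n)
          return windows.mean(axis=1)        # one MAV value per window

      # Illustrative: 1 s of synthetic wideband activity sampled at 24 kHz
      x = np.random.randn(24000)
      print(mav_features(x, fs=24000).shape)  # (10,)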

  9. ISC-GEM: Global Instrumental Earthquake Catalogue (1900-2009), III. Re-computed MS and mb, proxy MW, final magnitude composition and completeness assessment

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Bondár, István; Storchak, Dmitry A.; Engdahl, E. Robert; Bormann, Peter; Harris, James

    2015-02-01

    This paper outlines the re-computation and compilation of the magnitudes now contained in the final ISC-GEM Reference Global Instrumental Earthquake Catalogue (1900-2009). The catalogue is available via the ISC website (http://www.isc.ac.uk/iscgem/). The available re-computed MS and mb provided an ideal basis for deriving new conversion relationships to moment magnitude MW. Therefore, rather than using previously published regression models, we derived new empirical relationships using both generalized orthogonal linear and exponential non-linear models to obtain MW proxies from MS and mb. The new models were tested against true values of MW, and the newly derived exponential models were then preferred to the linear ones in computing MW proxies. For the final magnitude composition of the ISC-GEM catalogue, we preferred directly measured MW values as published by the Global CMT project for the period 1976-2009 (plus intermediate-depth earthquakes between 1962 and 1975). In addition, over 1000 publications have been examined to obtain direct seismic moment M0 and, therefore, also MW estimates for 967 large earthquakes during 1900-1978 (Lee and Engdahl, 2015) by various alternative methods to the current GCMT procedure. In all other instances we computed MW proxy values by converting our re-computed MS and mb values into MW, using the newly derived non-linear regression models. The final magnitude composition is an improvement in terms of magnitude homogeneity compared to previous catalogues. The magnitude completeness is not homogeneous over the 110 years covered by the ISC-GEM catalogue. Therefore, seismicity rate estimates may be strongly affected without a careful time window selection. In particular, the ISC-GEM catalogue appears to be complete down to MW 5.6 starting from 1964, whereas for the early instrumental period the completeness varies from ∼7.5 to 6.2. Further time and resources would be necessary to homogenize the magnitude of completeness over the entire catalogue length.

  10. Assessing temporally and spatially resolved PM2.5 exposures for epidemiological studies using satellite aerosol optical depth measurements

    NASA Astrophysics Data System (ADS)

    Kloog, Itai; Koutrakis, Petros; Coull, Brent A.; Lee, Hyung Joo; Schwartz, Joel

    2011-11-01

    Land use regression (LUR) models provide good estimates of spatially resolved long-term exposures, but are poor at capturing short-term exposures. Satellite-derived Aerosol Optical Depth (AOD) measurements have the potential to provide spatio-temporally resolved predictions of both long- and short-term exposures, but previous studies have generally shown relatively low predictive power. Our objective was to extend our previous work on day-specific calibrations of AOD data using ground PM2.5 measurements by incorporating commonly used LUR variables and meteorological variables, thus benefiting from both the spatial resolution of the LUR models and the spatio-temporal resolution of the satellite models. We then use spatial smoothing to predict PM2.5 concentrations for day/locations with missing AOD measures. We used mixed models with random slopes for day to calibrate AOD data for 2000-2008 across New England with monitored PM2.5 measurements. We then used a generalized additive mixed model with spatial smoothing to estimate PM2.5 in location-day pairs with missing AOD, using regional measured PM2.5, AOD values in neighboring cells, and land use. Finally, local (100 m) land use terms were used to model the difference between grid cell prediction and monitored value to capture very local traffic particles. Out-of-sample ten-fold cross-validation was used to quantify the accuracy of our predictions. For days with available AOD data we found high out-of-sample R2 (mean out-of-sample R2 = 0.830, year-to-year variation 0.725-0.904). For days without AOD values, our model performance was also excellent (mean out-of-sample R2 = 0.810, year-to-year variation 0.692-0.887). Importantly, these R2 values are for daily, rather than monthly or yearly, estimates. Our model allows one to assess short-term and long-term human exposures in order to investigate both the acute and chronic effects of ambient particles, respectively.
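
    A minimal sketch of the day-specific calibration step follows, assuming a random intercept and a random AOD slope for each day; the real model also includes land-use and meteorological covariates, and all numbers here are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
days = np.repeat(np.arange(50), 30)            # 50 days, 30 sites each
aod = rng.uniform(0.05, 0.6, days.size)
day_slope = rng.normal(25, 5, 50)[days]        # day-varying calibration
pm25 = 4 + day_slope * aod + rng.normal(0, 2, days.size)
df = pd.DataFrame({"day": days, "aod": aod, "pm25": pm25})

# Mixed model: fixed AOD effect plus a random intercept/slope per day
model = smf.mixedlm("pm25 ~ aod", df, groups=df["day"], re_formula="~aod")
fit = model.fit()
print(fit.params)
```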

  11. The influence of vapor pressure deficit (VPD) on the use of carbonyl sulfide (COS) as a photosynthetic tracer

    NASA Astrophysics Data System (ADS)

    Sun, W.; Maseyk, K. S.; Lett, C.; Seibt, U.

    2017-12-01

    Using carbonyl sulfide (COS) as a tracer to derive gross primary productivity (GPP) estimates requires knowledge of the relationship between leaf COS and CO2 uptake, which is typically embodied in a parameter called the leaf relative uptake (LRU) ratio, defined as the concentration-normalized COS:CO2 flux ratio. Previous laboratory and field studies have found light to be the key environmental driver of LRU, due to the differential light responses of COS and CO2 uptake imposed by stomatal regulation. But the influences on LRU from other environmental drivers, particularly vapor pressure deficit (VPD), which affects stomatal conductance, remain elusive. Here we show that VPD is an important determinant of the COS-CO2 uptake relationship in a water-stressed ecosystem. We measured leaf COS and CO2 fluxes from a coast live oak with automated leaf chambers in spring 2013 in a southern Californian woodland. In this semiarid ecosystem, both leaf COS and CO2 uptake responded to VPD and showed a midday depression caused by reduced stomatal conductance. Above a moderate light level (∼500 µmol m-2 s-1), COS uptake decreased with light, whereas CO2 uptake saturated. As a result of the VPD-limited COS uptake, the LRU value became smaller than 1.0 at high light (> 1000 µmol m-2 s-1), strongly deviating from previous laboratory values that converge to 1.6. Hence, failure to consider the VPD influence may result in overestimated LRU values and underestimated CO2 uptake in this ecosystem. Using a coupled photosynthesis-stomatal conductance model, we show that the VPD control on LRU is in accordance with the response of stomatal conductance to VPD. Our results highlight that incorporating the VPD effect into the prediction of LRU values is crucial to the implementation of COS-based photosynthesis estimates in semiarid ecosystems.
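
    Since LRU is defined as the concentration-normalized COS:CO2 flux ratio, the computation itself is a one-liner; the numbers below are illustrative, not measurements from the study.

```python
def leaf_relative_uptake(f_cos, f_co2, c_cos, c_co2):
    """LRU = (F_COS / C_COS) / (F_CO2 / C_CO2), with fluxes and ambient
    mole fractions in any mutually consistent units."""
    return (f_cos / c_cos) / (f_co2 / c_co2)

# Illustrative: COS flux 30 pmol m-2 s-1, CO2 flux 12 umol m-2 s-1,
# ambient COS 500 ppt, ambient CO2 400 ppm
lru = leaf_relative_uptake(30e-12, 12e-6, 500e-12, 400e-6)
print(round(lru, 2))  # 2.0; GPP estimates scale inversely with this value
```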

  12. How much will it cost to eradicate lymphatic filariasis? An analysis of the financial and economic costs of intensified efforts against lymphatic filariasis.

    PubMed

    Kastner, Randee J; Sicuri, Elisa; Stone, Christopher M; Matwale, Gabriel; Onapa, Ambrose; Tediosi, Fabrizio

    2017-09-01

    Lymphatic filariasis (LF), a neglected tropical disease (NTD) preventable through mass drug administration (MDA), is one of six diseases deemed possibly eradicable. Previously we developed one LF elimination scenario, which assumes MDA scale-up to continue in all countries that have previously undertaken MDA. In contrast, our three previously developed eradication scenarios assume all LF endemic countries will undertake MDA at an average (eradication I), fast (eradication II), or instantaneous (eradication III) rate of scale-up. In this analysis we use a micro-costing model to project the financial and economic costs of each of these scenarios in order to provide evidence to decision makers about the investment required to eliminate and eradicate LF. Costing was undertaken from a health system perspective, with all results expressed in 2012 US dollars (USD). A discount rate of 3% was applied to calculate the net present value of future costs. Prospective NTD budgets from LF endemic countries were reviewed to preliminarily determine activities and resources necessary to undertake a program to eliminate LF at a country level. In consultation with LF program experts, activities and resources were further reviewed and a refined list of activities and necessary resources, along with their associated quantities and costs, were determined and grouped into the following activities: advocacy and communication, capacity strengthening, coordination and strengthening partnerships, data management, ongoing surveillance, monitoring and supervision, drug delivery, and administration. The costs of mapping and undertaking transmission assessment surveys and the value of donated drugs and volunteer time were also accounted for. Using previously developed scenarios and deterministic estimates of MDA duration, the financial and economic costs of interrupting LF transmission under varying rates of MDA scale-up were then modelled using a micro-costing approach. The elimination scenario, which includes countries that previously undertook MDA, is estimated to cost 929 million USD (95% Credible Interval: 884m-972m). Proceeding to eradication is anticipated to require a higher financial investment, estimated at 1.24 billion USD (1.17bn-1.30bn) in the eradication III scenario (immediate scale-up), with eradication II (intensified scale-up) projected at 1.27 billion USD (1.21bn-1.33bn), and eradication I (slow scale-up) estimated at 1.29 billion USD (1.23bn-1.34bn). The economic costs of the eradication III scenario are estimated at approximately 7.57 billion USD (7.12bn-7.94bn), while the elimination scenario is projected to have an economic cost of 5.21 billion USD (4.91bn-5.45bn). Countries in the AFRO region will require the greatest investment to reach elimination or eradication, but also stand to gain the most in cost savings. Across all scenarios, capacity strengthening and advocacy and communication represent the greatest financial costs, whereas mapping, post-MDA surveillance, and administration comprise the least. Though challenging to implement, our results indicate that financial and economic savings are greatest under the eradication III scenario. Thus, if eradication for LF is the objective, accelerated scale-up is projected to be the best investment.
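
    The discounting step can be made concrete in a few lines; the cost stream below is a placeholder, not the study's projection.

```python
def net_present_value(costs, rate=0.03):
    """Present value of a stream of annual costs (year 0, 1, 2, ...)
    at the paper's 3% discount rate."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs))

# Illustrative: a flat 100 million USD per year over a 15-year horizon
annual = [100e6] * 15
print(f"NPV: {net_present_value(annual) / 1e9:.2f} billion USD")
```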

  13. The shape parameter and its modification for defining coastal profiles

    NASA Astrophysics Data System (ADS)

    Türker, Umut; Kabdaşli, M. Sedat

    2009-03-01

    The shape parameter is important for the theoretical description of sandy coastal profiles. This parameter has previously been defined as a function of the sediment-settling velocity. However, the settling velocity cannot be characterized over a wide range of sediment grains, which in turn limits the calculation of the shape parameter over a wide range. This paper provides a simpler and faster analytical equation to describe the shape parameter. The validity of the equation has been tested and compared with previously estimated values given in both graphical and tabular forms. The results of this study indicate that the analytical solution of the shape parameter improves the usability of profile prediction relative to graphical solutions, giving better results both in the surf zone and offshore.
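
    The paper's new analytical equation is not reproduced in the abstract, but the role the shape parameter plays can be sketched with the classical equilibrium profile h = A·y^(2/3) and a commonly cited empirical fit of A to settling velocity; both are assumptions for illustration, not the paper's expressions.

```python
import numpy as np

def shape_parameter(w_cms):
    # Commonly cited empirical fit A = 0.067 * w**0.44 (w in cm/s);
    # a stand-in for the paper's analytical expression.
    return 0.067 * w_cms ** 0.44

def equilibrium_profile(y_m, A):
    # Depth h (m) at distance y (m) offshore: h = A * y**(2/3)
    return A * y_m ** (2.0 / 3.0)

A = shape_parameter(2.0)  # sand with ~2 cm/s settling velocity
print(equilibrium_profile(np.array([10.0, 50.0, 100.0]), A))
```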

  14. Bayesian population analysis of a washin-washout physiologically based pharmacokinetic model for acetone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moerk, Anna-Karin, E-mail: anna-karin.mork@ki.s; Jonsson, Fredrik; Pharsight, a Certara company, St. Louis, MO

    2009-11-01

    The aim of this study was to derive improved estimates of population variability and uncertainty of physiologically based pharmacokinetic (PBPK) model parameters, especially of those related to the washin-washout behavior of polar volatile substances. This was done by optimizing a previously published washin-washout PBPK model for acetone in a Bayesian framework using Markov chain Monte Carlo simulation. The sensitivity of the model parameters was investigated by creating four different prior sets, where the uncertainty surrounding the population variability of the physiological model parameters was given values corresponding to coefficients of variation of 1%, 25%, 50%, and 100%, respectively. The PBPK model was calibrated to toxicokinetic data from two previous studies where 18 volunteers were exposed to 250-550 ppm of acetone at various levels of workload. The updated PBPK model provided a good description of the concentrations in arterial, venous, and exhaled air. The precision of most of the model parameter estimates was improved. New information was particularly gained on the population distribution of the parameters governing the washin-washout effect. The results presented herein provide a good starting point to estimate the target dose of acetone in the working and general populations for risk assessment purposes.

  15. Implications of lower risk thresholds for statin treatment in primary prevention: analysis of CPRD and simulation modelling of annual cholesterol monitoring.

    PubMed

    McFadden, Emily; Stevens, Richard; Glasziou, Paul; Perera, Rafael

    2015-01-01

    To estimate numbers affected by a recent change in UK guidelines for statin use in primary prevention of cardiovascular disease. We modelled cholesterol ratio over time using a sample of 45,151 men (≥40 years) and 36,168 women (≥55 years) in 2006, without statin treatment or previous cardiovascular disease, from the Clinical Practice Research Datalink. Using simulation methods, we estimated numbers indicated for new statin treatment, if cholesterol was measured annually and used in the QRISK2 CVD risk calculator, using the previous 20% and newly recommended 10% thresholds. We estimate that 58% of men and 55% of women would be indicated for treatment by five years and 71% of men and 73% of women by ten years using the 20% threshold. Using the proposed threshold of 10%, 84% of men and 90% of women would be indicated for treatment by five years and 92% of men and 98% of women by ten years. The proposed change of risk threshold from 20% to 10% would result in the substantial majority of those recommended for cholesterol testing being indicated for statin treatment. Implications depend on the value of statins in those at low to medium risk, and whether there are harms. Copyright © 2014. Published by Elsevier Inc.

  16. Extending Theory-Based Quantitative Predictions to New Health Behaviors.

    PubMed

    Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O

    2016-04-01

    Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.

  17. Poor agreement between continuous measurements of energy expenditure and routinely used prediction equations in intensive care unit patients.

    PubMed

    Reid, Clare L

    2007-10-01

    A wide variation in 24 h energy expenditure has been demonstrated previously in intensive care unit (ICU) patients. The accuracy of equations used to predict energy expenditure in critically ill patients is frequently compared with single or short-duration indirect calorimetry measurements, which may not represent the total energy expenditure (TEE) of these patients. To take into account this variability in energy expenditure, estimates have been compared with continuous indirect calorimetry measurements. Continuous (24 h/day for 5 days) indirect calorimetry measurements were made in patients requiring mechanical ventilation for 5 days. The Harris-Benedict, Schofield and Ireton-Jones equations and the American College of Chest Physicians (ACCP) recommendation of 25 kcal/kg/day were used to estimate energy requirements. A total of 192 days of measurements, in 27 patients, were available for comparison with the different equations. Agreement between the equations and measured values was poor. The Harris-Benedict, Schofield and ACCP equations placed more of their estimates (66%, 66% and 65%, respectively) within 80-110% of TEE values. However, each of these equations would have resulted in clinically significant underfeeding (<80% of TEE) in 16%, 15% and 22% of patients, respectively, and overfeeding (>110% of TEE) in 18%, 19% and 13% of patients, respectively. Limits of agreement between the different equations and TEE values were unacceptably wide. Prediction equations may result in significant under- or overfeeding in the clinical setting.
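
    The banding used to judge clinical adequacy is simple to make explicit; the sketch below applies it to an ACCP-style 25 kcal/kg/day estimate with illustrative numbers.

```python
def feeding_category(predicted_kcal, tee_kcal):
    """<80% of measured TEE = underfeeding, 80-110% = acceptable,
    >110% = overfeeding (the bands used in the paper)."""
    ratio = predicted_kcal / tee_kcal
    if ratio < 0.80:
        return "underfeeding"
    if ratio <= 1.10:
        return "acceptable"
    return "overfeeding"

# ACCP recommendation of 25 kcal/kg/day for an 80 kg patient versus an
# illustrative measured TEE of 2600 kcal/day
print(feeding_category(25 * 80, 2600))  # underfeeding (ratio ~0.77)
```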

  18. Estimating suspended sediment load with multivariate adaptive regression spline, teaching-learning based optimization, and artificial bee colony models.

    PubMed

    Yilmaz, Banu; Aras, Egemen; Nacar, Sinan; Kankal, Murat

    2018-05-23

    The functional life of a dam is often determined by the rate of sediment delivery to its reservoir. Therefore, an accurate estimate of the sediment load in rivers with dams is essential for designing and predicting a dam's useful lifespan. The most credible method is direct measurement of sediment input, but this can be very costly and cannot always be implemented at all gauging stations. In this study, we tested various regression models to estimate suspended sediment load (SSL) at two gauging stations on the Çoruh River in Turkey, including artificial bee colony (ABC), teaching-learning-based optimization algorithm (TLBO), and multivariate adaptive regression splines (MARS). These models were also compared with one another and with classical regression analyses (CRA). Streamflow values and previously collected SSL data were used as model inputs, with predicted SSL data as output. Two different training and testing dataset configurations were used to reinforce the model accuracy. For the MARS method, the root mean square error value was found to range between 35% and 39% for the two test gauging stations, which was lower than the errors for the other models. Error values were even lower (7% to 15%) using another dataset. Our results indicate that simultaneous measurements of streamflow with SSL provide the most effective parameter for obtaining accurate predictive models and that MARS is the most accurate model for predicting SSL. Copyright © 2017 Elsevier B.V. All rights reserved.
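
    As a rough sketch of the MARS step, the example below uses the open-source py-earth implementation (an assumption: the authors' software is not named in the abstract) with synthetic streamflow/SSL pairs.

```python
import numpy as np
from pyearth import Earth  # pip install sklearn-contrib-py-earth

rng = np.random.default_rng(0)
flow = rng.uniform(20, 400, 300)          # streamflow, m^3/s
ssl_prev = rng.uniform(50, 5000, 300)     # previously collected SSL
ssl = 0.05 * flow ** 1.6 + 0.2 * ssl_prev + rng.normal(0, 50, 300)

X = np.column_stack([flow, ssl_prev])
model = Earth(max_degree=2).fit(X, ssl)   # piecewise-linear basis
rmse = np.sqrt(np.mean((model.predict(X) - ssl) ** 2))
```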

  19. Incorporation of diffusion-weighted magnetic resonance imaging data into a simple mathematical model of tumor growth

    NASA Astrophysics Data System (ADS)

    Atuegwu, N. C.; Colvin, D. C.; Loveless, M. E.; Xu, L.; Gore, J. C.; Yankeelov, T. E.

    2012-01-01

    We build on previous work to show how serial diffusion-weighted MRI (DW-MRI) data can be used to estimate proliferation rates in a rat model of brain cancer. Thirteen rats were inoculated intracranially with 9L tumor cells; eight rats were treated with the chemotherapeutic drug 1,3-bis(2-chloroethyl)-1-nitrosourea and five rats were untreated controls. All animals underwent DW-MRI immediately before, one day and three days after treatment. Values of the apparent diffusion coefficient (ADC) were calculated from the DW-MRI data and then used to estimate the number of cells in each voxel and also for whole tumor regions of interest. The data from the first two imaging time points were then used to estimate the proliferation rate of each tumor. The proliferation rates were used to predict the number of tumor cells at day three, and this was correlated with the corresponding experimental data. The voxel-by-voxel analysis yielded Pearson's correlation coefficients ranging from -0.06 to 0.65, whereas the region of interest analysis provided Pearson's and concordance correlation coefficients of 0.88 and 0.80, respectively. Additionally, the ratio of positive to negative proliferation values was used to separate the treated and control animals (p <0.05) at an earlier point than the mean ADC values. These results further illustrate how quantitative measurements of tumor state obtained non-invasively by imaging can be incorporated into mathematical models that predict tumor growth.
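
    A hedged sketch of the two modelling steps follows: a linear ADC-to-cell-number mapping and a two-time-point logistic growth rate. The carrying capacity θ, the free-water ADC, and all numbers are assumptions for illustration, not the study's values.

```python
import numpy as np

def cells_from_adc(adc, adc_water, adc_min, theta):
    """Voxels at the free-water ADC are treated as empty and voxels at
    the minimum observed ADC as fully packed (theta cells)."""
    return theta * (adc_water - adc) / (adc_water - adc_min)

def logistic_rate(n1, n2, dt_days, theta):
    """Proliferation rate k from counts at two time points, assuming
    logistic growth N(t) = theta*N0 / (N0 + (theta - N0)*exp(-k*t))."""
    return np.log(n2 * (theta - n1) / (n1 * (theta - n2))) / dt_days

theta = 1.0e6                                        # assumed capacity
n1 = cells_from_adc(1.2e-3, 2.5e-3, 0.6e-3, theta)   # ADC in mm^2/s
n2 = cells_from_adc(1.0e-3, 2.5e-3, 0.6e-3, theta)
k = logistic_rate(n1, n2, dt_days=1.0, theta=theta)  # predicts day 3
```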

  20. Simulation of the airwave caused by the Chelyabinsk superbolide

    NASA Astrophysics Data System (ADS)

    Avramenko, Mikhail I.; Glazyrin, Igor V.; Ionov, Gennady V.; Karpeev, Artem V.

    2014-06-01

    Numerical simulations were carried out to model the propagation of an airwave from the fireball that passed over Chelyabinsk (Russia) on 15 February 2013. The airburst of the Chelyabinsk meteoroid occurred due to its catastrophic fragmentation in the atmosphere. Simulations of the space-time distribution of energy deposition during the airburst were done using a novel fragmentation model based on dimensionality considerations and analogy to the fission chain reaction in fissile materials. To get an estimate of the airburst energy, observed values of the airwave arrival times to different populated localities were retrieved from video records available on the Internet. The calculated arrival times agree well with the observed values for all the localities. Energy deposition in the atmosphere obtained from observations of the airwave arrival times was found to be 460 ± 60 kt in trinitrotoluene (TNT) equivalent. We also obtained an independent estimate for the deposited energy, 450-160+200 kt TNT from detecting the air increment velocity due to the wave passage in Chelyabinsk. Assuming that the energy of about 90 kt TNT was irradiated in the form of visible light and infrared radiation, as registered with optical sensors [Yeomans and Chodas, 2013], one can value the total energy release to be about 550 kt TNT which is in agreement with previous estimates from infrasound registration and from optical sensors data. The overpressure amplitude and its positive phase duration in the airwave that reached the city of Chelyabinsk were calculated to be about 2 kPa and 10 s accordingly.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nisbet, A.F.; Woodman, R.F.M.

    A database of soil-to-plant transfer factors for radiocesium and radiostrontium has been compiled for arable crops from published and unpublished sources. The database is more extensive than previous compilations of data published by the International Union of Radioecologists, containing new information for Scandinavia and Greece in particular. It also contains ancillary data on important soil characteristics. The database is sub-divided into 28 soil-crop combinations, covering four soil types and seven crop groups. Statistical analyses showed that transfer factors for radiocesium could not generally be predicted as a function of climatic region, type of experiment, age of contamination, or silt characteristics. However, significant relationships accounting for more than 30% of the variability in transfer factor were identified between transfer factors for radiostrontium and soil pH/organic matter status for a few soil-crop combinations. Best estimate transfer factors for radiocesium and radiostrontium were calculated for 28 soil-crop combinations, based on their geometric means; only the edible parts were considered. To predict the likely value of future individual transfer factors, 95% confidence intervals were also derived. A comparison of best estimate transfer factors derived in this study with recommended values published by the International Union of Radioecologists in 1989 and 1992 was made for comparable soil-crop groupings. While there were no significant differences between the best estimate values derived in this study and the 1992 data, radiological assessments that still use 1989 data may be unnecessarily cautious.
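
    Because the best estimates are geometric means and the intervals are meant to bound future individual transfer factors, the statistics are naturally computed on the log scale; a minimal sketch with made-up values:

```python
import numpy as np
from scipy import stats

def geometric_mean_prediction_interval(tf, conf=0.95):
    """Geometric mean plus a t-based prediction interval (computed on
    the log scale) for a future individual transfer factor."""
    logs = np.log(tf)
    n = logs.size
    half = (stats.t.ppf(0.5 + conf / 2, n - 1)
            * logs.std(ddof=1) * np.sqrt(1 + 1 / n))
    gm = np.exp(logs.mean())
    return gm, np.exp(logs.mean() - half), np.exp(logs.mean() + half)

# Illustrative radiocesium transfer factors for one soil-crop combination
tf = np.array([0.012, 0.034, 0.008, 0.021, 0.015, 0.027])
print(geometric_mean_prediction_interval(tf))
```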

  2. Fusing Continuous-Valued Medical Labels Using a Bayesian Model.

    PubMed

    Zhu, Tingting; Dunkley, Nic; Behar, Joachim; Clifton, David A; Clifford, Gari D

    2015-12-01

    With the rapid increase in the volume of time series medical data available through wearable devices, there is a need to employ automated algorithms to label data. Examples of labels include interventions, changes in activity (e.g. sleep) and changes in physiology (e.g. arrhythmias). However, automated algorithms tend to be unreliable, resulting in lower quality care. Expert annotations are scarce, expensive, and prone to significant inter- and intra-observer variance. To address these problems, a Bayesian Continuous-valued Label Aggregator (BCLA) is proposed to provide a reliable estimation of the aggregated labels while accurately inferring the precision and bias of each algorithm. The BCLA was applied to QT interval (pro-arrhythmic indicator) estimation from the electrocardiogram using labels from the 2006 PhysioNet/Computing in Cardiology Challenge database. It was compared to the mean, median, and a previously proposed Expectation Maximization (EM) label aggregation approach. While accurately predicting each labelling algorithm's bias and precision, the root-mean-square error of the BCLA was 11.78 ± 0.63 ms, significantly outperforming the best Challenge entry (15.37 ± 2.13 ms) as well as the EM, mean, and median voting strategies (14.76 ± 0.52, 17.61 ± 0.55, and 14.43 ± 0.57 ms respectively with p < 0.0001). The BCLA could therefore provide accurate estimation for medical continuous-valued label tasks in an unsupervised manner even when the ground truth is not available.
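
    The full BCLA is a Bayesian model, but its core idea of jointly estimating the truth and each labeller's bias and precision can be sketched with a simple EM-style iteration; this is a simplified analogue, not the published algorithm.

```python
import numpy as np

def fuse_labels(labels, n_iter=50):
    """labels: (n_algorithms, n_samples) continuous annotations.
    Alternate between estimating the truth as a precision-weighted mean
    of bias-corrected labels and re-estimating bias/variance."""
    bias = np.zeros(labels.shape[0])
    var = np.ones(labels.shape[0])
    for _ in range(n_iter):
        w = 1.0 / var
        truth = (w[:, None] * (labels - bias[:, None])).sum(0) / w.sum()
        resid = labels - truth
        bias = resid.mean(axis=1)
        var = resid.var(axis=1) + 1e-9
    return truth, bias, var

# Three synthetic annotators of a QT-like quantity (ms)
rng = np.random.default_rng(1)
t = rng.normal(400, 20, 500)
obs = np.stack([t + 5 + rng.normal(0, 5, 500),    # biased but precise
                t - 10 + rng.normal(0, 15, 500),
                t + rng.normal(0, 25, 500)])
truth, bias, var = fuse_labels(obs)
```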

  3. Estimation of speciated and total mercury dry deposition at monitoring locations in eastern and central North America

    USGS Publications Warehouse

    Zhang, L.; Blanchard, P.; Gay, D.A.; Prestbo, E.M.; Risch, M.R.; Johnson, D.; Narayan, J.; Zsolway, R.; Holsen, T.M.; Miller, E.K.; Castro, M.S.; Graydon, J.A.; St. Louis, V.L.; Dalziel, J.

    2012-01-01

    Dry deposition of speciated mercury, i.e., gaseous oxidized mercury (GOM), particulate-bound mercury (PBM), and gaseous elemental mercury (GEM), was estimated for the year 2008–2009 at 19 monitoring locations in eastern and central North America. Dry deposition estimates were obtained by combining monitored two- to four-hourly speciated ambient concentrations with modeled hourly dry deposition velocities (Vd) calculated using forecasted meteorology. Annual dry deposition of GOM+PBM was estimated to be in the range of 0.4 to 8.1 μg m−2 at these locations with GOM deposition being mostly five to ten times higher than PBM deposition, due to their different modeled Vd values. Net annual GEM dry deposition was estimated to be in the range of 5 to 26 μg m−2 at 18 sites and 33 μg m−2 at one site. The estimated dry deposition agrees very well with limited surrogate-surface dry deposition measurements of GOM and PBM, and also agrees with litterfall mercury measurements conducted at multiple locations in eastern and central North America. This study suggests that GEM contributes much more than GOM+PBM to the total dry deposition at the majority of the sites considered here; the only exception is at locations close to significant point sources where GEM and GOM+PBM contribute equally to the total dry deposition. The relative magnitude of the speciated dry deposition and their good comparisons with litterfall deposition suggest that mercury in litterfall originates primarily from GEM, which is consistent with the limited number of previous field studies. The study also supports previous analyses suggesting that total dry deposition of mercury is equal to, if not more important than, wet deposition of mercury on a regional scale in eastern North America.
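
    The underlying arithmetic (flux = concentration × deposition velocity, accumulated over the sampling intervals) is easy to make concrete; the values below are placeholders, not measurements from the network.

```python
import numpy as np

def annual_dry_deposition(conc_ng_m3, vd_cm_s, hours_per_sample=2):
    """Sum of per-interval dry deposition (concentration times Vd)
    over one year, returned in ug m-2 yr-1."""
    flux_ng_m2_s = conc_ng_m3 * (vd_cm_s / 100.0)   # ng m-2 s-1
    seconds = hours_per_sample * 3600
    return (flux_ng_m2_s * seconds).sum() / 1000.0  # ng -> ug

# Illustrative year of 2-hourly GOM concentrations and modelled Vd
rng = np.random.default_rng(2)
conc = rng.lognormal(0.0, 1.0, 4380) * 0.005        # ng/m^3
vd = rng.uniform(0.5, 2.0, 4380)                    # cm/s
print(annual_dry_deposition(conc, vd))              # ~ a few ug m-2
```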

  4. THE INCLINATION OF THE SOFT X-RAY TRANSIENT A0620-00 AND THE MASS OF ITS BLACK HOLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cantrell, Andrew G.; Bailyn, Charles D.; Orosz, Jerome A.

    2010-02-20

    We analyze photometry of the soft X-ray transient A0620-00 spanning nearly 30 years, including previously published and previously unpublished data. Previous attempts to determine the inclination of A0620 using subsets of these data have yielded a wide range of measured values of i. Differences in the measured value of i have been due to changes in the shape of the light curve and uncertainty regarding the contamination from the disk. We give a new technique for estimating the disk fraction and find that disk light is significant in all light curves, even in the infrared. We also find that all changes in the shape and normalization of the light curve originate in a variable disk component. After accounting for this disk component, we find that all the data, including light curves of significantly different shapes, point to a consistent value of i. Combining results from many separate data sets, we find i = 51.0° ± 0.9°, implying M = 6.6 ± 0.25 M_sun. Using our dynamical model and zero-disk stellar VIH magnitudes, we find d = 1.06 ± 0.12 kpc. Understanding the disk origin of nonellipsoidal variability may assist with making reliable determinations of i in other systems, and the fluctuations in disk light may provide a new observational tool for understanding the three-dimensional structure of the accretion disk.

  5. The Inclination of the Soft X-Ray Transient A0620-00 and the Mass of its Black Hole

    NASA Astrophysics Data System (ADS)

    Cantrell, Andrew G.; Bailyn, Charles D.; Orosz, Jerome A.; McClintock, Jeffrey E.; Remillard, Ronald A.; Froning, Cynthia S.; Neilsen, Joseph; Gelino, Dawn M.; Gou, Lijun

    2010-02-01

    We analyze photometry of the soft X-ray transient A0620-00 spanning nearly 30 years, including previously published and previously unpublished data. Previous attempts to determine the inclination of A0620 using subsets of these data have yielded a wide range of measured values of i. Differences in the measured value of i have been due to changes in the shape of the light curve and uncertainty regarding the contamination from the disk. We give a new technique for estimating the disk fraction and find that disk light is significant in all light curves, even in the infrared. We also find that all changes in the shape and normalization of the light curve originate in a variable disk component. After accounting for this disk component, we find that all the data, including light curves of significantly different shapes, point to a consistent value of i. Combining results from many separate data sets, we find i = 51.0° ± 0.9°, implying M = 6.6 ± 0.25 M_sun. Using our dynamical model and zero-disk stellar VIH magnitudes, we find d = 1.06 ± 0.12 kpc. Understanding the disk origin of nonellipsoidal variability may assist with making reliable determinations of i in other systems, and the fluctuations in disk light may provide a new observational tool for understanding the three-dimensional structure of the accretion disk.

  6. A re-evaluation of the relativistic redshift on frequency standards at NIST, Boulder, Colorado, USA

    NASA Astrophysics Data System (ADS)

    Pavlis, Nikolaos K.; Weiss, Marc A.

    2017-08-01

    We re-evaluated the relativistic redshift correction applicable to the frequency standards at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, USA, based on a precise GPS survey of three benchmarks on the roof of the building where these standards had been previously housed, and on global and regional geoid models supported by data from the GRACE and GOCE missions, including EGM2008, USGG2009, and USGG2012. We also evaluated the redshift offset based on the published NAVD88 geopotential number of the leveling benchmark Q407 located on the side of Building 1 at NIST, Boulder, Colorado, USA, after estimating the bias of the NAVD88 datum at our specific location. Based on these results, our current best estimate of the relativistic redshift correction, if frequency standards were located at the height of the leveling benchmark Q407 outside the second floor of Building 1, with respect to the EGM2008 geoid whose potential has been estimated to be W0 = 62 636 855.69 m² s⁻², is equal to (-1798.50 ± 0.06) × 10⁻¹⁶. The corresponding value, with respect to an equipotential surface defined by the International Astronomical Union's (IAU) adopted value of W0 = 62 636 856.0 m² s⁻², is (-1798.53 ± 0.06) × 10⁻¹⁶. These values are comparable to the value of (-1798.70 ± 0.30) × 10⁻¹⁶, estimated by Pavlis and Weiss in 2003 with respect to an equipotential surface defined by W0 = 62 636 856.88 m² s⁻². The minus sign implies that clocks run faster in the laboratory in Boulder than a corresponding clock located on the geoid. Contribution of US government, not subject to Copyright.
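
    The size of the correction follows directly from the first-order redshift formula: the fractional frequency offset of a clock at geopotential W relative to the geoid is -(W0 - W)/c², and W0 - W ≈ g·h for a site at height h above the geoid. A back-of-envelope check with assumed values for Boulder:

```python
# First-order relativistic redshift: -(W0 - W_site)/c^2 ~ -(g*h)/c^2
c = 299_792_458.0    # m/s
g = 9.796            # m/s^2, assumed local gravity
h = 1645.0           # m, assumed height above the geoid
print(f"{-(g * h) / c**2:.2e}")  # ~ -1.79e-13, i.e. about -1790e-16
```

    The sign matches the paper's convention: the clock in the Boulder laboratory runs faster than one on the geoid.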

  7. The option value of delay in health technology assessment.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2008-01-01

    Processes of health technology assessment (HTA) inform decisions under uncertainty about whether to invest in new technologies based on evidence of incremental effects, incremental cost, and incremental net monetary benefit (INMB). An option value to delaying such decisions to wait for further evidence is suggested in the usual case of interest, in which the prior distribution of INMB is positive but uncertain. Methods of estimating the option value of delaying decisions to invest have previously been developed for the case where investments are irreversible with an uncertain payoff over time and information is assumed fixed. However, in HTA, decision uncertainty relates to information (evidence) on the distribution of INMB. This article demonstrates that the option value of delaying decisions to allow collection of further evidence can be estimated as the expected value of sample information (EVSI). For irreversible decisions, delay and trial (DT) is demonstrated to be preferred to adopt and no trial (AN) when the EVSI exceeds the expected costs of information, including the expected opportunity costs of not treating patients with the new therapy. For reversible decisions, adopt and trial (AT) becomes a potentially optimal strategy, but costs of reversal are shown to reduce the EVSI of this strategy due to both a lower probability of reversal being optimal and lower payoffs when reversal is optimal. Hence, decision makers are generally shown to face joint research and reimbursement decisions (AN, DT and AT), with the optimal choice dependent on costs of reversal as well as opportunity costs of delay and the distribution of prior INMB.

  8. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function towards systematically shallower inclinations. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with the desired accuracy and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.

  9. Cost-benefit analysis of establishing and operating radiation oncology services in Fiji.

    PubMed

    Kim, Eunkyoung; Cho, Yoon-Min; Kwon, Soonman; Park, Kunhee

    2017-10-01

    Rising demand for services for cancer patients has been recognised by the Government of Fiji as a national health priority, and increasing attention has been paid to the lack of radiation therapy (radiotherapy) services in Fiji. This study aims to estimate and compare the costs and benefits of introducing radiation oncology services in Fiji from the societal perspective. The time horizon for the cost-benefit analysis (CBA) was 15 years, from 2021 to 2035. The benefits and costs were converted to present values for 2016. Estimates for the CBA model were taken from previous studies and expert opinions, and from data obtained during field visits to Fiji in January 2016. Sensitivity analyses with changing assumptions were undertaken. The estimated net benefit, applying the national minimum wage (NMW) to measure the monetary value of life-years gained, was -31,624,421 FJD, with a benefit-cost (B/C) ratio of 0.69. If gross national income (GNI) per capita was used for the value of life years, the net benefit was 3,975,684 FJD (B/C ratio: 1.04). With a pessimistic scenario, establishing the center appeared not to be cost-beneficial, with a net benefit of -53,634,682 FJD (B/C ratio: 0.46); the net benefit with an optimistic scenario was estimated at 23,178,189 FJD (B/C ratio: 1.20). Based on the CBA results from using GNI per capita instead of the NMW, this project would be cost-beneficial. Introducing a radiation oncology center in Fiji would have potential impacts on financial sustainability, financial protection, and the accessibility and equity of the health system. Copyright © 2017 World Health Organization. Published by Elsevier Ltd. All rights reserved.

  10. Continuous non-contact vital sign monitoring in neonatal intensive care unit

    PubMed Central

    Guazzi, Alessandro; Jorge, João; Davis, Sara; Watkinson, Peter; Green, Gabrielle; Shenvi, Asha; McCormick, Kenny; Tarassenko, Lionel

    2014-01-01

    Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal. PMID:26609384

  11. Polarizabilities and hyperpolarizabilities for the atoms Al, Si, P, S, Cl, and Ar: Coupled cluster calculations.

    PubMed

    Lupinetti, Concetta; Thakkar, Ajit J

    2005-01-22

    Accurate static dipole polarizabilities and hyperpolarizabilities are calculated for the ground states of the Al, Si, P, S, Cl, and Ar atoms. The finite-field computations use energies obtained with various ab initio methods including Møller-Plesset perturbation theory and the coupled cluster approach. Excellent agreement with experiment is found for argon. The experimental α for Al is likely to be in error. Only limited comparisons are possible for the other atoms because hyperpolarizabilities have not been reported previously for most of these atoms. Our recommended values of the mean dipole polarizability (in the order Al-Ar) are α = 57.74, 37.17, 24.93, 19.37, 14.57, and 11.085 e²a₀²E_h⁻¹, with an error estimate of ±0.5%. The recommended values of the mean second dipole hyperpolarizability (in the order Al-Ar) are γ = 2.02 × 10⁵, 4.31 × 10⁴, 1.14 × 10⁴, 6.51 × 10³, 2.73 × 10³, and 1.18 × 10³ e⁴a₀⁴E_h⁻³, with an error estimate of ±2%. Our recommended polarizability anisotropy values are Δα = -25.60, 8.41, -3.63, and 1.71 e²a₀²E_h⁻¹ for Al, Si, S, and Cl, respectively, with an error estimate of ±1%. The recommended hyperpolarizability anisotropies are Δγ = -3.88 × 10⁵, 4.16 × 10⁴, -7.00 × 10³, and 1.65 × 10³ e⁴a₀⁴E_h⁻³ for Al, Si, S, and Cl, respectively, with an error estimate of ±4%. (c) 2005 American Institute of Physics.
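
    The finite-field procedure amounts to numerical differentiation of the field-dependent energy; a minimal sketch, assuming the standard expansion E(F) = E0 - ½αF² - (1/24)γF⁴ for the field-dependent energy:

```python
def alpha_gamma_finite_field(energy, F):
    """Central-difference estimates of alpha and gamma from energies at
    fields 0, +-F, +-2F (atomic units); `energy` is any callable."""
    E0, Ep, Em = energy(0.0), energy(F), energy(-F)
    E2p, E2m = energy(2 * F), energy(-2 * F)
    alpha = -(Ep - 2 * E0 + Em) / F**2
    gamma = -(E2p - 4 * Ep + 6 * E0 - 4 * Em + E2m) / F**4
    return alpha, gamma

# Toy energy surface with alpha = 11.08 and gamma = 1180 (argon-like)
E = lambda f: -0.5 * 11.08 * f**2 - 1180.0 / 24.0 * f**4
print(alpha_gamma_finite_field(E, F=0.005))
```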

  12. Statistics of the fractional polarization of extragalactic dusty sources in Planck HFI maps

    NASA Astrophysics Data System (ADS)

    Bonavera, L.; González-Nuevo, J.; De Marco, B.; Argüeso, F.; Toffolatti, L.

    2017-11-01

    We estimate the average fractional polarization at 143, 217 and 353 GHz of a sample of 4697 extragalactic dusty sources by applying stacking technique. The sample is selected from the second version of the Planck Catalogue of Compact Sources at 857 GHz, avoiding the region inside the Planck Galactic mask (fsky ∼ 60 per cent). We recover values for the mean fractional polarization at 217 and 353 GHz of (3.10 ± 0.75) per cent and (3.65 ± 0.66) per cent, respectively, whereas at 143 GHz we give a tentative value of (3.52 ± 2.48) per cent. We discuss the possible origin of the measured polarization, comparing our new estimates with those previously obtained from a sample of radio sources. We test different distribution functions and we conclude that the fractional polarization of dusty sources is well described by a log-normal distribution, as determined in the radio band studies. For this distribution we estimate μ217GHz = 0.3 ± 0.5 [that would correspond to a median fractional polarization of Πmed = (1.3 ± 0.7) per cent] and μ353GHz = 0.7 ± 0.4 (Πmed = (2.0 ± 0.8) per cent), σ217GHz = 1.3 ± 0.2 and σ353GHz = 1.1 ± 0.2. With these values we estimate the source number counts in polarization and the contribution given by these sources to the Cosmic Microwave Background B-mode angular power spectrum at 217, 353, 600 and 800 GHz. We conclude that extragalactic dusty sources might be an important contaminant for the primordial B-mode at frequencies >217 GHz.
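
    Given the fitted log-normal parameters, the quoted medians follow directly (median = exp(μ), mean = exp(μ + σ²/2)); for example, at 353 GHz:

```python
import numpy as np

mu, sigma = 0.7, 1.1          # fitted 353 GHz values from the abstract
median_pi = np.exp(mu)                # ~2.0 per cent, as quoted
mean_pi = np.exp(mu + sigma**2 / 2)   # ~3.7 per cent, near the stacked mean
print(median_pi, mean_pi)
```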

  13. Continuous non-contact vital sign monitoring in neonatal intensive care unit.

    PubMed

    Villarroel, Mauricio; Guazzi, Alessandro; Jorge, João; Davis, Sara; Watkinson, Peter; Green, Gabrielle; Shenvi, Asha; McCormick, Kenny; Tarassenko, Lionel

    2014-09-01

    Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal.

  14. Electrochemical estimation of the polyphenol index in wines using a laccase biosensor.

    PubMed

    Gamella, M; Campuzano, S; Reviejo, A J; Pingarrón, J M

    2006-10-18

    The use of a laccase biosensor, under both batch and flow injection (FI) conditions, for a rapid and reliable amperometric estimation of the total content of polyphenolic compounds in wines is reported. The enzyme was immobilized by cross-linking with glutaraldehyde onto a glassy carbon electrode. Caffeic acid and gallic acid were selected as standard compounds to carry out such estimation. Experimental variables such as the enzyme loading, the applied potential, and the pH value were optimized, and different aspects regarding the operational stability of the laccase biosensor were evaluated. Using batch amperometry at -200 mV, the detection limits obtained were 2.6 × 10⁻³ and 7.2 × 10⁻⁴ mg L⁻¹ for gallic acid and caffeic acid, respectively, which compares advantageously with previous biosensor designs. An extremely simple sample treatment, consisting only of an appropriate dilution of the wine sample with the supporting electrolyte solution (0.1 mol L⁻¹ citrate buffer of pH 5.0), was needed for the amperometric analysis of red, rosé, and white wines. Good correlations were found when the polyphenol indices obtained with the biosensor (in both the batch and FI modes) for different wine samples were plotted versus the results achieved with the classic Folin-Ciocalteu method. Application of the calibration transfer chemometric model (multiplicative fitting) ensured that the confidence intervals (at a significance level of 0.05) for the slope and intercept of the amperometric index versus Folin-Ciocalteu index plots (r = 0.997) included one and zero, respectively. This indicates that the laccase biosensor can be successfully used for the estimation of the polyphenol index in wines when compared with the Folin-Ciocalteu reference method.

  15. Real-Time Airborne Gamma-Ray Background Estimation Using NASVD with MLE and Radiation Transport for Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulisek, Jonathan A.; Schweppe, John E.; Stave, Sean C.

    2015-06-01

    Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments, for which the background may change very rapidly during flight. This exacerbates the challenge of estimating background due to the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this, we have developed a new technique for real-time estimation of background gamma radiation from aerial measurements. This method is built upon the noise-adjusted singular value decomposition (NASVD) technique that was previously developed for estimating the potassium (K), uranium (U), and thorium (T) concentrations in soil post-flight. The method can be calibrated using K, U, and T spectra determined from radiation transport simulations along with basis functions, which may be determined empirically by applying maximum likelihood estimation (MLE) to previously measured airborne gamma-ray spectra. The method was applied to both measured and simulated airborne gamma-ray spectra, with and without man-made radiological source injections. Compared to schemes based on simple averaging, this technique was less sensitive to background contamination from the injected man-made sources and may be particularly useful when the gamma-ray background frequently changes during the course of the flight.
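
    The NASVD core (noise-scale the spectra, truncate the SVD, transform back) can be sketched briefly; this shows the generic technique only, not the calibrated MLE/transport pipeline described above.

```python
import numpy as np

def nasvd_denoise(spectra, n_components=3):
    """spectra: (n_records, n_channels) counts. Scale channels by the
    Poisson noise estimate (sqrt of the mean spectrum), keep the
    leading singular components, and undo the scaling."""
    scale = np.sqrt(np.maximum(spectra.mean(axis=0), 1e-12))
    U, s, Vt = np.linalg.svd(spectra / scale, full_matrices=False)
    s[n_components:] = 0.0
    return (U * s) @ Vt * scale

# Illustrative: 500 Poisson-noisy spectra built from two source shapes
rng = np.random.default_rng(3)
shapes = rng.gamma(2.0, 1.0, (2, 256))
weights = rng.uniform(0.5, 2.0, (500, 2))
spectra = rng.poisson(weights @ shapes).astype(float)
denoised = nasvd_denoise(spectra)
```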

  16. Density Measurements of Low Silica CaO-SiO2-Al2O3 Slags

    NASA Astrophysics Data System (ADS)

    Muhmood, Luckman; Seetharaman, Seshadri

    2010-08-01

    Density measurements of a low-silica CaO-SiO2-Al2O3 system were carried out using the Archimedes principle. A Pt 30 pct Rh bob and wire arrangement was used for this purpose. The results obtained were in good agreement with those obtained from the model developed in the current group as well as with other results reported earlier. The density for the CaO-SiO2 and CaO-Al2O3 binary slag systems was also estimated from the ternary values. The extrapolation of density values for high-silica systems also showed good agreement with previous works. An estimate of the density value of CaO was made from the current experimental data. The density decrease at high temperatures was interpreted based on the silicate structure. As the mole percent of SiO2 was below the 33 pct required for the orthosilicate composition, discrete SiO₄⁴⁻ tetrahedral units would exist in the silicate melt along with O²⁻ ions. The change in melt expansivity may be attributed to ionic expansions in the order Al³⁺-O²⁻ < Ca²⁺-O²⁻ < Ca²⁺-O⁻. Structural changes in the ternary slag could also be correlated to a drastic change in the value of the enthalpy of mixing.

  17. Evaluation of mRNA markers for estimating blood deposition time: Towards alibi testing from human forensic stains with rhythmic biomarkers.

    PubMed

    Lech, Karolina; Liu, Fan; Ackermann, Katrin; Revell, Victoria L; Lao, Oscar; Skene, Debra J; Kayser, Manfred

    2016-03-01

    Determining the time a biological trace was left at a scene of crime reflects a crucial aspect of forensic investigations as - if possible - it would permit testing the sample donor's alibi directly from the trace evidence, helping to link (or not) the DNA-identified sample donor with the crime event. However, reliable and robust methodology is lacking thus far. In this study, we assessed the suitability of mRNA for the purpose of estimating blood deposition time, and its added value relative to melatonin and cortisol, two circadian hormones we previously introduced for this purpose. By analysing 21 candidate mRNA markers in blood samples from 12 individuals collected around the clock at 2h intervals for 36h under real-life, controlled conditions, we identified 11 mRNAs with statistically significant expression rhythms. We then used these 11 significantly rhythmic mRNA markers, with and without melatonin and cortisol also analysed in these samples, to establish statistical models for predicting day/night time categories. We found that although in general mRNA-based estimation of time categories was less accurate than hormone-based estimation, the use of three mRNA markers HSPA1B, MKNK2 and PER3 together with melatonin and cortisol generally enhanced the time prediction accuracy relative to the use of the two hormones alone. Our data best support a model that by using these five molecular biomarkers estimates three time categories, i.e. night/early morning, morning/noon, and afternoon/evening with prediction accuracies expressed as AUC values of 0.88, 0.88, and 0.95, respectively. For the first time, we demonstrate the value of mRNA for blood deposition timing and introduce a statistical model for estimating day/night time categories based on molecular biomarkers, which shall be further validated with additional samples in the future. Moreover, our work provides new leads for molecular approaches on time of death estimation using the significantly rhythmic mRNA markers established here. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Preliminary estimates of annual agricultural pesticide use for counties of the conterminous United States, 2010-11

    USGS Publications Warehouse

    Baker, Nancy T.; Stone, Wesley W.

    2013-01-01

    This report provides preliminary estimates of annual agricultural use of 374 pesticide compounds in counties of the conterminous United States in 2010 and 2011, compiled by means of methods described in Thelin and Stone (2013). U.S. Department of Agriculture (USDA) county-level data for harvested-crop acreage were used in conjunction with proprietary Crop Reporting District (CRD)-level pesticide-use data to estimate county-level pesticide use. Estimated pesticide use (EPest) values were calculated with both the EPest-high and EPest-low methods. The distinction between the EPest-high method and the EPest-low method is that there are more counties with estimated pesticide use for EPest-high compared to EPest-low, owing to differing assumptions about missing survey data (Thelin and Stone, 2013). Preliminary estimates in this report will be revised upon availability of updated crop acreages in the 2012 Agricultural Census, to be published by the USDA in 2014. In addition, estimates for 2008 and 2009 previously published by Stone (2013) will be updated subsequent to the 2012 Agricultural Census release. Estimates of annual agricultural pesticide use are provided as downloadable, tab-delimited files, which are organized by compound, year, state Federal Information Processing Standard (FIPS) code, county FIPS code, and kg (amount in kilograms).

  19. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.

    PubMed

    Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben

    2018-02-22

    This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that have used SPAD as a reference device. The accuracy reached 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimate of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased the accuracy of chlorophyll content estimation by using an optical arrangement that yields both the reflectance and transmittance information, while the required hardware is cheap.
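
    The regression step itself is deliberately simple; a hedged sketch with synthetic calibration data (the study's actual SPAD/spectrophotometer readings are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
reflectance = rng.uniform(0.05, 0.30, 60)
transmittance = rng.uniform(0.05, 0.40, 60)
chlorophyll = (55 - 60 * reflectance - 45 * transmittance
               + rng.normal(0, 1.5, 60))        # synthetic, ug/cm^2

X = np.column_stack([reflectance, transmittance])
model = LinearRegression().fit(X, chlorophyll)
estimate = model.predict([[0.12, 0.20]])        # a new leaf measurement
```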

  20. Predicting the Performance of Chain Saw Machines Based on Shore Scleroscope Hardness

    NASA Astrophysics Data System (ADS)

    Tumac, Deniz

    2014-03-01

    Shore hardness has been used to estimate several physical and mechanical properties of rocks over the last few decades. However, the number of studies correlating Shore hardness with rock cutting performance is quite limited, and relatively little research has been carried out on predicting the performance of chain saw machines. This study differs from previous investigations in that Shore hardness values (SH1, SH2, and the deformation coefficient) are used to determine the field performance of chain saw machines. The measured Shore hardness values are correlated with the physical and mechanical properties of natural stone samples, cutting parameters (normal force, cutting force, and specific energy) obtained from linear cutting tests in unrelieved cutting mode, and the areal net cutting rate of chain saw machines. Two empirical models developed previously are improved for the prediction of the areal net cutting rate of chain saw machines. The first model is based on a revised chain saw penetration index, which uses SH1, machine weight, and useful arm cutting depth as predictors. The second model is based on the power consumed for only cutting the stone, arm thickness, and specific energy as a function of the deformation coefficient. While cutting force has a strong relationship with Shore hardness values, the normal force has a weak or moderate correlation. Uniaxial compressive strength, Cerchar abrasivity index, and density can also be predicted from Shore hardness values.

  1. VHDL-AMS modelling and simulation of a planar electrostatic micromotor

    NASA Astrophysics Data System (ADS)

    Endemaño, A.; Fourniols, J. Y.; Camon, H.; Marchese, A.; Muratet, S.; Bony, F.; Dunnigan, M.; Desmulliez, M. P. Y.; Overton, G.

    2003-09-01

    System level simulation results of a planar electrostatic micromotor, based on analytical models of the static and dynamic torque behaviours, are presented. A planar variable capacitance (VC) electrostatic micromotor designed, fabricated and tested at LAAS (Toulouse) in 1995 is simulated using the high level language VHDL-AMS (VHSIC (very high speed integrated circuits) hardware description language-analog mixed signal). The analytical torque model is obtained by first calculating the overlaps and capacitances between different electrodes based on a conformal mapping transformation. Capacitance values on the order of 10^-16 F and torque values on the order of 10^-11 N m have been calculated, in agreement with previous measurements and simulations of this type of motor. A dynamic model has been developed for the motor by calculating the inertia coefficient and estimating the friction coefficient from values calculated previously for other similar devices. Starting voltage results obtained from experimental measurement are in good agreement with our proposed simulation model. Simulation results of starting voltage values, step response, switching response and continuous operation of the micromotor, based on the dynamic model of the torque, are also presented. Four VHDL-AMS blocks were created, validated and simulated for power supply, excitation control, micromotor torque creation and micromotor dynamics. These blocks can be considered as the initial phase towards the creation of intellectual property (IP) blocks for microsystems in general and electrostatic micromotors in particular.
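
    To make the dynamic model concrete, here is a minimal sketch integrating the rotor equation J dw/dt = T - b*w, the balance of driving torque against viscous friction that such a model implies. All parameter values are order-of-magnitude assumptions, not the measured LAAS values.

    ```python
    import numpy as np

    J = 1e-16    # rotor inertia, kg m^2 (assumed)
    b = 1e-14    # friction coefficient, N m s (assumed)
    T = 1e-11    # driving torque, N m (order of magnitude from the text)

    dt, n = 1e-6, 20000
    omega = np.zeros(n)
    for k in range(1, n):                 # explicit Euler integration
        omega[k] = omega[k - 1] + dt * (T - b * omega[k - 1]) / J
    print(omega[-1], T / b)               # speed approaches the steady state T/b
    ```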

  2. GRACE time-variable gravity field recovery using an improved energy balance approach

    NASA Astrophysics Data System (ADS)

    Shang, Kun; Guo, Junyi; Shum, C. K.; Dai, Chunli; Luo, Jia

    2015-12-01

    A new approach based on the energy conservation principle has been developed for satellite gravimetry missions; it yields more accurate estimates of in situ geopotential difference observables using K-band ranging (KBR) measurements from the Gravity Recovery and Climate Experiment (GRACE) twin-satellite mission. This new approach preserves more gravity information sensed by KBR range-rate measurements and reduces orbit error compared to previous energy balance methods. Results from analysis of 11 yr of GRACE data indicate that the resulting geopotential difference estimates agree well with predicted values from official Level 2 solutions, with a much higher correlation of 0.9, compared to the 0.5-0.8 reported by previously published energy balance studies. We demonstrate that our approach produces a time-variable gravity solution comparable with the Level 2 solutions. The regional GRACE temporal gravity solutions over Greenland reveal that a substantially higher temporal resolution is achievable at 10-d sampling compared to the official monthly solutions, without compromising spatial resolution or requiring regularization or post-processing.

  3. Refining our estimate of atmospheric CO2 across the Eocene-Oligocene climatic transition

    NASA Astrophysics Data System (ADS)

    Heureux, Ana M. C.; Rickaby, Rosalind E. M.

    2015-01-01

    The Eocene-Oligocene transition (EOT), followed by Oligocene isotope event 1 (Oi-1), is a dramatic global switch in climate characterized by deep-sea cooling and the first formation of permanent Antarctic ice. Models and proxy evidence suggest that declining partial pressure of atmospheric carbon dioxide (CO2atm) below a threshold may explain the onset of global cooling and associated ice formation at Oi-1. However, significant uncertainty remains in the estimated values and salient features of reconstructed CO2atm across this interval. In this study, we present novel carbon isotope records from size-separated diatom-associated organic matter (δ13Cdiatom) preserved in silica frustules. Physical preservation of this material allows concurrent investigation of isotopic and cell size information, providing two input parameters for biogeochemical models and the reconstruction of CO2atm. We estimate CO2atm in two ways: first, we use the size and reaction-diffusion kinetics of a cell to calculate a CO2atm threshold; second, we use the calibrated relationship between ɛp(diatom) and carbon dioxide from culture and field studies to create a record of CO2atm prior to and across the transition. Our study, from site 1090 in the Atlantic sector of the Southern Ocean, shows CO2atm values fluctuating between 900 and 1700 ± 100 p.p.m.v. across the EOT, followed by a drop to values on the order of 700 to 800 ± 100 p.p.m.v. just prior to the onset of Oi-1. Our values and magnitude of CO2atm change differ from previous estimates, but confirm the overall trends inferred from boron isotopes and alkenones, including a marked rebound following Oi-1. Due to the intricate nature of the climate system and the complexities of constraining paleo-proxies, this work emphasizes the importance of a multi-proxy approach to estimating CO2atm in order to elucidate its role in the emplacement of Antarctic ice sheets at the EOT.

  4. Source Term Estimates of Radioxenon Released from the BaTek Medical Isotope Production Facility Using External Measured Air Concentrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Cameron, Ian M.; Dumais, Johannes R.

    2015-10-01

    Batan Teknologi (BaTek) operates an isotope production facility in Serpong, Indonesia that supplies 99mTc for use in medical procedures. Atmospheric releases of Xe-133 in the production process at BaTek are known to influence the measurements taken at the closest stations of the International Monitoring System (IMS). The purpose of the IMS is to detect evidence of nuclear explosions, including atmospheric releases of radionuclides. The xenon isotopes released from BaTek are the same as those produced in a nuclear explosion, but the isotopic ratios are different. Knowledge of the magnitude of releases from the isotope production facility helps inform analysts trying to decide whether a specific measurement result came from a nuclear explosion. A stack monitor deployed at BaTek in 2013 measured releases to the atmosphere for several isotopes. The facility operates on a weekly cycle, and the stack data for June 15-21, 2013 show a release of 1.84E13 Bq of Xe-133. Concentrations of Xe-133 in the air are available at the same time from a xenon sampler located 14 km from BaTek. An optimization process using atmospheric transport modeling and the sampler air concentrations produced a release estimate of 1.88E13 Bq. The same optimization process yielded a release estimate of 1.70E13 Bq for a different week in 2012. The stack release value and the two optimized estimates are all within 10 percent of each other. Weekly release estimates of 1.8E13 Bq and a 40 percent facility operation rate yield a rough annual release estimate of 3.7E14 Bq of Xe-133. This value is consistent with previously published estimates of annual releases for this facility, which are based on measurements at three IMS stations. These multiple lines of evidence cross-validate the stack release estimates and the release estimates from atmospheric samplers.
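
    The arithmetic behind the annual figure is simple to check: the weekly release scaled by the stated operation rate over a year.

    ```python
    weekly_release_bq = 1.8e13      # Xe-133 per production week
    operation_rate = 0.40           # fraction of weeks the facility operates
    annual_release_bq = weekly_release_bq * operation_rate * 52
    print(f"{annual_release_bq:.1e} Bq")   # ~3.7e14 Bq per year
    ```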

  5. Above Ground Carbon Stock Estimates of Mangrove Forest Using Worldview-2 Imagery in Teluk Benoa, Bali

    NASA Astrophysics Data System (ADS)

    Candra, E. D.; Hartono; Wicaksono, P.

    2016-11-01

    Mangrove forests act as a carbon sink, helping to reduce CO2 in the atmosphere. Previous studies have found that mangrove forests sequester carbon effectively through photosynthesis and carbon burial in sediment, and remote sensing technology can help quantify the value and distribution of this carbon stock. This study estimates carbon stock using WorldView-2 imagery, both with and without distinguishing mangrove species. WorldView-2 is a high-resolution sensor with 2 m spatial resolution and eight spectral bands, giving it the potential to estimate carbon stock in detail. Vegetation indices such as DVI (Difference Vegetation Index), EVI (Enhanced Vegetation Index), and MRE-SR (Modified Red Edge-Simple Ratio) were modeled together with field data to determine the best vegetation index for estimating carbon stocks. Carbon stock was estimated with an allometric equation specific to each mangrove species. WorldView-2 imagery mapped mangrove species with an accuracy of 80.95%. The total estimated carbon stock in the study area is 35,349.87 tons, with Rhizophora apiculata, Rhizophora mucronata, and Sonneratia alba the dominant species.
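
    A minimal sketch of two of the vegetation indices named above, computed per pixel from WorldView-2-style reflectance bands; the band choices and the EVI coefficients follow common usage and are assumptions, not the study's exact setup.

    ```python
    import numpy as np

    # Toy reflectance rasters for the NIR, red, and blue bands
    rng = np.random.default_rng(0)
    nir, red, blue = rng.uniform(0.1, 0.6, size=(3, 100, 100))

    dvi = nir - red                             # Difference Vegetation Index
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
    print(float(dvi.mean()), float(evi.mean()))
    ```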

  6. Buying and Selling Prices of Investments: Configural Weight Model of Interactions Predicts Violations of Joint Independence.

    PubMed

    Birnbaum; Zimmermann

    1998-05-01

    Judges evaluated buying and selling prices of hypothetical investments, based on the previous price of each investment and estimates of the investment's future value given by advisors of varied expertise. The effect of a source's estimate varied in proportion to the source's expertise, and it varied inversely with the number and expertise of other sources. There was also a configural effect, in which the weight of a source's estimate depended on the rank order of that estimate relative to other estimates of the same investment. These interactions were fit with a configural weight averaging model in which buyers and sellers place different weights on estimates of different ranks. This model implies that one can design a new experiment in which there will be different violations of joint independence in different viewpoints. Experiment 2 confirmed patterns of violations of joint independence predicted from the model fit in Experiment 1. Experiment 2 also showed that preference reversals between viewpoints can be predicted by the model of Experiment 1. Configural weighting provides a better account of buying and selling prices than either of two models of loss aversion or the theory of anchoring and insufficient adjustment. Copyright 1998 Academic Press.
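
    A hedged sketch of the configural-weight idea: each advisor's estimate receives a weight that depends on expertise and on the rank of the estimate, with buyers up-weighting the lowest estimate and sellers the highest. The weights below are illustrative; the fitted values are in the paper.

    ```python
    import numpy as np

    def configural_value(estimates, expertise, rank_weights):
        order = np.argsort(estimates)     # rank estimates from lowest to highest
        w = expertise.astype(float).copy()
        w[order] *= rank_weights          # rank-dependent weight adjustment
        return np.sum(w * estimates) / np.sum(w)

    estimates = np.array([80.0, 100.0, 120.0])
    expertise = np.array([1.0, 2.0, 3.0])
    buyer  = configural_value(estimates, expertise, np.array([2.0, 1.0, 0.5]))
    seller = configural_value(estimates, expertise, np.array([0.5, 1.0, 2.0]))
    print(buyer, seller)                  # buyers judge lower values than sellers
    ```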

  7. [Predictive value of preoperative tests in estimating difficult intubation in patients who underwent direct laryngoscopy in ear, nose, and throat surgery].

    PubMed

    Karakus, Osman; Kaya, Cengiz; Ustun, Faik Emre; Koksal, Ersin; Ustun, Yasemin Burcu

    2015-01-01

    The predictive value of preoperative tests in estimating difficult intubation may differ in patients with laryngeal pathologies. Patients who had undergone direct laryngoscopy (DL) were reviewed, and the predictive value of preoperative tests in estimating difficult intubation was investigated. Preoperative and intraoperative anesthesia record forms and the computerized system of the hospital were screened. A total of 2611 patients were assessed. Difficult intubations were detected in 7.4% of the patients. Difficult intubations were encountered in patients with Mallampati scoring (MS) system Class 4 (50%), Cormack-Lehane classification (CLS) Grade 4 (95.7%), previous knowledge of difficult airway (86.2%), restricted neck movements (cervical ROM) (75.8%), short thyromental distance (TMD) (81.6%), and vocal cord mass (49.5%), at the rates indicated in parentheses (p<0.0001). MS had a low sensitivity, while restricted cervical ROM, presence of a vocal cord mass, short TMD, and MS each had a relatively high positive predictive value. The incidence of difficult intubation increased 6.159-fold and 1.736-fold with each level of increase in CLS grade and MS class, respectively. When all tests were considered in combination, difficult intubation could be classified accurately in 96.3% of the cases. Test results predicting difficult intubation in cases with DL were observed to overlap with the results reported in the literature for patient populations in general. Differences in some test results compared with those of the general population might stem from concomitant underlying laryngeal pathological conditions in patient populations with difficult intubation.
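
    For reference, the screening-test arithmetic underlying such results, sensitivity and positive predictive value from a 2x2 table, is shown below with illustrative counts, not the study's data.

    ```python
    def sens_ppv(tp, fp, fn, tn):
        """Sensitivity and positive predictive value from a 2x2 table."""
        return tp / (tp + fn), tp / (tp + fp)

    # e.g. a test flagging 29 of 40 difficult intubations, with 10 false alarms
    print(sens_ppv(tp=29, fp=10, fn=11, tn=2561))
    ```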

  8. Cardiopulmonary Resuscitation in Microgravity: Efficacy in the Swine During Parabolic Flight

    NASA Technical Reports Server (NTRS)

    Johnston, Smith L.; Campbell, Mark R.; Billica, Roger D.; Gilmore, Stevan M.

    2004-01-01

    INTRODUCTION: The International Space Station will need to be as capable as possible in providing Advanced Cardiac Life Support (ACLS) and cardiopulmonary resuscitation (CPR). Previous studies with manikins in parabolic microgravity (0 G) have shown that delivering CPR in microgravity is difficult. End tidal carbon dioxide (PetCO2) has previously been shown to be an effective non-invasive tool for estimating cardiac output during cardiopulmonary resuscitation. Animal models have shown that this diagnostic adjunct can be used as a predictor of survival when PetCO2 values are maintained above 25% of pre-arrest values. METHODS: Eleven anesthetized Yorkshire swine were flown in microgravity during parabolic flight. Physiologic parameters, including PetCO2, were monitored. Standard ACLS protocols were used to resuscitate these models after chemical induction of cardiac arrest. Chest compressions were administered using conventional body positioning with waist restraint and unconventional vertical-inverted body positioning. RESULTS: PetCO2 values were maintained above 25% of both 1-G and 0-G pre-arrest values in the microgravity environment (33 +/- 3% and 41 +/- 3%). No significant difference between 1-G CPR and 0-G CPR was found in these animal models. Effective CPR was delivered in both body positions, although conventional body positioning was found to be quickly fatiguing compared with the vertical-inverted position. CONCLUSIONS: Cardiopulmonary resuscitation can be effectively administered in microgravity (0 G). Validation of this model demonstrated that PetCO2 levels were maintained above a level previously reported to be predictive of survival. The unconventional vertical-inverted position provided effective CPR and was less fatiguing than the conventional body position with waist restraints.

  9. Influence of critical closing pressure on systemic vascular resistance and total arterial compliance: A clinical invasive study.

    PubMed

    Chemla, Denis; Lau, Edmund M T; Hervé, Philippe; Millasseau, Sandrine; Brahimi, Mabrouk; Zhu, Kaixian; Sattler, Caroline; Garcia, Gilles; Attal, Pierre; Nitenberg, Alain

    2017-12-01

    Systemic vascular resistance (SVR) and total arterial compliance (TAC) modulate systemic arterial load, and their product is the time constant (Tau) of the Windkessel. Previous studies have assumed that aortic pressure decays towards a pressure asymptote (P∞) close to 0 mmHg, as right atrial pressure is considered the outflow pressure. Using these assumptions, aortic Tau values of ~1.5 seconds have been documented. However, a zero P∞ may not be physiological because of the high critical closing pressure previously documented in vivo. Our aims were to calculate precisely the Tau and P∞ of the Windkessel, and to determine the implications for indices of systemic arterial load. Aortic pressure decay was analysed using high-fidelity recordings in 16 subjects. Tau was calculated assuming P∞ = 0 mmHg, and by two methods that make no assumptions regarding P∞ (the derivative and best-fit methods). Assuming P∞ = 0 mmHg, we documented a Tau value of 1372 ± 308 ms, with only 29% of Windkessel function manifested by end-diastole. In contrast, Tau values of 306 ± 109 and 353 ± 106 ms were found with the derivative and best-fit methods, with P∞ values of 75 ± 12 and 71 ± 12 mmHg, and with ~80% completion of Windkessel function. The "effective" resistance and compliance were ~70% and ~40% less than SVR and TAC (area method), respectively. We did not challenge the Windkessel model, but rather the estimation technique for model variables (Tau, SVR, TAC) that assumes P∞ = 0. The study favoured a shorter Tau of the Windkessel and a higher P∞ compared with previous studies. This calls for a reappraisal of the quantification of systemic arterial load. Crown Copyright © 2017. Published by Elsevier Masson SAS. All rights reserved.
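
    A minimal sketch of the best-fit method described, fitting the diastolic decay P(t) = P∞ + (P0 - P∞)exp(-t/Tau) without forcing P∞ = 0; the pressure trace here is synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, p_inf, p0, tau):
        return p_inf + (p0 - p_inf) * np.exp(-t / tau)

    t = np.linspace(0.0, 500.0, 100)    # ms
    p = decay(t, 72.0, 110.0, 330.0) + np.random.default_rng(1).normal(0, 0.5, t.size)

    (p_inf, p0, tau), _ = curve_fit(decay, t, p, p0=[50.0, 100.0, 200.0])
    print(p_inf, tau)                   # recovers P∞ ~72 mmHg and Tau ~330 ms
    ```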

  10. National Economic Value Assessment of Plug-in Electric Vehicles: Volume I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melaina, Marc; Bush, Brian; Eichman, Joshua

    The adoption of plug-in electric vehicles (PEVs) can reduce household fuel expenditures by substituting electricity for gasoline while reducing greenhouse gas emissions and petroleum imports. A scenario approach is employed to provide insights into the long-term economic value of increased PEV market growth across the United States. The analytic methods estimate fundamental costs and benefits associated with an economic allocation of PEVs across households based upon household driving patterns, projected vehicle cost and performance attributes, and simulations of a future electricity grid. To explore the full technological potential of PEVs and resulting demands on the electricity grid, very high PEV market growth projections from previous studies are relied upon to develop multiple future scenarios.

  11. Low-energy pion-nucleon scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibbs, W.R.; Ai, L.; Kaufmann, W.B.

    An analysis of low-energy charged pion-nucleon data from recent π±p experiments is presented. From the scattering lengths and the Goldberger-Miyazawa-Oehme (GMO) sum rule we find a value of the pion-nucleon coupling constant of f^2 = 0.0756 ± 0.0007. We also find, contrary to most previous analyses, that the scattering volumes for the P31 and P13 partial waves are equal, within errors, corresponding to a symmetry found in the Hamiltonian of many theories. For the potential models used, the amplitudes are extrapolated into the subthreshold region to estimate the value of the Σ term. Off-shell amplitudes are also provided. © 1998 The American Physical Society

  12. Noble gas isotopes in mineral springs within the Cascadia Forearc, Washington and Oregon

    USGS Publications Warehouse

    McCrory, Patricia A.; Constantz, James E.; Hunt, Andrew G.

    2014-01-01

    This U.S. Geological Survey report presents laboratory analyses along with field notes for a pilot study to document the relative abundance of noble gases in mineral springs within the Cascadia forearc of Washington and Oregon. Estimates of the depth to the underlying Juan de Fuca oceanic plate beneath the sample sites are derived from the McCrory and others (2012) slab model. Some of these springs have been previously sampled for chemical analyses (Mariner and others, 2006), but none currently have publicly available noble gas data. Helium isotope values as well as the noble gas values and ratios presented below will be used to determine the sources and mixing history of these mineral waters.

  13. Large-scale structure from cosmic-string loops in a baryon-dominated universe

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Scherrer, Robert J.

    1988-01-01

    The results are presented of a numerical simulation of the formation of large-scale structure in a universe with Omega(0) = 0.2 and h = 0.5 dominated by baryons in which cosmic strings provide the initial density perturbations. The numerical model yields a power spectrum. Nonlinear evolution confirms that the model can account for 700 km/s bulk flows and a strong cluster-cluster correlation, but does rather poorly on smaller scales. There is no visual 'filamentary' structure, and the two-point correlation has too steep a logarithmic slope. The value of Gμ = 4 × 10^-6 is significantly lower than previous estimates for the value of Gμ in baryon-dominated cosmic string models.

  14. In vitro digestibility of individual amino acids in rumen-undegraded protein: the modified three-step procedure and the immobilized digestive enzyme assay.

    PubMed

    Boucher, S E; Calsamiglia, S; Parsons, C M; Stern, M D; Moreno, M Ruiz; Vázquez-Añón, M; Schwab, C G

    2009-08-01

    Three soybean meal, three SoyPlus (West Central Cooperative, Ralston, IA), five distillers dried grains with solubles, and five fish meal samples were used to evaluate the modified 3-step in vitro procedure (TSP) and the in vitro immobilized digestive enzyme assay (IDEA; Novus International Inc., St. Louis, MO) for estimating digestibility of AA in rumen-undegraded protein (RUP-AA). In a previous experiment, each sample was ruminally incubated in situ for 16 h, and in vivo digestibility of AA in the intact samples and in the rumen-undegraded residues (RUR) was obtained for all samples using the precision-fed cecectomized rooster assay. For the modified TSP, 5 g of RUR was weighed into polyester bags, which were then heat-sealed and placed into Daisy(II) incubator bottles. Samples were incubated in a pepsin/HCl solution followed by incubation in a pancreatin solution. After this incubation, residues remaining in the bags were analyzed for AA, and digestibility of RUP-AA was calculated based on disappearance from the bags. In vitro RUP-AA digestibility estimates obtained with this procedure were highly correlated to in vivo estimates. Corresponding intact feeds were also analyzed via the pepsin/pancreatin steps of the modified TSP. In vitro estimates of AA digestibility of the feeds were highly correlated to in vivo RUP-AA digestibility, which suggests that the feeds may not need to be ruminally incubated before determining RUP-AA digestibility in vitro. The RUR were also analyzed via the IDEA kits. The IDEA values of the RUR were good predictors of RUP-AA digestibility in soybean meal, SoyPlus, and distillers dried grains with solubles, but the IDEA values were not as good predictors of RUP-AA digestibility in fish meal. However, the IDEA values of intact feed samples were also determined and were highly correlated to in vivo RUP-AA digestibility for all feed types, suggesting that the IDEA value of intact feeds may be a better predictor of RUP-AA digestibility than the IDEA value of the RUR. In conclusion, the modified TSP and IDEA kits are good approaches for estimating RUP-AA digestibility in soybean meal products, distillers dried grains with solubles, and fish meal samples.
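
    A minimal sketch of the validation step, correlating in vitro digestibility estimates against in vivo rooster-assay values; the numbers are illustrative.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    in_vivo  = np.array([88.0, 91.5, 79.0, 84.2, 93.0])   # % RUP-AA digestibility
    in_vitro = np.array([86.5, 92.0, 77.5, 85.0, 94.1])

    r, p_value = pearsonr(in_vivo, in_vitro)
    print(r, p_value)
    ```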

  15. The elemental abundances (with uncertainties) of the most Earth-like planet

    NASA Astrophysics Data System (ADS)

    Wang, Haiyang S.; Lineweaver, Charles H.; Ireland, Trevor R.

    2018-01-01

    To first order, the Earth as well as other rocky planets in the Solar System and rocky exoplanets orbiting other stars, are refractory pieces of the stellar nebula out of which they formed. To estimate the chemical composition of rocky exoplanets based on their stellar hosts' elemental abundances, we need a better understanding of the devolatilization that produced the Earth. To quantify the chemical relationships between the Earth, the Sun and other bodies in the Solar System, the elemental abundances of the bulk Earth are required. The key to comparing Earth's composition with those of other objects is to have a determination of the bulk composition with an appropriate estimate of uncertainties. Here we present concordance estimates (with uncertainties) of the elemental abundances of the bulk Earth, which can be used in such studies. First we compile, combine and renormalize a large set of heterogeneous literature values of the primitive mantle (PM) and of the core. We then integrate standard radial density profiles of the Earth and renormalize them to the current best estimate for the mass of the Earth. Using estimates of the uncertainties in i) the density profiles, ii) the core-mantle boundary and iii) the inner core boundary, we employ standard error propagation to obtain a core mass fraction of 32.5 ± 0.3 wt%. Our bulk Earth abundances are the weighted sum of our concordance core abundances and concordance PM abundances. Unlike previous efforts, the uncertainty on the core mass fraction is propagated to the uncertainties on the bulk Earth elemental abundances. Our concordance estimates for the abundances of Mg, Sn, Br, B, Cd and Be are significantly lower than previous estimates of the bulk Earth. Our concordance estimates for the abundances of Na, K, Cl, Zn, Sr, F, Ga, Rb, Nb, Gd, Ta, He, Ar, and Kr are significantly higher. The uncertainties on our elemental abundances usefully calibrate the unresolved discrepancies between standard Earth models under various geochemical and geophysical assumptions.
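
    A minimal sketch of the propagation step described, treating a bulk-Earth abundance as the core-mass-fraction-weighted sum of core and primitive-mantle abundances and carrying the mass-fraction uncertainty through by standard error propagation; the elemental values are assumptions for illustration.

    ```python
    import numpy as np

    f, sf = 0.325, 0.003    # core mass fraction and uncertainty (from the abstract)
    c, sc = 85.0, 2.0       # core abundance of some element, wt% (assumed)
    m, sm = 6.3, 0.2        # primitive-mantle abundance, wt% (assumed)

    bulk = f * c + (1.0 - f) * m
    # partial derivatives: d/df = c - m, d/dc = f, d/dm = 1 - f
    s_bulk = np.sqrt(((c - m) * sf) ** 2 + (f * sc) ** 2 + ((1.0 - f) * sm) ** 2)
    print(bulk, s_bulk)
    ```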

  16. Estimating the energetic cost of feeding excess dietary nitrogen to dairy cows.

    PubMed

    Reed, K F; Bonfá, H C; Dijkstra, J; Casper, D P; Kebreab, E

    2017-09-01

    Feeding N in excess of requirement could require the use of additional energy to metabolize excess protein and to synthesize and excrete urea; however, the amount and fate of this energy are unknown. Little progress has been made on this topic in recent decades, so an extension of work published in 1970 was conducted to quantify the effect of excess N on ruminant energetics. In part 1 of this study, the results of previous work were replicated using a simple linear regression to estimate the effect of excess N on energy balance. In part 2, mixed model methodology and a larger data set were used to improve upon the previously reported linear regression methods. In part 3, heat production, retained energy, and milk energy replaced the previously proposed composite energy balance variable as dependent variables, to isolate the effect of excess N. In addition, rumen degradable and undegradable protein intakes were estimated using table values and included as covariates in part 3. Excess N had opposite and approximately equal effects on heat production (+4.1 to +7.6 kcal/g of excess N) and retained energy (-4.2 to -6.6 kcal/g of excess N) but had a larger negative effect on milk gross energy (-52 to -68 kcal/g of excess N). The results suggest that feeding excess N increases heat production, but more investigation is required to determine why excess N has such a large effect on milk gross energy production. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
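
    A minimal sketch of the part-1 style analysis, a simple linear regression of a heat-production term on excess N intake; the data are illustrative.

    ```python
    import numpy as np

    excess_n = np.array([10, 25, 40, 12, 30, 55, 8, 22, 48], dtype=float)        # g/d
    heat     = np.array([60, 130, 210, 70, 160, 280, 45, 120, 250], dtype=float) # kcal/d

    slope, intercept = np.polyfit(excess_n, heat, 1)
    print(slope)   # kcal of heat per g of excess N; cf. the +4.1 to +7.6 reported
    ```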

  17. Emission of atmospherically significant halocarbons by naturally occurring and farmed tropical macroalgae

    NASA Astrophysics Data System (ADS)

    Leedham, E. C.; Hughes, C.; Keng, F. S. L.; Phang, S.-M.; Malin, G.; Sturges, W. T.

    2013-01-01

    Current estimates of global halocarbon emissions highlight the tropical coastal environment as an important source of very short-lived (VSL) biogenic halocarbons to the troposphere and stratosphere. This is due to a combination of assumed high primary productivity in tropical coastal waters and the prevalence of deep convective transport potentially capable of rapidly lifting surface emissions to the upper troposphere/lower stratosphere. However, despite this perceived importance, direct measurements of tropical coastal biogenic halocarbon emissions, notably from macroalgae (seaweeds), have not been made. In light of this, we provide the first dedicated study of halocarbon production by a range of 15 common tropical macroalgal species and compare these results to those from previous studies of polar and temperate macroalgae. Variation between species was substantial; CHBr3 measured at the end of a 24 h incubation varied from 1.4 to 1129 pmol gFW^-1 h^-1 (FW = fresh weight of sample). We used our laboratory-determined emission rates to estimate emissions of CHBr3 and CH2Br2 (the two dominant VSL precursors of stratospheric bromine) from the coastlines of Malaysia and South East Asia. We compare these values to previous top-down model estimates of emissions from these regions, and conclude that the contribution of coastal CHBr3 emissions is likely to be lower than previously assumed. The contribution of tropical aquaculture to current emission budgets is also considered. While the current aquaculture contribution to halocarbon emissions in this region is small, substantial increases in aquaculture could make a significant contribution to regional halocarbon budgets.
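
    For scale, a laboratory emission rate can be extrapolated to a regional flux as rate × standing biomass × time, as in the hedged sketch below; the biomass figure is a placeholder assumption, not a measured quantity.

    ```python
    rate_pmol_per_gfw_h = 100.0    # CHBr3 emission rate (illustrative)
    biomass_g_fw = 5.0e12          # assumed regional macroalgal standing stock, g FW
    hours_per_year = 24 * 365

    mol_per_year = rate_pmol_per_gfw_h * 1e-12 * biomass_g_fw * hours_per_year
    kg_per_year = mol_per_year * 252.73 / 1000.0   # CHBr3 molar mass ~252.7 g/mol
    print(f"{kg_per_year:.2e} kg CHBr3 per year")
    ```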

  18. Assessment of Global Annual Atmospheric Energy Balance from Satellite Observations

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Stackhouse, Paul; Minnis, Patrick; Wielicki, Bruce A.; Hu, Yongxiang; Sun, Wenbo; Fan, Tai-Fang (Alice); Hinkelman, Laura

    2008-01-01

    Global atmospheric energy balance is one of the fundamental processes of the earth's climate system. This study uses currently available satellite data sets of radiative energy at the top of atmosphere (TOA) and surface, and of latent and sensible heat over oceans, for the year 2000 to assess the global annual energy budget. Over land, surface radiation data are used to constrain assimilated results and to force the radiation, turbulent heat, and heat storage into balance, owing to a lack of observation-based turbulent heat flux estimates. Global annual means of the TOA net radiation obtained from both direct measurements and calculations are close to zero. The net radiative energy flux into the surface and the surface latent heat transported into the atmosphere are about 113 and 86 Watts per square meter, respectively. The estimated atmospheric and surface heat imbalances are about -8 and 9 Watts per square meter, respectively, values that are within the uncertainties of surface radiation and sea surface turbulent flux estimates and likely systematic biases in the analyzed observations. The potentially significant additional absorption of solar radiation within the atmosphere suggested by previous studies does not appear to be required to balance the energy budget; the spurious heat imbalances in the current data are much smaller (about half) than those obtained previously and debated about a decade ago. Progress in surface radiation and oceanic turbulent heat flux estimation from satellite measurements significantly reduces the bias errors in the observed global energy budgets of the climate system.

  19. Hand surface area estimation formula using 3D anthropometry.

    PubMed

    Hsu, Yao-Wen; Yu, Chi-Yuang

    2010-11-01

    Hand surface area is an important reference in occupational hygiene and many other applications. This study derives formulae for the palm surface area (PSA) and hand surface area (HSA) based on three-dimensional (3D) scan data. Two hundred and seventy subjects, 135 males and 135 females, were recruited for this study. The hand was measured using a high-resolution 3D hand scanner; precision and accuracy of the scanner are within 0.67%. Both the PSA and HSA were computed using the triangular mesh summation method. A comparison between this study and previous textbook values (such as in the U.K. teaching text and the Lund and Browder chart discussed in the article) was performed first, showing that previous textbooks overestimated the PSA by 12.0% and the HSA by 8.7% (for males, PSA 8.5% and HSA 4.7%; for females, PSA 16.2% and HSA 13.4%). Six 1D measurements were then extracted semiautomatically for use as candidate estimators in the PSA and HSA estimation formulae. Stepwise regressions on these six 1D measurements and a variable dependency test were performed. Results show that a pair of measurements (hand length and hand breadth) was able to account for 96% of the HSA variance and up to 98% of the PSA variance. A test of the gender-specific formula indicated that gender is not a significant factor in either the PSA or HSA estimation.
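
    A minimal sketch of the final estimation step, regressing HSA on hand length and hand breadth; the data below are toy values, and the study's published coefficients are not reproduced here.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # [hand length, hand breadth] in cm and HSA in cm^2 (toy values)
    X = np.array([[17.0, 7.5], [18.2, 8.0], [19.5, 8.8], [20.3, 9.1]])
    y = np.array([380.0, 420.0, 475.0, 510.0])

    reg = LinearRegression().fit(X, y)
    print(reg.score(X, y))            # cf. the ~96% of HSA variance in the study
    print(reg.predict([[18.8, 8.4]]))
    ```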

  20. Nurses wanted: Is the job too harsh or is the wage too low?

    PubMed

    Di Tommaso, M L; Strøm, S; Saether, E M

    2009-05-01

    When entering the job market, nurses choose among different kinds of jobs. Each job is characterized by wage, sector (primary care or hospital) and schedule (daytime or shift work). This paper estimates a multi-sector-job-type random utility model of labor supply on data for Norwegian registered nurses (RNs) in 2000. The empirical model implies that labor supply is rather inelastic; a 10% increase in the wage rates for all nurses is estimated to yield a 3.3% increase in overall labor supply. This modest aggregate response masks much stronger inter-job-type responses. Our approach differs from previous studies in two ways. First, to our knowledge, it is the first time that a model of labor supply for nurses is estimated taking explicitly into account the choices that RNs have regarding workplace and type of job. Second, it differs from previous studies with respect to the measurement of the compensation for different types of work. So far, the focus has been on wage differentials, but a job has more attributes than its wage. Based on the estimated random utility model, we therefore calculate the expected compensation that makes a utility-maximizing agent indifferent between types of jobs, here between shift work and daytime work. It turns out that Norwegian nurses may be willing to work shifts, relative to daytime work, for a lower wage than the current one.
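
    A hedged sketch of the compensation calculation: with utility linear in log wage, the wage that makes shift work as attractive as daytime work has a closed form. The parameters are illustrative assumptions, not the paper's estimates.

    ```python
    import numpy as np

    beta_logwage = 2.0     # assumed marginal utility of log wage
    alpha_shift = -0.6     # assumed disutility of shift work relative to daytime
    w_day = 100.0          # daytime wage, index units

    # beta*log(w_shift) + alpha = beta*log(w_day)  =>  w_shift = w_day*exp(-alpha/beta)
    w_shift = w_day * np.exp(-alpha_shift / beta_logwage)
    print(w_shift - w_day)   # required shift premium (~35 here)
    ```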
