Sample records for ratio estimation based

  1. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Astrophysics Data System (ADS)

    Guo, T. H.; Musgrave, J.

    1992-11-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using simulation data.
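
    Illustration (not from the record): a minimal sketch of the first approach described above, an auto-associative network whose outputs reproduce the sensor vector plus a mixture-ratio output, using scikit-learn on synthetic data. The sensor set, network size, and training data are illustrative assumptions, not the SSME configuration.

      # Hypothetical sketch: auto-associative network that reproduces its sensor
      # inputs and additionally outputs a mixture-ratio estimate (approach 1 above).
      # Sensor set, network size, and training data are synthetic placeholders.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n = 2000
      # Synthetic "sensor" measurements: chamber pressure, volumetric flow,
      # pump-exit temperature and pressure (arbitrary units).
      sensors = rng.normal(size=(n, 4))
      # Synthetic mixture ratio correlated with the sensors (stand-in for
      # facility-measured fuel/oxidizer flows available in simulation or test data).
      mixture_ratio = 6.0 + 0.3 * sensors[:, 0] - 0.2 * sensors[:, 1] + 0.05 * rng.normal(size=n)

      X = sensors
      Y = np.column_stack([sensors, mixture_ratio])   # auto-associative targets + extra output

      net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
      net.fit(X, Y)

      est = net.predict(X[:5])
      print("estimated mixture ratio:", est[:, -1])   # last output is the ratio
      print("true mixture ratio:     ", mixture_ratio[:5])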

  2. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Guo, T. H.; Musgrave, J.

    1992-01-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using simulation data.

  3. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
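
    For orientation (not from the record): a sketch of the classical two-sample, two-subclass change-in-ratio estimator that such generalizations build on. It assumes equal encounter probabilities for both subclasses within each survey, which is exactly the assumption the models above relax; the numbers are made up.

      # Classical two-sample, two-subclass change-in-ratio (CIR) estimator.
      def cir_population_size(p1, p2, removed_x, removed_total):
          """Pre-removal population size from the change in subclass-x proportion.

          p1, p2        : subclass-x proportion in the pre- and post-removal surveys
          removed_x     : number of subclass-x individuals removed
          removed_total : total number of individuals removed
          """
          return (removed_x - removed_total * p2) / (p1 - p2)

      # Example: the antlered proportion drops from 0.40 to 0.25 after a harvest of
      # 300 animals, 220 of which were antlered.
      N1 = cir_population_size(p1=0.40, p2=0.25, removed_x=220, removed_total=300)
      print(round(N1))   # estimated pre-harvest population size (~967)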

  4. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon

    2015-09-15

    Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. Our GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method compared to the preference for the convolution-based method was statistically meaningful (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.

  5. Ground-based remote sensing of HDO/H2O ratio profiles: introduction and validation of an innovative retrieval approach

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Hase, F.; Blumenstock, T.

    2006-10-01

    We propose an innovative approach for analysing ground-based FTIR spectra which allows us to detect variabilities of lower and middle/upper tropospheric HDO/H2O ratios. We show that the proposed method is superior to common approaches. We estimate that lower tropospheric HDO/H2O ratios can be detected with a noise to signal ratio of 15% and middle/upper tropospheric ratios with a noise to signal ratio of 50%. The method requires the inversion to be performed on a logarithmic scale and to introduce an inter-species constraint. While common methods calculate the isotope ratio posterior to an independent, optimal estimation of the HDO and H2O profile, the proposed approach is an optimal estimator for the ratio itself. We apply the innovative approach to spectra measured continuously during 15 months and present, for the first time, an annual cycle of tropospheric HDO/H2O ratio profiles as detected by ground-based measurements. Outliers in the detected middle/upper tropospheric ratios are interpreted by backward trajectories.

  6. Ground-based remote sensing of HDO/H2O ratio profiles: introduction and validation of an innovative retrieval approach

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Hase, F.; Blumenstock, T.

    2006-06-01

    We propose an innovative approach for analysing ground-based FTIR spectra which allows us to detect variabilities of lower and middle/upper tropospheric HDO/H2O ratios. We show that the proposed method is superior to common approaches. We estimate that lower tropospheric HDO/H2O ratios can be detected with a noise to signal ratio of 15% and middle/upper tropospheric ratios with a noise to signal ratio of 50%. The method requires the inversion to be performed on a logarithmic scale and to introduce an inter-species constraint. While common methods calculate the isotope ratio posterior to an independent, optimal estimation of the HDO and H2O profile, the proposed approach is an optimal estimator for the ratio itself. We apply the innovative approach to spectra measured continuously during 15 months and present, for the first time, an annual cycle of tropospheric HDO/H2O ratio profiles as detected by ground-based measurements. Outliers in the detected middle/upper tropospheric ratios are interpreted by backward trajectories.

  7. System and method for controlling an engine based on ammonia storage in multiple selective catalytic reduction catalysts

    DOEpatents

    Sun, Min; Perry, Kevin L.

    2015-11-20

    A system according to the principles of the present disclosure includes a storage estimation module and an air/fuel ratio control module. The storage estimation module estimates a first amount of ammonia stored in a first selective catalytic reduction (SCR) catalyst and estimates a second amount of ammonia stored in a second SCR catalyst. The air/fuel ratio control module controls an air/fuel ratio of an engine based on the first amount, the second amount, and a temperature of a substrate disposed in the second SCR catalyst.

  8. Calibrating recruitment estimates for mourning doves from harvest age ratios

    USGS Publications Warehouse

    Miller, David A.; Otis, David L.

    2010-01-01

    We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in formulating harvest management strategies.

  9. Reconciling estimates of the ratio of heat and salt fluxes at the ice-ocean interface

    NASA Astrophysics Data System (ADS)

    Keitzl, T.; Mellado, J. P.; Notz, D.

    2016-12-01

    The heat exchange between floating ice and the underlying ocean is determined by the interplay of diffusive fluxes directly at the ice-ocean interface and turbulent fluxes away from it. In this study, we examine this interplay through direct numerical simulations of free convection. Our results show that an estimation of the interface flux ratio based on direct measurements of the turbulent fluxes can be difficult because the flux ratio varies with depth. As an alternative, we present a consistent evaluation of the flux ratio based on the total heat and salt fluxes across the boundary layer. This approach allows us to reconcile previous estimates of the ice-ocean interface conditions. We find that the ratio of heat and salt fluxes directly at the interface is 83-100 rather than 33 as determined by previous turbulence measurements in the outer layer. This can cause errors in the estimated ice-ablation rate from field measurements of up to 40% if they are based on the three-equation formulation.

  10. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    PubMed

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (ie, one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (eg, the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
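
    Illustration (not the authors' 2-phase algorithm): a hedged sketch of the general sampling-based idea, estimating a tissue-to-plasma AUC ratio from sparse paired data by resampling subjects within each time point. The sampling design and concentrations below are synthetic assumptions.

      # Bootstrap estimate of a tissue-to-plasma AUC ratio from sparse paired data
      # (one plasma and one tissue sample per subject), resampling subjects within
      # each time point. Illustrative only.
      import numpy as np

      rng = np.random.default_rng(1)
      times = np.array([0.5, 1, 2, 4, 8, 24.0])        # h, hypothetical design
      n_per_time = 4                                    # subjects per time point

      # Synthetic paired concentrations (rows: time points, cols: subjects).
      plasma = np.exp(-0.15 * times)[:, None] * rng.lognormal(0, 0.2, (len(times), n_per_time))
      tissue = 3.0 * np.exp(-0.15 * times)[:, None] * rng.lognormal(0, 0.3, (len(times), n_per_time))

      def auc_ratio(p, t):
          """Ratio of tissue AUC to plasma AUC from mean concentration profiles."""
          return np.trapz(t.mean(axis=1), times) / np.trapz(p.mean(axis=1), times)

      boot = []
      for _ in range(2000):
          idx = rng.integers(0, n_per_time, size=(len(times), n_per_time))
          # Resample the same subjects for plasma and tissue to preserve pairing.
          boot.append(auc_ratio(np.take_along_axis(plasma, idx, axis=1),
                                np.take_along_axis(tissue, idx, axis=1)))
      boot = np.array(boot)
      print("tissue-to-plasma ratio:", auc_ratio(plasma, tissue).round(2),
            "95% interval:", np.percentile(boot, [2.5, 97.5]).round(2))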

  11. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed level of analyte concentration divided by the observed level of the urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, the male to female ratio of GMs. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, the NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
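
    Illustration (not from the record): a hedged sketch contrasting the two corrections on synthetic data, with the ratio-based correction dividing by creatinine and the model-based correction including log creatinine as a regression covariate. Variable names and numbers are assumptions.

      # Ratio-based: analyte / creatinine.  Model-based: log creatinine enters a
      # regression for the log analyte as an independent variable.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 500
      male = rng.integers(0, 2, n)                           # 1 = male, 0 = female
      log_ucr = 0.2 + 0.3 * male + rng.normal(0, 0.4, n)     # creatinine depends on sex
      log_analyte = 1.0 + 0.8 * log_ucr + 0.1 * male + rng.normal(0, 0.3, n)

      # Ratio-based correction: creatinine-corrected concentration.
      corrected_ratio = np.exp(log_analyte) / np.exp(log_ucr)

      # Model-based correction: regress log analyte on sex with log UCR as covariate;
      # the sex coefficient is the adjusted (log) male/female difference.
      X = sm.add_constant(np.column_stack([male, log_ucr]))
      fit = sm.OLS(log_analyte, X).fit()
      print("model-based male/female GM ratio:", np.exp(fit.params[1]).round(2))
      print("ratio-based  male/female GM ratio:",
            np.exp(np.mean(np.log(corrected_ratio[male == 1])) -
                   np.mean(np.log(corrected_ratio[male == 0]))).round(2))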

  12. Sample-based estimation of tree species richness in a wet tropical forest compartment

    Treesearch

    Steen Magnussen; Raphael Pelissier

    2007-01-01

    Petersen's capture-recapture ratio estimator and the well-known bootstrap estimator are compared across a range of simulated low-intensity simple random sampling with fixed-area plots of 100 m² in a rich wet tropical forest compartment with 93 tree species in the Western Ghats of India. Petersen's ratio estimator was uniformly superior to the bootstrap...
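
    Illustration (not from the record): a hedged sketch of the Lincoln-Petersen idea applied to species richness, treating species detected in one set of plots as the "marked" set and species detected in a second set as "recaptures". The synthetic community and sample sizes are assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      true_richness = 93
      # Unequal relative abundances, so detectability differs among species.
      abund = rng.dirichlet(np.full(true_richness, 0.3))

      def observed_species(n_stems):
          counts = rng.multinomial(n_stems, abund)
          return set(np.flatnonzero(counts))

      s1 = observed_species(400)          # species list from the first plot sample
      s2 = observed_species(400)          # species list from the second plot sample
      m = len(s1 & s2)                    # species detected in both samples
      petersen = len(s1) * len(s2) / m
      print("observed:", len(s1 | s2), "Petersen estimate:", round(petersen),
            "true:", true_richness)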

  13. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a pilot-guided estimation method, a blind estimation method, and a simulation-based look-up table. The pilot-guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the pilot-guided method due to the gain control circuitry, but does not have the real-time computation complexity of the blind estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
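
    Illustration (not from the record): a hedged sketch of the pilot-guided estimator described above for BPSK in AWGN, where the amplitude is the mean inner product of the received samples with the known ASM symbols and the noise variance is the mean squared sample minus the squared amplitude. The frame layout and parameters are assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      amp_true, var_true = 1.0, 0.5
      asm = rng.choice([-1.0, 1.0], size=64)             # known attached sync marker
      received_asm = amp_true * asm + rng.normal(0, np.sqrt(var_true), size=asm.size)

      amp_hat = np.mean(received_asm * asm)              # ML amplitude estimate
      var_hat = np.mean(received_asm**2) - amp_hat**2    # ML noise-variance estimate
      combining_ratio = amp_hat / var_hat                # scaling applied to soft decoder inputs

      # In practice several frames of ASM symbols would be averaged, as the record notes.
      print(f"amplitude {amp_hat:.3f}, variance {var_hat:.3f}, "
            f"combining ratio {combining_ratio:.3f} (true {amp_true/var_true:.3f})")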

  14. A New Monte Carlo Method for Estimating Marginal Likelihoods.

    PubMed

    Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O

    2018-06-01

    Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
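
    Illustration (not the proposed partition weighted kernel estimator): a hedged sketch of the harmonic mean estimator that the record treats as a baseline, checked against numerical integration on a conjugate normal toy model. The model and sample sizes are assumptions.

      import numpy as np
      from scipy import integrate, stats
      from scipy.special import logsumexp

      rng = np.random.default_rng(5)
      y = rng.normal(1.0, 1.0, size=20)                    # data, sigma = 1 known
      mu0, tau0 = 0.0, 2.0                                 # normal prior on theta

      def log_lik(theta):
          return stats.norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)

      # "Exact" marginal likelihood by quadrature over theta.
      integrand = lambda t: np.exp(log_lik(np.array([t]))[0]) * stats.norm.pdf(t, mu0, tau0)
      exact, _ = integrate.quad(integrand, -10, 10)

      # Posterior for the conjugate model, then the harmonic mean estimate.
      post_var = 1.0 / (1.0 / tau0**2 + len(y))
      post_mean = post_var * (mu0 / tau0**2 + y.sum())
      draws = rng.normal(post_mean, np.sqrt(post_var), size=50_000)
      ll = log_lik(draws)
      log_hm = np.log(len(draws)) - logsumexp(-ll)         # log of 1 / mean(1/L)

      print("log marginal: exact", np.log(exact).round(3), " harmonic mean", log_hm.round(3))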

  15. Influence of tire dynamics on slip ratio estimation of independent driving wheel system

    NASA Astrophysics Data System (ADS)

    Li, Jianqiu; Song, Ziyou; Wei, Yintao; Ouyang, Minggao

    2014-11-01

    The independent driving wheel system, which is composed of an in-wheel permanent magnet synchronous motor (I-PMSM) and a tire, makes it more convenient to estimate the slip ratio because the rotary speed of the rotor can be accurately measured. However, the ring speed of the tire ring does not equal the rotor speed when tire deformation is considered. For this reason, a deformable tire and a detailed I-PMSM are modeled using Matlab/Simulink. Moreover, the tire/road contact interface (a slippery road) is accurately described by the non-linear relaxation-length-based model and the Magic Formula pragmatic model. Based on this relatively accurate model, the error of the slip ratio estimated from the rotor rotary speed is analyzed in both the time and frequency domains when a quarter car is started by the I-PMSM with a definite target torque input curve. In addition, the natural frequencies (NFs) of the driving wheel system with variable parameters are illustrated to present the relationship between the slip ratio estimation error and the NF. According to this relationship, a low-pass filter (LPF), whose cut-off frequency corresponds to the NF, is proposed to eliminate the error in the estimated slip ratio. The analysis, concerning the effect of the driving wheel parameters and road conditions on slip ratio estimation, shows that the peak estimation error can be reduced by up to 75% when the LPF is adopted. The robustness and effectiveness of the LPF are therefore validated. This paper builds up the deformable tire model and the detailed I-PMSM model, and analyzes the effect of the driving wheel parameters and road conditions on slip ratio estimation.
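
    Illustration (not from the record): a hedged sketch of the slip-ratio definition from rotor and vehicle speed, smoothed by a first-order low-pass filter whose cut-off is placed near an assumed wheel-system natural frequency. The signals, radius, and frequency are made-up values, not the paper's model.

      import numpy as np

      dt, r_e = 1e-3, 0.3                                  # time step [s], effective radius [m]
      t = np.arange(0, 2, dt)
      v = 0.5 * t                                          # vehicle speed [m/s], made-up launch
      omega = (v / r_e) * 1.05 + 2.0 * np.sin(2 * np.pi * 15 * t)   # rotor speed with oscillation

      slip_raw = (omega * r_e - v) / np.maximum(omega * r_e, 0.1)   # driving slip ratio

      f_cut = 15.0                                          # Hz, assumed natural frequency
      alpha = dt / (dt + 1.0 / (2 * np.pi * f_cut))
      slip_filt = np.empty_like(slip_raw)
      slip_filt[0] = slip_raw[0]
      for k in range(1, len(t)):                            # first-order IIR low-pass filter
          slip_filt[k] = slip_filt[k - 1] + alpha * (slip_raw[k] - slip_filt[k - 1])

      print("peak |raw slip|:", np.abs(slip_raw).max().round(2),
            " peak |filtered slip|:", np.abs(slip_filt).max().round(2))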

  16. Spectral ratio method for measuring emissivity

    USGS Publications Warehouse

    Watson, K.

    1992-01-01

    The spectral ratio method is based on the concept that although the spectral radiances are very sensitive to small changes in temperature, the ratios are not. Only an approximate estimate of temperature is required; thus, for example, we can determine the emissivity ratio to an accuracy of 1% with a temperature estimate that is only accurate to 12.5 K. Selecting the maximum value of the channel brightness temperatures provides an unbiased estimate. Laboratory and field spectral data are easily converted into spectral ratio plots. The ratio method is limited by the system signal-to-noise ratio and the spectral bandwidth. The images can appear quite noisy because ratios enhance high frequencies and may require spatial filtering. Atmospheric effects tend to rescale the ratios and require using an atmospheric model or a calibration site. © 1992.
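
    Illustration (not from the record): a hedged sketch of the ratio idea, normalizing band radiances by the Planck function at an approximate temperature and showing that the recovered emissivity ratio is insensitive to the temperature estimate. The bands and emissivities are illustrative assumptions.

      import numpy as np

      h, c, k = 6.626e-34, 2.998e8, 1.381e-23

      def planck(lam, T):
          """Blackbody spectral radiance at wavelength lam [m] and temperature T [K]."""
          return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

      lam = np.array([10.6e-6, 11.3e-6])       # two nearby thermal-IR bands [m]
      eps = np.array([0.95, 0.97])             # "true" emissivities (made up)
      T_true = 300.0
      radiance = eps * planck(lam, T_true)     # measured band radiances (no atmosphere)

      for T_est in (T_true, T_true - 12.5, T_true + 12.5):
          e_apparent = radiance / planck(lam, T_est)
          ratio = e_apparent[0] / e_apparent[1]
          print(f"T_est = {T_est:5.1f} K  emissivity ratio = {ratio:.4f}")
      # The recovered ratio stays within about 1% of eps[0]/eps[1] despite a
      # 12.5 K error in the temperature estimate.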

  17. Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.

    PubMed

    Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng

    2018-04-15

    This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.

  18. 13C-Labeled Gluconate Tracing as a Direct and Accurate Method for Determining the Pentose Phosphate Pathway Split Ratio in Penicillium chrysogenum

    PubMed Central

    Kleijn, Roelco J.; van Winden, Wouter A.; Ras, Cor; van Gulik, Walter M.; Schipper, Dick; Heijnen, Joseph J.

    2006-01-01

    In this study we developed a new method for accurately determining the pentose phosphate pathway (PPP) split ratio, an important metabolic parameter in the primary metabolism of a cell. This method is based on simultaneous feeding of unlabeled glucose and trace amounts of [U-13C]gluconate, followed by measurement of the mass isotopomers of the intracellular metabolites surrounding the 6-phosphogluconate node. The gluconate tracer method was used with a penicillin G-producing chemostat culture of the filamentous fungus Penicillium chrysogenum. For comparison, a 13C-labeling-based metabolic flux analysis (MFA) was performed for glycolysis and the PPP of P. chrysogenum. For the first time mass isotopomer measurements of 13C-labeled primary metabolites are reported for P. chrysogenum and used for a 13C-based MFA. Estimation of the PPP split ratio of P. chrysogenum at a growth rate of 0.02 h−1 yielded comparable values for the gluconate tracer method and the 13C-based MFA method, 51.8% and 51.1%, respectively. A sensitivity analysis of the estimated PPP split ratios showed that the 95% confidence interval was almost threefold smaller for the gluconate tracer method than for the 13C-based MFA method (40.0 to 63.5% and 46.0 to 56.5%, respectively). From these results we concluded that the gluconate tracer method permits accurate determination of the PPP split ratio but provides no information about the remaining cellular metabolism, while the 13C-based MFA method permits estimation of multiple fluxes but provides a less accurate estimate of the PPP split ratio. PMID:16820467

  19. Poisson regression models outperform the geometrical model in estimating the peak-to-trough ratio of seasonal variation: a simulation study.

    PubMed

    Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C

    2011-12-01

    Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior in estimating the peak-to-trough ratio of seasonal variation compared with Edwards' estimator with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjustments for covariates. Based on a Monte Carlo simulation study three estimators, one based on the geometrical model, and two based on log-linear Poisson regression models, were evaluated in regards to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough[13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. Poisson regression models had lower bias and SD for data simulated to deviate from the corresponding model assumptions than the geometrical model. This simulation study encourages the use of Poisson regression models in estimating the peak-to-trough ratio of seasonal variation as opposed to the geometrical model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
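
    Illustration (not from the record): a hedged sketch of a log-linear Poisson regression with a single sine/cosine harmonic, from which the peak-to-trough ratio is exp(2 x amplitude) of the fitted seasonal term. The simulated counts and parameters are assumptions.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      months = np.arange(120)                               # ten years of monthly counts
      angle = 2 * np.pi * months / 12
      true_ptr = 1.5                                        # true peak-to-trough ratio
      amp = np.log(true_ptr) / 2
      mu = np.exp(3.0 + amp * np.cos(angle))                # seasonal mean, no secular trend
      y = rng.poisson(mu)

      X = sm.add_constant(np.column_stack([np.cos(angle), np.sin(angle)]))
      fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
      b1, b2 = fit.params[1], fit.params[2]
      ptr_hat = np.exp(2 * np.sqrt(b1**2 + b2**2))          # peak-to-trough ratio estimate
      print("estimated peak-to-trough ratio:", round(ptr_hat, 3), " true:", true_ptr)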

  20. Concurrent estimates of carbon export reveal physical biases in ΔO2/Ar-based net community production estimates in the Southern California Bight

    NASA Astrophysics Data System (ADS)

    Haskell, William Z.; Fleming, John C.

    2018-07-01

    Net community production (NCP) represents the amount of biologically-produced organic carbon that is available to be exported out of the surface ocean and is typically estimated using measurements of the O2/Ar ratio in the surface mixed layer under the assumption of negligible vertical transport. However, physical processes can significantly bias NCP estimates based on this in-situ tracer. It is actively debated whether discrepancies between O2/Ar-based NCP and carbon export estimates are due to differences in the location of biological production and export, or the result of physical biases. In this study, we calculate export production across the euphotic depth during two months of upwelling in Southern California in 2014, based on an estimate of the consumption rate of dissolved organic carbon (DOC) and the dissolved: total organic carbon consumption ratio below the euphotic depth. This estimate equals the concurrent O2/Ar-based NCP estimates over the same period that are corrected for physical biases, but is significantly different than NCP estimated without a correction for vertical transport. This comparison demonstrates that concurrent physical transport estimates would significantly improve O2/Ar-based estimates of NCP, particularly in settings with vertical advection. Potential approaches to mitigate this bias are discussed.

  1. Estimating cost ratio distribution between fatal and non-fatal road accidents in Malaysia

    NASA Astrophysics Data System (ADS)

    Hamdan, Nurhidayah; Daud, Noorizam

    2014-07-01

    Road traffic crashes are a major global problem and should be treated as a shared responsibility. In Malaysia, road accidents killed 6,917 people and injured or disabled 17,522 people in 2012, and the government spent about RM9.3 billion in 2009, a cost to the nation of approximately 1 to 2 percent of gross domestic product (GDP) reported annually. The current cost ratio for fatal and non-fatal accidents used by the Ministry of Works Malaysia is simply based on an arbitrary value of 6:4 (equivalently 1.5:1), reflecting the fact that six factors are involved in calculating the accident cost for a fatal accident while four factors are used for a non-fatal accident. This simple indication used by the authority to calculate the cost ratio is doubtful, since there is a lack of mathematical and conceptual evidence to explain how the ratio is determined. The main aim of this study is to determine a new accident cost ratio for fatal and non-fatal accidents in Malaysia based on a quantitative statistical approach. The cost ratio distributions are estimated based on the Weibull distribution. Because official accident cost data are unavailable, insurance claim data for both fatal and non-fatal accidents have been used as proxy information for the actual accident cost. Two types of parameter estimates are used in this study: maximum likelihood estimation (MLE) and robust estimation. The findings of this study reveal that the accident cost ratio for fatal and non-fatal claims is 1.33 when using MLE, while for robust estimation the cost ratio is slightly higher, at 1.51. This study will help the authority to determine a more accurate cost ratio between fatal and non-fatal accidents as compared to the official ratio set by the government, since the cost ratio is an important element used as a weight in modelling road accident related data. The study therefore provides guidance for revising the insurance-claim-based ratio used by the Malaysian road authority, so that an appropriate method suitable for implementation in Malaysia can be identified.
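
    Illustration (not from the record): a hedged sketch of the MLE step, fitting Weibull distributions to fatal and non-fatal claim amounts and taking the ratio of the fitted means. The claim data are synthetic, and the robust-estimation variant is not reproduced here.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      fatal = stats.weibull_min.rvs(c=1.4, scale=90_000, size=400, random_state=rng)
      nonfatal = stats.weibull_min.rvs(c=1.2, scale=60_000, size=2_000, random_state=rng)

      def weibull_mean(sample):
          c, loc, scale = stats.weibull_min.fit(sample, floc=0)   # MLE with location fixed at 0
          return stats.weibull_min.mean(c, loc=loc, scale=scale)

      ratio = weibull_mean(fatal) / weibull_mean(nonfatal)
      print("fatal : non-fatal cost ratio (MLE Weibull means) =", round(ratio, 2))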

  2. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating signal to noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.

  3. Change-in-ratio estimators for populations with more than two subclasses

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1991-01-01

    Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.

  4. X-ray dual energy spectral parameter optimization for bone Calcium/Phosphorus mass ratio estimation

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, P. I.; Fountos, G. P.; Martini, N. D.; Koukou, V. N.; Michail, C. M.; Valais, I. G.; Kandarakis, I. S.; Nikiforidis, G. C.

    2015-09-01

    The calcium (Ca) to phosphorus (P) bone mass ratio has been identified as an important, yet underutilized, risk factor in osteoporosis diagnosis. The purpose of this simulation study is to investigate the use of the effective or mean mass attenuation coefficient in Ca/P mass ratio estimation with a dual-energy method. The investigation was based on optimizing the accuracy of the Ca/P ratio, assessed through the coefficient of variation of the ratio. Different set-ups were examined, based on the K-edge filtering technique and a single X-ray exposure. The modified X-ray output was attenuated by various Ca/P mass ratios resulting in nine calibration points, while keeping the total bone thickness constant. The simulated data were obtained considering a photon-counting energy-discriminating detector. The standard deviation of the residuals was used to compare and evaluate the accuracy of the different dual-energy set-ups. The optimum mass attenuation coefficient for the Ca/P mass ratio estimation was the effective coefficient in all the examined set-ups. The variation of the residuals between the different set-ups was not significant.

  5. Predictive dose-based estimation of systemic exposure multiples in mouse and monkey relative to human for antisense oligonucleotides with 2'-o-(2-methoxyethyl) modifications.

    PubMed

    Yu, Rosie Z; Grundy, John S; Henry, Scott P; Kim, Tae-Won; Norris, Daniel A; Burkey, Jennifer; Wang, Yanfeng; Vick, Andrew; Geary, Richard S

    2015-01-20

    Evaluation of species differences and systemic exposure multiples (or ratios) in toxicological animal species versus human is an ongoing exercise during the course of drug development. The systemic exposure ratios are best estimated by directly comparing area under the plasma concentration-time curves (AUCs), and sometimes by comparing the dose administered, with the dose being adjusted either by body surface area (BSA) or body weight (BW). In this study, the association between AUC ratio and the administered dose ratio from animals to human were studied using a retrospective data-driven approach. The dataset included nine antisense oligonucleotides (ASOs) with 2'-O-(2-methoxyethyl) modifications, evaluated in two animal species (mouse and monkey) following single and repeated parenteral administrations. We found that plasma AUCs were similar between ASOs within the same species, and are predictable to human exposure using a single animal species, either mouse or monkey. Between monkey and human, the plasma exposure ratio can be predicted directly based on BW-adjusted dose ratios, whereas between mouse and human, the exposure ratio would be nearly fivefold lower in mouse compared to human based on BW-adjusted dose values. Thus, multiplying a factor of 5 for the mouse BW-adjusted dose would likely provide a reasonable AUC exposure estimate in human at steady-state.

  6. The impact of differences in EQ-5D and SF-6D utility scores on the acceptability of cost-utility ratios: results across five trial-based cost-utility studies.

    PubMed

    Joore, Manuela; Brunenberg, Danielle; Nelemans, Patricia; Wouters, Emiel; Kuijpers, Petra; Honig, Adriaan; Willems, Danielle; de Leeuw, Peter; Severens, Johan; Boonen, Annelies

    2010-01-01

    This article investigates whether differences in utility scores based on the EQ-5D and the SF-6D have impact on the incremental cost-utility ratios in five distinct patient groups. We used five empirical data sets of trial-based cost-utility studies that included patients with different disease conditions and severity (musculoskeletal disease, cardiovascular pulmonary disease, and psychological disorders) to calculate differences in quality-adjusted life-years (QALYs) based on EQ-5D and SF-6D utility scores. We compared incremental QALYs, incremental cost-utility ratios, and the probability that the incremental cost-utility ratio was acceptable within and across the data sets. We observed small differences in incremental QALYs, but large differences in the incremental cost-utility ratios and in the probability that these ratios were acceptable at a given threshold, in the majority of the presented cost-utility analyses. More specifically, in the patient groups with relatively mild health conditions the probability of acceptance of the incremental cost-utility ratio was considerably larger when using the EQ-5D to estimate utility. While in the patient groups with worse health conditions the probability of acceptance of the incremental cost-utility ratio was considerably larger when using the SF-6D to estimate utility. Much of the appeal in using QALYs as measure of effectiveness in economic evaluations is in the comparability across conditions and interventions. The incomparability of the results of cost-utility analyses using different instruments to estimate a single index value for health severely undermines this aspect and reduces the credibility of the use of incremental cost-utility ratios for decision-making.

  7. A Ratio of Spore to Viable Organisms: A Case Study of the JPL-SAF Cleanroom

    NASA Technical Reports Server (NTRS)

    Hendrickson, Ryan; Urbaniak, Camilla; Malli Mohan, Ganesh Babu; Aronson, Heidi; Venkateswaran, Kasthuri

    2017-01-01

    Spacecraft surfaces that are destined to land on potential life-harboring celestial bodies are required to be rigorously cleaned and continuously monitored for spore bioburden as a proxy for spacecraft cleanliness. The NASA standard assay (NSA), used for spacecraft bioburden estimates, specifically measures spores that are cultivable, aerobic, resistant to heat shock, and able to grow at 30 °C in a nutrient-rich medium. Since the vast majority of microorganisms cannot be cultivated using the NSA, it is necessary to utilize state-of-the-art molecular techniques to better understand the presence of all viable microorganisms, not just those measured with the NSA. In this study, the nutrient-deprived, low-biomass cleanrooms where spacecraft are assembled were used as a surrogate for spacecraft surfaces to measure the ratio of NSA spores to the total viable microorganism population, in order to make comparisons with the 2006 Space Studies Board (SSB) estimate of 1 spore per approximately 50,000 viable organisms. Ninety-eight surface wipe samples were collected from the Spacecraft Assembly Facility (SAF) cleanroom at the Jet Propulsion Laboratory (JPL) over a 6-month period. The samples were processed and analyzed using classical microbiology along with molecular methodology. Traditional microbiology plating methods were used to determine the cultivable bacterial, fungal, and spore populations. Molecular assays were used to determine the total organisms (TO, dead and live) and the viable organisms (VO, live). The TO was measured using adenosine triphosphate (ATP) and quantitative polymerase chain reaction (qPCR) assays. The VO was measured using internal ATP, propidium monoazide (PMA)-qPCR, and flow cytometry (after staining for viable microorganisms) assays. Based on the results, it was possible to establish a ratio between spore counts and VO for each viability assay. The ATP-based spore to VO ratio ranged from 149 to 746 VO per spore, and the bacterial PMA-qPCR assay-based ratio ranged from 314 to 1,491 VO per spore. The most conservative estimate came from fluorescence-activated cell sorting (FACS), which estimated the ratio to be 12,091 VO per 1 NSA spore. Since archaeal (less than 1%) and fungal (approximately 2%) populations were negligible, the spore to VO ratios were based on bacterial population estimates. The most conservative ratio from this study can be used as a replacement for the SSB estimate on nutrient-deprived (oligotrophic), desiccated spacecraft surfaces, to estimate the VO from NSA measurements without utilizing state-of-the-art molecular methods that are costly and require more biomass than is typically found on spacecraft surfaces.

  8. Estimating the ratio of pond size to irrigated soybeans land in Mississippi: A case study

    USDA-ARS?s Scientific Manuscript database

    Although more on-farm storage ponds have been constructed in recent years to mitigate groundwater resources depletion in Mississippi, little effort has been devoted to estimating the ratio of pond size to irrigated crop land based on pond matric and its hydrological conditions. Knowledge of this ra...

  9. Combining matched and unmatched control groups in case-control studies.

    PubMed

    le Cessie, Saskia; Nagelkerke, Nico; Rosendaal, Frits R; van Stralen, Karlijn J; Pomp, Elisabeth R; van Houwelingen, Hans C

    2008-11-15

    Multiple control groups in case-control studies are used to control for different sources of confounding. For example, cases can be contrasted with matched controls to adjust for multiple genetic or unknown lifestyle factors and simultaneously contrasted with an unmatched population-based control group. Inclusion of different control groups for a single exposure analysis yields several estimates of the odds ratio, all using only part of the data. Here the authors introduce an easy way to combine odds ratios from several case-control analyses with the same cases. The approach is based upon methods used for meta-analysis but takes into account the fact that the same cases are used and that the estimated odds ratios are therefore correlated. Two ways of estimating this correlation are discussed: sandwich methodology and the bootstrap. Confidence intervals for the pooled estimates and a test for checking whether the odds ratios in the separate case-control studies differ significantly are derived. The performance of the method is studied by simulation and by applying the methods to a large study on risk factors for thrombosis, the MEGA Study (1999-2004), wherein cases with first venous thrombosis were included with a matched control group of partners and an unmatched population-based control group.
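
    Illustration (not from the record): a hedged sketch of the pooling step, combining two log odds ratios that share the same cases by generalized least squares given an estimated covariance between them. The numbers are illustrative, and the sandwich/bootstrap estimation of that covariance described above is not reproduced here.

      import numpy as np

      log_or = np.array([np.log(2.1), np.log(1.8)])     # matched- and population-control analyses
      se = np.array([0.25, 0.18])
      rho = 0.4                                          # correlation induced by the shared cases
      cov = np.array([[se[0]**2, rho * se[0] * se[1]],
                      [rho * se[0] * se[1], se[1]**2]])

      w = np.linalg.solve(cov, np.ones(2))               # GLS weights: Sigma^{-1} 1
      pooled = w @ log_or / w.sum()
      pooled_se = np.sqrt(1.0 / w.sum())
      print(f"pooled OR = {np.exp(pooled):.2f} "
            f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}-{np.exp(pooled + 1.96 * pooled_se):.2f})")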

  10. Estimation of selected streamflow statistics for a network of low-flow partial-record stations in areas affected by Base Realignment and Closure (BRAC) in Maryland

    USGS Publications Warehouse

    Ries, Kernell G.; Eng, Ken

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. 
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 90.0 to 7.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estimate.

  11. Change-in-ratio

    USGS Publications Warehouse

    Udevitz, Mark S.; El-Shaarawi, Abdel H.; Piegorsch, Walter W.

    2002-01-01

    Change-in-ratio (CIR) methods are used to estimate parameters for ecological populations subject to differential removals from population subclasses. Subclasses can be defined according to criteria such as sex, age, or size of individuals. Removals are generally in the form of closely monitored sport or commercial harvests. Estimation is based on observed changes in subclass proportions caused by the removals.

  12. Change-in-ratio

    USGS Publications Warehouse

    Udevitz, Mark S.

    2014-01-01

    Change-in-ratio (CIR) methods are used to estimate parameters for ecological populations subject to differential removals from population subclasses. Subclasses can be defined according to criteria such as sex, age, or size of individuals. Removals are generally in the form of closely monitored sport or commercial harvests. Estimation is based on observed changes in subclass proportions caused by the removals.

  13. A particle filter for ammonia coverage ratio and input simultaneous estimations in Diesel-engine SCR system.

    PubMed

    Sun, Kangfeng; Ji, Fenzhu; Yan, Xiaoyu; Jiang, Kai; Yang, Shichun

    2018-01-01

    As NOx emissions legislation for Diesel engines is becoming more stringent than ever before, aftertreatment systems have been widely used in many countries. Specifically, to reduce NOx emissions, the selective catalytic reduction (SCR) system has become one of the most promising techniques for Diesel-engine vehicle applications. In the SCR system, the input ammonia concentration and the ammonia coverage ratio are regarded as essential states in the control-oriented model. Currently, an ammonia sensor placed upstream of the SCR catalyst is a good strategy for obtaining the input ammonia concentration value. However, a physical sensor would increase the SCR system cost, and the ammonia coverage ratio information cannot be directly measured by a physical sensor. Aiming to tackle this problem, an observer based on a particle filter (PF) is investigated to estimate the input ammonia concentration and the ammonia coverage ratio. Simulation results through the experimentally validated full vehicle simulator cX-Emission show that the performance of the PF-based observer is outstanding, and the estimation error is very small.
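
    Illustration (not from the record): a hedged sketch of a generic bootstrap particle filter tracking a toy scalar state standing in for the ammonia coverage ratio (bounded in [0, 1]) from a noisy nonlinear measurement. The SCR plant model and the cX-Emission simulator are not reproduced; the dynamics and measurement below are assumptions.

      import numpy as np

      rng = np.random.default_rng(8)
      T, n_particles = 200, 500

      def dynamics(x):            # slow random walk, clipped to [0, 1]
          return np.clip(x + rng.normal(0, 0.01, size=np.shape(x)), 0.0, 1.0)

      def measure(x):             # e.g. a downstream NOx-like signal decreasing in coverage
          return np.exp(-3.0 * x)

      # Simulate a "true" trajectory and noisy measurements.
      x_true = np.empty(T); x_true[0] = 0.2
      for t in range(1, T):
          x_true[t] = dynamics(x_true[t - 1])
      y = measure(x_true) + rng.normal(0, 0.02, T)

      # Bootstrap particle filter: propagate, weight by measurement likelihood, resample.
      particles = rng.uniform(0, 1, n_particles)
      estimate = np.empty(T)
      for t in range(T):
          particles = dynamics(particles)
          w = np.exp(-0.5 * ((y[t] - measure(particles)) / 0.02) ** 2)
          w /= w.sum()
          estimate[t] = np.sum(w * particles)
          particles = rng.choice(particles, size=n_particles, p=w)   # multinomial resampling

      print("RMS estimation error:", np.sqrt(np.mean((estimate - x_true) ** 2)).round(4))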

  14. A particle filter for ammonia coverage ratio and input simultaneous estimations in Diesel-engine SCR system

    PubMed Central

    Ji, Fenzhu; Yan, Xiaoyu; Jiang, Kai

    2018-01-01

    As NOx emissions legislation for Diesel engines is becoming more stringent than ever before, aftertreatment systems have been widely used in many countries. Specifically, to reduce NOx emissions, the selective catalytic reduction (SCR) system has become one of the most promising techniques for Diesel-engine vehicle applications. In the SCR system, the input ammonia concentration and the ammonia coverage ratio are regarded as essential states in the control-oriented model. Currently, an ammonia sensor placed upstream of the SCR catalyst is a good strategy for obtaining the input ammonia concentration value. However, a physical sensor would increase the SCR system cost, and the ammonia coverage ratio information cannot be directly measured by a physical sensor. Aiming to tackle this problem, an observer based on a particle filter (PF) is investigated to estimate the input ammonia concentration and the ammonia coverage ratio. Simulation results through the experimentally validated full vehicle simulator cX-Emission show that the performance of the PF-based observer is outstanding, and the estimation error is very small. PMID:29408924

  15. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models; and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of tests errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.

  16. Semi-empirical estimation of organic compound fugacity ratios at environmentally relevant system temperatures.

    PubMed

    van Noort, Paul C M

    2009-06-01

    Fugacity ratios of organic compounds are used to calculate (subcooled) liquid properties, such as solubility or vapour pressure, from solid properties and vice versa. They can be calculated from the entropy of fusion, the melting temperature, and heat capacity data for the solid and the liquid. For many organic compounds, values for the fusion entropy are lacking. Heat capacity data are even scarcer. In the present study, semi-empirical compound class specific equations were derived to estimate fugacity ratios from molecular weight and melting temperature for polycyclic aromatic hydrocarbons and polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans. These equations estimate fugacity ratios with an average standard error of about 0.05 log units. In addition, for compounds with known fusion entropy values, a general semi-empirical correction equation based on molecular weight and melting temperature was derived for estimation of the contribution of heat capacity differences to the fugacity ratio. This equation estimates the heat capacity contribution correction factor with an average standard error of 0.02 log units for polycyclic aromatic hydrocarbons, polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans.
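
    The semi-empirical equations described here build on the standard fugacity-ratio approximation in which heat-capacity terms are neglected; as a hedged reference point:

    ```latex
    % Standard fugacity-ratio approximation (heat-capacity terms neglected):
    \ln \frac{f_S}{f_L} \;=\; -\,\frac{\Delta S_{\mathrm{fus}}\,(T_m - T)}{R\,T},
    % where \Delta S_{fus} is the entropy of fusion, T_m the melting temperature,
    % T the system temperature, and R the gas constant. The compound-class-specific
    % equations in the abstract effectively replace \Delta S_{fus} (and the
    % heat-capacity correction) with functions of molecular weight and T_m.
    ```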

  17. Estimating population size for Capercaillie (Tetrao urogallus L.) with spatial capture-recapture models based on genotypes from one field sample

    USGS Publications Warehouse

    Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy

    2015-01-01

    We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total population size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.

  18. The Impact of New Estimates of Mixing Ratio and Flux-based Halogen Scenarios on Ozone Evolution

    NASA Technical Reports Server (NTRS)

    Oman, Luke D.; Douglass, Anne R.; Liang, Qing; Strahan, Susan E.

    2014-01-01

    The evolution of ozone in the 21st century has been shown to be mainly impacted by the halogen emissions scenario and predicted changes in the circulation of the stratosphere. New estimates of mixing ratio and flux-based emission scenarios have been produced from the SPARC Lifetime Assessment 2013. Simulations using the Goddard Earth Observing System Chemistry-Climate Model (GEOSCCM) are conducted using this new A1 2014 halogen scenario and compared to ones using the A1 2010 scenario. This updated version of GEOSCCM includes a realistic representation of the Quasi-Biennial Oscillation and improvements related to the break up of the Antarctic polar vortex. We will present results of the ozone evolution over the recent past and 21st century to the A1 2010, A1 2014 mixing ratio, and an A1 2014 flux-based halogen scenario. Implications of the uncertainties in these estimates as well as those from possible circulation changes will be discussed.

  19. Association of seat belt use with death: a comparison of estimates based on data from police and estimates based on data from trained crash investigators

    PubMed Central

    Cummings, P

    2002-01-01

    Objective: Estimates of any protective effect of seat belts could be exaggerated if some crash survivors falsely claimed to police that they were belted in order to avoid a fine. The aim of this study was to determine whether estimates of seat belt effectiveness differed when based on belt use as recorded by the police and belt use determined by trained crash investigators. Design: Matched cohort study. Setting: United States. Subjects: Adult driver-passenger pairs in the same vehicle with at least one death (n=1689) sampled from crashes during 1988–2000; data from the National Accident Sampling System Crashworthiness Data System. Main outcome measure: Risk ratio for death among belted occupants compared with those not belted. Results: Trained investigators determined post-crash seat belt use by vehicle inspections for 92% of the occupants, confidential interviews with survivors for 5%, and medical or autopsy reports for 3%. Using this information, the adjusted risk ratio for belted persons was 0.36 (95% confidence interval 0.29 to 0.46). The risk ratio was also 0.36 using police reported belt use for the same crashes. Conclusions: Estimates of seat belt effects based upon police data were not substantially different from estimates which used data obtained by trained crash investigators who were not police officers. These results were from vehicles in which at least one front seat occupant died; these findings may not apply to estimates which use data from crashes without a death. PMID:12460976

  20. Accounting for Time-Varying Confounding in the Relationship Between Obesity and Coronary Heart Disease: Analysis With G-Estimation: The ARIC Study.

    PubMed

    Shakiba, Maryam; Mansournia, Mohammad Ali; Salari, Arsalan; Soori, Hamid; Mansournia, Nasrin; Kaufman, Jay S

    2018-06-01

    In longitudinal studies, standard analysis may yield biased estimates of exposure effect in the presence of time-varying confounders that are also intermediate variables. We aimed to quantify the relationship between obesity and coronary heart disease (CHD) by appropriately adjusting for time-varying confounders. This study was performed in a subset of participants from the Atherosclerosis Risk in Communities (ARIC) Study (1987-2010), a US study designed to investigate risk factors for atherosclerosis. General obesity was defined as body mass index (weight (kg)/height (m)²) ≥30, and abdominal obesity (AOB) was defined according to either waist circumference (≥102 cm in men and ≥88 cm in women) or waist:hip ratio (≥0.9 in men and ≥0.85 in women). The association of obesity with CHD was estimated by G-estimation and compared with results from accelerated failure-time models using 3 specifications. The first model, which adjusted for baseline covariates, excluding metabolic mediators of obesity, showed increased risk of CHD for all obesity measures. Further adjustment for metabolic mediators in the second model and time-varying variables in the third model produced negligible changes in the hazard ratios. The hazard ratios estimated by G-estimation were 1.15 (95% confidence interval (CI): 0.83, 1.47) for general obesity, 1.65 (95% CI: 1.35, 1.92) for AOB based on waist circumference, and 1.38 (95% CI: 1.13, 1.99) for AOB based on waist:hip ratio, suggesting that AOB increased the risk of CHD. The G-estimated hazard ratios for both measures were further from the null than those derived from standard models.

  1. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
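
    The conditional-independence form underlying the empirical likelihood ratio model can be written as follows (a standard formulation consistent with, though not quoted from, the abstract):

    ```latex
    % Conditional-independence likelihood-ratio model for landslide occurrence L
    % in a grid cell with predictor layers x_1,\dots,x_n:
    \frac{P(L \mid x_1,\dots,x_n)}{P(\bar L \mid x_1,\dots,x_n)}
      \;=\;
    \frac{P(L)}{P(\bar L)} \;\prod_{i=1}^{n} \Lambda_i(x_i),
    \qquad
    \Lambda_i(x_i) = \frac{P(x_i \mid L)}{P(x_i \mid \bar L)} .
    % The logistic discriminant alternative instead models the log posterior odds
    % directly as a linear (logistic) function of the same predictors.
    ```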

  2. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration

    PubMed Central

    Doss, Hani; Tan, Aixin

    2017-01-01

    In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown constant. For each l, we have an iid sample from πl, and the problem is to estimate the ratios ml/ms for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the πl's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case. PMID:28706463
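
    As a minimal illustration of the quantity being estimated (the simplest importance-sampling identity for a ratio of normalizing constants with iid draws, not the regenerative Markov-chain estimator developed in the paper):

    ```python
    import numpy as np

    # Identity: m1/m2 = E_{pi2}[ nu1(X) / nu2(X) ] with X ~ pi2 = nu2/m2.
    # nu1, nu2 are unnormalized Gaussian densities, so the true ratio is known.
    rng = np.random.default_rng(1)

    def nu1(x):          # unnormalized N(0, 1): m1 = sqrt(2*pi)
        return np.exp(-0.5 * x ** 2)

    def nu2(x):          # unnormalized N(0, 2^2): m2 = 2 * sqrt(2*pi)
        return np.exp(-0.5 * (x / 2.0) ** 2)

    x = rng.normal(0.0, 2.0, size=100_000)      # iid draws from pi2
    ratio_hat = np.mean(nu1(x) / nu2(x))        # Monte Carlo estimate of m1/m2
    print(ratio_hat, "vs true", 0.5)
    ```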

  3. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration.

    PubMed

    Doss, Hani; Tan, Aixin

    2014-09-01

    In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown constant. For each l, we have an iid sample from πl, and the problem is to estimate the ratios ml/ms for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the πl's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case.

  4. From reading numbers to seeing ratios: a benefit of icons for risk comprehension.

    PubMed

    Tubau, Elisabet; Rodríguez-Ferreiro, Javier; Barberia, Itxaso; Colomé, Àngels

    2018-06-21

    Promoting a better understanding of statistical data is becoming increasingly important for improving risk comprehension and decision-making. In this regard, previous studies on Bayesian problem solving have shown that iconic representations help infer frequencies in sets and subsets. Nevertheless, the mechanisms by which icons enhance performance remain unclear. Here, we tested the hypothesis that the benefit offered by icon arrays lies in a better alignment between presented and requested relationships, which should facilitate the comprehension of the requested ratio beyond the represented quantities. To this end, we analyzed individual risk estimates based on data presented either in standard verbal presentations (percentages and natural frequency formats) or as icon arrays. Compared to the other formats, icons led to estimates that were more accurate, and importantly, promoted the use of equivalent expressions for the requested probability. Furthermore, whereas the accuracy of the estimates based on verbal formats depended on their alignment with the text, all the estimates based on icons were equally accurate. Therefore, these results support the proposal that icons enhance the comprehension of the ratio and its mapping onto the requested probability and point to relational misalignment as potential interference for text-based Bayesian reasoning. The present findings also argue against an intrinsic difficulty with understanding single-event probabilities.

  5. Evaluation of Radar Vegetation Indices for Vegetation Water Content Estimation Using Data from a Ground-Based SMAP Simulator

    NASA Technical Reports Server (NTRS)

    Srivastava, Prashant K.; O'Neill, Peggy; Cosh, Michael; Lang, Roger; Joseph, Alicia

    2015-01-01

    Vegetation water content (VWC) is an important component of microwave soil moisture retrieval algorithms. This paper aims to estimate VWC using L band active and passive radar/radiometer datasets obtained from a NASA ground-based Soil Moisture Active Passive (SMAP) simulator known as ComRAD (Combined Radar/Radiometer). Several approaches to derive vegetation information from radar and radiometer data such as HH, HV, VV, Microwave Polarization Difference Index (MPDI), HH/VV ratio, HV/(HH+VV), HV/(HH+HV+VV) and Radar Vegetation Index (RVI) are tested for VWC estimation through a generalized linear model (GLM). The overall analysis indicates that HV radar backscattering could be used for VWC content estimation with highest performance followed by HH, VV, MPDI, RVI, and other ratios.
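
    The polarization ratios named in the abstract have common textbook definitions; a hedged sketch of two of them (the exact forms used with the ComRAD data are not given in the abstract, and the values below are illustrative):

    ```python
    import numpy as np

    def radar_vegetation_index(sigma_hh, sigma_hv, sigma_vv):
        """Common RVI definition; backscatter must be in linear power units, not dB."""
        return 8.0 * sigma_hv / (sigma_hh + sigma_vv + 2.0 * sigma_hv)

    def mpdi(tb_v, tb_h):
        """Microwave Polarization Difference Index from V- and H-pol brightness temperatures (K)."""
        return (tb_v - tb_h) / (tb_v + tb_h)

    def db_to_linear(sigma_db):
        return 10.0 ** (np.asarray(sigma_db) / 10.0)

    # illustrative values only (not ComRAD measurements)
    hh, hv, vv = db_to_linear([-8.0, -15.0, -9.0])
    print("RVI :", radar_vegetation_index(hh, hv, vv))
    print("MPDI:", mpdi(260.0, 245.0))
    ```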

  6. Applying a low energy HPGe detector gamma ray spectrometric technique for the evaluation of Pu/Am ratio in biological samples.

    PubMed

    Singh, I S; Mishra, Lokpati; Yadav, J R; Nadar, M Y; Rao, D D; Pradeepkumar, K S

    2015-10-01

    The estimation of the Pu/(241)Am ratio in biological samples is an important input for the assessment of the internal dose received by workers. Radiochemical separation of Pu isotopes and (241)Am in a sample followed by alpha spectrometry is a widely used technique for determining the Pu/(241)Am ratio. However, this method is time consuming, and a quicker estimate is often required. In this work, the Pu/(241)Am ratio in biological samples was estimated with HPGe detector-based measurements using the gamma/X-rays emitted by these radionuclides. These results were compared with those obtained from alpha spectrometry of the samples after radiochemical analysis and were found to be in good agreement. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Sex ratio estimations of loggerhead sea turtle hatchlings at Kuriat islands, Tunisia: can minor nesting sites contribute to compensate globally female-biased sex ratio?

    PubMed

    Jribi, Imed; Bradai, Mohamed Nejmeddine

    2014-01-01

    Hatchling sex ratios in the loggerhead turtle Caretta caretta were estimated by placing electronic temperature recorders in seven nests at Kuriat islands (Tunisia) during the 2013 nesting season. Based on the mean temperatures during the middle third of the incubation period, and on incubation duration, the sex ratio of hatchlings at Kuriat islands was highly male-biased. Presently, the majority of hatchling sex ratio studies are focused on major nesting areas, whereby the sex ratios are universally believed to be heavily female-biased. Here we present findings from a minor nesting site in the Mediterranean, where the hatchling sex ratio was found to be male-biased, suggesting a potential difference between major and minor nesting sites.

  8. Evaluation for relationship among source parameters of underground nuclear tests in Northern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Kim, G.; Che, I. Y.

    2017-12-01

    We evaluated relationships among source parameters of underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the event locations are small on a regional scale; these small location differences validate a linear model assumption. We estimated source spectral ratios by removing path effects from spectral ratios of the observed seismograms, and derived empirical relationships among depths of burial and yields based on theoretical source models.

  9. A bounding estimate of neutron dose based on measured photon dose around single pass reactors at the Hanford site.

    PubMed

    Taulbee, Timothy D; Glover, Samuel E; Macievic, Gregory V; Hunacek, Mickey; Smith, Cheryl; DeBord, Gary W; Morris, Donald; Fix, Jack

    2010-07-01

    Neutron and photon radiation survey records have been used to evaluate and develop a neutron to photon (NP) ratio to reconstruct neutron doses to workers around Hanford's single pass reactors that operated from 1945 to 1972. A total of 5,773 paired neutron and photon measurements extracted from 57 boxes of survey records were used in the development of the NP ratio. The NP ratio enables the recorded dose from an individual's photon dosimeter badge to be used to estimate the unmonitored neutron dose. The Pearson rank correlation between the neutron and photon measurements was 0.71. The NP ratio best fit a lognormal distribution with a geometric mean (GM) of 0.8 and a geometric standard deviation (GSD) of 2.95, and the upper 95th percentile of this distribution was 4.75. An estimate of the neutron dose based on this NP ratio is considered bounding due to evidence that up to 70% of the total photon exposure received by workers around the single pass reactors occurs during shutdown maintenance and refueling activities when there is no significant neutron exposure. Thus when this NP ratio is applied to the total measured photon dose from an individual film badge dosimeter, the resulting neutron dose is considered bounded.
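
    The reported upper 95th percentile follows directly from the lognormal GM and GSD; a small worked check (z = 1.645 for the one-sided 95th percentile):

    ```python
    import math

    # Upper 95th percentile of a lognormal: exp(ln GM + z_0.95 * ln GSD).
    # Using the abstract's GM = 0.8 and GSD = 2.95 reproduces the stated 4.75.
    gm, gsd, z95 = 0.8, 2.95, 1.645
    p95 = math.exp(math.log(gm) + z95 * math.log(gsd))
    print(round(p95, 2))   # ~4.74, consistent with the reported 4.75
    ```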

  10. Estimating the ratio of pond size to irrigated soybean land in Mississippi: a case study

    Treesearch

    Ying Ouyang; G. Feng; J. Read; T. D. Leininger; J. N. Jenkins

    2016-01-01

    Although more on-farm storage ponds have been constructed in recent years to mitigate groundwater resources depletion in Mississippi, little effort has been devoted to estimating the ratio of on-farm water storage pond size to irrigated crop land based on pond metric and its hydrogeological conditions.  In this study, two simulation scenarios were chosen to...

  11. Estimation of S/G ratio in woods using 1064 nm FT-Raman spectroscopy

    Treesearch

    Umesh P. Agarwal; Sally A. Ralph; Dharshana Padmakshan; Sarah Liu; Steven D. Karlen; Cliff Foster; John Ralph

    2015-01-01

    Two simple methods based on the 370 cm⁻¹ Raman band intensity were developed for estimation of syringyl-to-guaiacyl (S/G) ratio in woods. The methods, in principle, are representative of the whole cell wall lignin and not just the portion of lignin that gets cleaved to release monomers, for example, during certain S/G chemical analyses. As such,...

  12. Nonlinear least squares regression for single image scanning electron microscope signal-to-noise ratio estimation.

    PubMed

    Sim, K S; Norhisham, S

    2016-11-01

    A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. SNR estimation based on the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and the combination of nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. The NLLSR method is shown to produce better estimation accuracy than the other three existing methods. According to the SNR results obtained from the experiment, the NLLSR method produces an SNR error difference of less than approximately 1% compared with the other three existing methods. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  13. Evaluation of freeway motorist assist program : final report, September 30, 2009.

    DOT National Transportation Integrated Search

    2010-02-01

    This evaluation of the Motorist Assist (MA) program in St. Louis estimated that MA has an annual benefit-cost ratio (B/C) of 38.25:1 using 2009 dollars. This estimate was based on nationally accepted AASHTO methodology and was based on 1082 secondary...

  14. Novel point estimation from a semiparametric ratio estimator (SPRE): long-term health outcomes from short-term linear data, with application to weight loss in obesity.

    PubMed

    Weissman-Miller, Deborah

    2013-11-02

    Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.

  15. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1983-01-01

    Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator viewed as especially promising is the errors-in-variables regression estimator. Investigations established the need for caution with this estimator when the ratio of the two error variances is not precisely known.

  16. Ocean subsurface particulate backscatter estimation from CALIPSO spaceborne lidar measurements

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Pan, Delu; Wang, Tianyu; Mao, Zhihua

    2017-10-01

    A method for ocean subsurface particulate backscatter estimation from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite was demonstrated. The effects of the CALIOP receiver's transient response on the attenuated backscatter profile were first removed. The two-way transmittance of the overlying atmosphere was then estimated as the ratio of the measured ocean surface attenuated backscatter to the theoretical value computed from wind-driven wave slope variance. Finally, particulate backscatter was estimated from the depolarization ratio as the ratio of the column-integrated cross-polarized and co-polarized channels. Statistical results show that the particulate backscatter derived from CALIOP data with this method agrees reasonably well with chlorophyll-a concentrations from MODIS data. This indicates the potential of space-borne lidar to estimate global primary productivity and particulate carbon stocks.

  17. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications

    NASA Astrophysics Data System (ADS)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and severely degrade communication performance. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems, based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.

  18. Mapping surface energy balance components by combining landsat thematic mapper and ground-based meteorological data

    USGS Publications Warehouse

    Moran, M.S.; Jackson, R. D.; Raymond, L.H.; Gay, L.W.; Slater, P.N.

    1989-01-01

    Surface energy balance components were evaluated by combining satellite-based spectral data with on-site measurements of solar irradiance, air temperature, wind speed, and vapor pressure. Maps of latent heat flux density (λE) and net radiant flux density (Rn) were produced using Landsat Thematic Mapper (TM) data for three dates: 23 July 1985, 5 April 1986, and 24 June 1986. On each date, a Bowen-ratio apparatus, located in a vegetated field, was used to measure λE and Rn at a point within the field. Estimates of λE and Rn were also obtained using radiometers aboard an aircraft flown at 150 m above ground level. The TM-based estimates differed from the Bowen-ratio and aircraft-based estimates by less than 12% over mature fields of cotton, wheat, and alfalfa, where λE and Rn ranged from 400 to 700 W m⁻². © 1989.

  19. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    NASA Astrophysics Data System (ADS)

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of diffuse attenuation Kd and beam attenuation C (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.

  20. Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis; Gold, Dara

    2013-01-01

    We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
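
    As a hedged illustration of the generic Wald SPRT machinery the abstract builds on (a textbook Gaussian-mean test with Wald's thresholds, not the collision-probability likelihood ratio derived in the paper):

    ```python
    import math

    def wald_sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
        """Generic Wald SPRT for H0: mean = mu0 vs H1: mean = mu1, known sigma.
        Returns ('H0' | 'H1' | 'continue', cumulative log-likelihood ratio)."""
        upper = math.log((1.0 - beta) / alpha)   # accept H1 at or above this
        lower = math.log(beta / (1.0 - alpha))   # accept H0 at or below this
        llr = 0.0
        for x in samples:
            # Gaussian log-likelihood-ratio increment for one observation
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
            if llr >= upper:
                return "H1", llr
            if llr <= lower:
                return "H0", llr
        return "continue", llr

    # usage with made-up observations
    print(wald_sprt([0.9, 1.1, 1.2, 0.8, 1.0, 1.3], mu0=0.0, mu1=1.0, sigma=0.5))
    ```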

  1. Evapotranspiration from areas of native vegetation in west-central Florida

    USGS Publications Warehouse

    Bidlake, W.R.; Woodham, W.M.; Lopez, M.A.

    1993-01-01

    A study was made to examine the suitability of three different micrometeorological methods for estimating evapotranspiration from selected areas of native vegetation in west-central Florida and to estimate annual evapotranspiration from those areas. Evapotranspiration was estimated using the energy-balance Bowen ratio and eddy correlation methods. Potential evapotranspiration was computed using the Penman equation. The energy-balance Bowen ratio method was used to estimate diurnal evapotranspiration at unforested sites and yielded reasonable results; however, measurements indicated that the magnitudes of air temperature and vapor-pressure gradients above the forested sites were too small to obtain reliable evapotranspiration measurements with the energy-balance Bowen ratio system. Analysis of the surface energy balance indicated that sensible and latent heat fluxes computed using standard eddy correlation computation methods did not adequately account for available energy. Eddy correlation data were combined with the equation for the surface energy balance to yield two additional estimates of evapotranspiration. Daily potential evapotranspiration and evapotranspiration estimated using the energy-balance Bowen ratio method were not correlated at an unforested, dry prairie site, but they were correlated at a marsh site. Estimates of annual evapotranspiration for sites within the four vegetation types, which were based on energy-balance Bowen ratio and eddy correlation measurements, were 1,010 millimeters for dry prairie sites, 990 millimeters for marsh sites, 1,060 millimeters for pine flatwood sites, and 970 millimeters for a cypress swamp site.
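
    The energy-balance Bowen ratio partitioning underlying the method is the standard one:

    ```latex
    % Energy-balance Bowen-ratio method: with net radiation R_n, soil heat flux G,
    % and the Bowen ratio \beta estimated from vertical gradients of air
    % temperature (\Delta T) and vapour pressure (\Delta e),
    \beta \;=\; \frac{H}{\lambda E} \;=\; \gamma\,\frac{\Delta T}{\Delta e},
    \qquad
    \lambda E \;=\; \frac{R_n - G}{1 + \beta},
    % where \gamma is the psychrometric constant; evapotranspiration follows by
    % dividing \lambda E by the latent heat of vaporization \lambda.
    ```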

  2. Assessment of Export Efficiency Equations in the Southern Ocean Applied to Satellite-Based Net Primary Production

    NASA Astrophysics Data System (ADS)

    Arteaga, Lionel; Haëntjens, Nils; Boss, Emmanuel; Johnson, Kenneth S.; Sarmiento, Jorge L.

    2018-04-01

    Carbon export efficiency (e-ratio) is defined as the fraction of organic carbon fixed through net primary production (NPP) that is exported out of the surface productive layer of the ocean. Recent observations for the Southern Ocean suggest a negative e-ratio versus NPP relationship, and a reduced dependency of export efficiency on temperature, different than in the global domain. In this study, we complement information from a passive satellite sensor with novel space-based lidar observations of ocean particulate backscattering to infer NPP over the entire annual cycle, and estimate Southern Ocean export rates from five different empirical models of export efficiency. Inferred Southern Ocean NPP falls within the range of previous studies, with a mean estimate of 15.8 (± 3.9) Pg C yr⁻¹ for the region south of 30°S during the 2005-2016 period. We find that an export efficiency model that accounts for silica (Si) ballasting, which is constrained by observations with a negative e-ratio versus NPP relationship, shows the best agreement with in situ-based estimates of annual net community production (annual export of 2.7 ± 0.6 Pg C yr⁻¹ south of 30°S). By contrast, models based on the analysis of global observations with a positive e-ratio versus NPP relationship predict annually integrated export rates that are ~33% higher than the Si-dependent model.

  3. Multi-Satellite Estimates of Land-Surface Properties for Determination of Energy and Water Budgets

    NASA Technical Reports Server (NTRS)

    Menzel, W. Paul; Rabin, Robert M.; Neale, Christopher M. U.; Gallo, Kevin; Diak, George R.

    1998-01-01

    Using the WETNET database, existing methods for the estimation of surface wetness from SSM/I data have been assessed and further developed. A physical-statistical method for optimal estimation of daily surface heat flux and Bowen ratio on the mesoscale has been developed and tested. This method is based on observations of daytime planetary boundary layer (PBL) growth from operational rawinsondes and daytime land-surface temperature amplitude from Geostationary Operational Environmental Satellites (GOES). The mesoscale patterns of these heat fluxes have been compared with an AVHRR-based vegetation index and surface wetness (separately estimated from SSM/I and in situ observations). Cases of the 1988 Midwest drought and a surface/atmosphere moisture gradient (dry-line) in the southern Plains were studied. The analyses revealed significant variations in sensible heat flux (S0) and Bowen ratio (B0) associated with vegetation cover and antecedent precipitation. Relationships for surface heat flux (and Bowen ratio) from antecedent precipitation and vegetation index have been developed and compared to other findings. Results from this project are reported in the following reviewed literature.

  4. Precipitation recycling in the Amazon basin

    NASA Technical Reports Server (NTRS)

    Eltahir, E. A. B.; Bras, R. L.

    1994-01-01

    Precipitation recycling is the contribution of evaporation within a region to precipitation in that same region. The recycling rate is a diagnostic measure of the potential for interactions between land surface hydrology and regional climate. In this paper we present a model for describing the seasonal and spatial variability of the recycling process. The precipitation recycling ratio, ρ, is the basic variable in describing the recycling process: ρ is the fraction of precipitation at a certain location and time that is contributed by evaporation within the region under study. The recycling model is applied in studying the hydrologic cycle in the Amazon basin. It is estimated that about 25% of all the rain that falls in the Amazon basin is contributed by evaporation within the basin. This estimate is based on analysis of a data set supplied by the European Centre for Medium-range Weather Forecasts (ECMWF). The same analysis is repeated using a different data set from the Geophysical Fluid Dynamics Laboratory (GFDL). Based on this data set, the recycling ratio is estimated to be 35%. The seasonal variability of the recycling ratio is small compared with the yearly average. The new estimates of the recycling ratio are compared with results of previous studies, and the differences are explained.

  5. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    NASA Astrophysics Data System (ADS)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of counts in an area, which has the greatest uncertainty if the disease is rare or the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which may solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method and can overcome the SMR problem when there is no observed bladder cancer in an area.
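
    A minimal sketch of the crude SMR and of a simple log-normal-style shrinkage of the log relative risks; the counts are hypothetical and the empirical-Bayes shrinkage shown here is only an illustration of the smoothing idea, not the authors' WinBUGS model:

    ```python
    import numpy as np

    observed = np.array([0, 3, 12, 7, 1], dtype=float)   # hypothetical case counts per area
    expected = np.array([1.2, 2.5, 9.0, 6.3, 0.8])       # expected counts from reference rates

    smr = observed / expected                            # crude SMR_i = O_i / E_i
    print("crude SMR:", np.round(smr, 2))

    # log-normal-style shrinkage of log(O/E) toward the overall mean
    log_rr = np.log(np.where(observed > 0, observed, 0.5) / expected)  # guard zero counts
    prior_mean, prior_var = log_rr.mean(), log_rr.var()
    data_var = 1.0 / np.maximum(observed, 0.5)           # rough variance of log(O/E)
    weight = prior_var / (prior_var + data_var)          # shrinkage weight per area
    smoothed = np.exp(weight * log_rr + (1.0 - weight) * prior_mean)
    print("smoothed RR:", np.round(smoothed, 2))
    ```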

  6. Sex ratios of fledgling and recaptured subadult spotted owls in the southern Sierra Nevada

    Treesearch

    George N. Steger

    1995-01-01

    Estimates of instantaneous growth rates (λ) of spotted owl (Strix occidentalis) populations have been based on demographic data that uniformly assumed an equal sex ratio among fledglings. In this study, sex ratios of subadults, banded as juveniles, and fledgling California spotted owls (S. o. occidentalis) were observed and compared to an assumed 1 : 1 ratio. The...

  7. Age Estimation of African Lions Panthera leo by Ratio of Tooth Areas

    PubMed Central

    Ikanda, Dennis; Ferrante, Luigi; Chardonnet, Philippe; Mesochina, Pascal; Cameriere, Roberto

    2016-01-01

    Improved age estimation of African lions Panthera leo is needed to address a number of pressing conservation issues. Here we present a formula for estimating lion age to within six months of known age based on measuring the extent of pulp closure from X-rays, or Ratio Of tooth AReas (ROAR). Derived from measurements taken from lions aged 3–13 years for which exact ages were known, the formula explains 92% of the total variance. The method of calculating the pulp/tooth area ratio, which has been used extensively in forensic science, is novel in the study of lion aging. As a quantifiable measure, ROAR offers improved lion age estimates for population modeling and investigations of age-related mortality, and may assist national and international wildlife authorities in judging compliance with regulatory measures involving age. PMID:27089506

  8. Age Estimation of African Lions Panthera leo by Ratio of Tooth Areas.

    PubMed

    White, Paula A; Ikanda, Dennis; Ferrante, Luigi; Chardonnet, Philippe; Mesochina, Pascal; Cameriere, Roberto

    2016-01-01

    Improved age estimation of African lions Panthera leo is needed to address a number of pressing conservation issues. Here we present a formula for estimating lion age to within six months of known age based on measuring the extent of pulp closure from X-rays, or Ratio Of tooth AReas (ROAR). Derived from measurements taken from lions aged 3-13 years for which exact ages were known, the formula explains 92% of the total variance. The method of calculating the pulp/tooth area ratio, which has been used extensively in forensic science, is novel in the study of lion aging. As a quantifiable measure, ROAR offers improved lion age estimates for population modeling and investigations of age-related mortality, and may assist national and international wildlife authorities in judging compliance with regulatory measures involving age.

  9. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts; some of them are "true zeros", indicating that the drug-adverse event pairs cannot occur, and these are distinguished from the other zero counts, which are modeled zeros and simply indicate that the drug-adverse event pairs have not yet occurred or have not yet been reported. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.

  10. Linear regression techniques for use in the EC tracer method of secondary organic aerosol estimation

    NASA Astrophysics Data System (ADS)

    Saylor, Rick D.; Edgerton, Eric S.; Hartsell, Benjamin E.

    A variety of linear regression techniques and simple slope estimators are evaluated for use in the elemental carbon (EC) tracer method of secondary organic carbon (OC) estimation. Linear regression techniques based on ordinary least squares are not suitable for situations where measurement uncertainties exist in both regressed variables. In the past, regression based on the method of Deming [1943. Statistical Adjustment of Data. Wiley, London] has been the preferred choice for EC tracer method parameter estimation. In agreement with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], we find that in the limited case where primary non-combustion OC (OCnon-comb) is assumed to be zero, the ratio of averages (ROA) approach provides a stable and reliable estimate of the primary OC-EC ratio, (OC/EC)pri. In contrast with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], however, we find that the optimal use of Deming regression (and the more general York et al. [2004. Unified equations for the slope, intercept, and standard errors of the best straight line. American Journal of Physics 72, 367-375] regression) provides excellent results as well. For the more typical case where OCnon-comb is allowed to take a non-zero value, we find that regression based on the method of York is the preferred choice for EC tracer method parameter estimation. In the York regression technique, detailed information on uncertainties in the measurement of OC and EC is used to improve the linear best fit to the given data. If only limited information is available on the relative uncertainties of OC and EC, then Deming regression should be used. On the other hand, use of ROA in the estimation of secondary OC, and thus the assumption of a zero OCnon-comb value, generally leads to an overestimation of the contribution of secondary OC to total measured OC.
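
    A hedged sketch of the closed-form Deming regression slope with a known error-variance ratio, the kind of fit discussed above; the EC/OC values below are invented for illustration, and this is not the authors' implementation:

    ```python
    import numpy as np

    def deming_fit(x, y, delta=1.0):
        """Closed-form Deming regression of y on x, with known ratio
        delta = var(error_y) / var(error_x). Returns (slope, intercept)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        xbar, ybar = x.mean(), y.mean()
        sxx = np.mean((x - xbar) ** 2)
        syy = np.mean((y - ybar) ** 2)
        sxy = np.mean((x - xbar) * (y - ybar))
        slope = (syy - delta * sxx
                 + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
        return slope, ybar - slope * xbar

    # illustrative EC (x) and OC (y) concentrations, not real measurements
    ec = [0.5, 0.8, 1.1, 1.6, 2.0, 2.4]
    oc = [1.4, 2.1, 2.6, 3.8, 4.5, 5.3]
    print(deming_fit(ec, oc, delta=1.0))  # slope ~ (OC/EC)pri, intercept ~ OCnon-comb
    ```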

  11. Dosimetry and prescription in liver radioembolization with 90Y microspheres: 3D calculation of tumor-to-liver ratio from global 99mTc-MAA SPECT information

    NASA Astrophysics Data System (ADS)

    Mañeru, Fernando; Abós, Dolores; Bragado, Laura; Fuentemilla, Naiara; Caudepón, Fernando; Pellejero, Santiago; Miquelez, Santiago; Rubio, Anastasio; Goñi, Elena; Hernández-Vitoria, Araceli

    2017-12-01

    Dosimetry in liver radioembolization with 90Y microspheres is a fundamental tool, both for the optimization of each treatment and for improving knowledge of the treatment effects in the tissues. Different options are available for estimating the administered activity and the tumor/organ dose, among them the so-called partition method. The key factor in the partition method is the tumor/normal tissue activity uptake ratio (T/N), which is obtained by a single-photon emission computed tomography (SPECT) scan during a pre-treatment simulation. The less clear the distinction between healthy and tumor parenchyma within the liver, the more difficult it becomes to estimate the T/N ratio; therefore the use of the method is limited. This study presents a methodology to calculate the T/N ratio using global information from the SPECT. The T/N ratio is estimated by establishing uptake thresholds consistent with previously performed volumetry. This dose calculation method was validated against 3D voxel dosimetry, and was also compared with the standard partition method based on freehand regions of interest (ROI) outlining on SPECT slices. Both comparisons were done on a sample of 20 actual cases of hepatocellular carcinoma treated with resin microspheres. The proposed method and the voxel dosimetry method yield similar results, while the ROI-based method tends to over-estimate the dose to normal tissues. In addition, the variability associated with the ROI-based method is more extreme than the other methods. The proposed method is simpler than either the ROI or voxel dosimetry approaches and avoids the subjectivity associated with the manual selection of regions.

  12. Comprehensive analysis of proton range uncertainties related to stopping-power-ratio estimation using dual-energy CT imaging

    NASA Astrophysics Data System (ADS)

    Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.

    2017-09-01

    The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar, therefore the following results applied to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant factor contributing to uncertainty in the DECT approach was the imaging uncertainties, followed by the DECT modeling uncertainties. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Berry, M. L.; Grieme, M.

    We propose a localization-based radiation source detection (RSD) algorithm using the Ratio of Squared Distance (ROSD) method. Compared with the triangulation-based method, the advantages of the ROSD method are multi-fold: i) source location estimates based on four detectors are more accurate, ii) ROSD provides closed-form source location estimates and thus eliminates the imaginary-roots issue, and iii) ROSD produces a unique source location estimate, as opposed to two real roots (if any) in triangulation, and obviates the need to identify and discard phantom roots during clustering.

  14. Determining the slag fraction, water/binder ratio and degree of hydration in hardened cement pastes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yio, M.H.N., E-mail: marcus.yio11@imperial.ac.uk; Phelan, J.C.; Wong, H.S.

    2014-02-15

    A method for determining the original mix composition of hardened slag-blended cement-based materials based on analysis of backscattered electron images combined with loss on ignition measurements is presented. The method does not require comparison to reference standards or prior knowledge of the composition of the binders used. Therefore, it is well-suited for application to real structures. The method is also able to calculate the degrees of reaction of slag and cement. Results obtained from an experimental study involving sixty samples with a wide range of water/binder (w/b) ratios (0.30 to 0.50), slag/binder ratios (0 to 0.6) and curing ages (3 days to 1 year) show that the method is very promising. The mean absolute errors for the estimated slag, water and cement contents (kg/m³), w/b and s/b ratios were 9.1%, 1.5%, 2.5%, 4.7% and 8.7%, respectively. 91% of the estimated w/b ratios were within 0.036 of the actual values. Highlights: • A new method for estimating w/b ratio and slag content in cement pastes is proposed. • The method is also able to calculate the degrees of reaction of slag and cement. • Reference standards or prior knowledge of the binder composition are not required. • The method was tested on samples with varying w/b ratios and slag content.

  15. Estimation of indoor and outdoor ratios of selected volatile organic compounds in Canada

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Szyszkowicz, Mieczyslaw; Jovic, Branka; Cakmak, Sabit; Austin, Claire C.; Zhu, Jiping

    2016-09-01

    Indoor air and outdoor air concentration (I/O) ratio can be used to identify the origins of volatile organic compounds (VOCs). I/O ratios of 25 VOCs in Canada were estimated based on the data collected in various areas in Canada between September 2009 and December 2011. The indoor VOC data were extracted from the Canadian Health Measures Survey (CHMS). Outdoor VOC data were obtained from Canada's National Air Pollution Surveillance (NAPS) Network. The sampling locations covered nine areas in six provinces in Canada. Indoor air concentrations were found higher than outdoor air for all studied VOCs, except for carbon tetrachloride. Two different approaches were employed to estimate the I/O ratios; both approaches produced similar I/O values. The I/O ratios obtained from this study were similar to two other Canadian studies where indoor air and outdoor air of individual dwellings were measured. However, the I/O ratios found in Canada were higher than those in European cities and in two large USA cities, possibly due to the fact that the outdoor air concentrations recorded in the Canadian studies were lower. Possible source origins identified for the studied VOCs based on their I/O ratios were similar to those reported by others. In general, chlorinated hydrocarbons, short-chain (C5, C6) n-alkanes and benzene had significant outdoor sources, while long-chain (C10–C12) n-alkanes, terpenes, naphthalene and styrene had significant indoor sources. The remaining VOCs had mixed indoor and outdoor sources.

  16. Agreement between clinicians' and care givers' assessment of intelligence in Nigerian children with intellectual disability: 'ratio IQ' as a viable option in the absence of standardized 'deviance IQ' tests in sub-Saharan Africa.

    PubMed

    Bakare, Muideen O; Ubochi, Vincent N; Okoroikpa, Ifeoma N; Aguocha, Chinyere M; Ebigbo, Peter O

    2009-09-15

    There may be a need to assess intelligence quotient (IQ) scores in sub-Saharan African children with intellectual disability, either for educational needs assessment or for research. However, modern intelligence scales developed in the western parts of the world have limited applicability because of socio-cultural variations across the world. This study examined the agreement between IQ score estimates for Nigerian children with intellectual disability obtained from clinicians' judgment, based on the International Classification of Diseases, Tenth Edition (ICD-10) criteria for mental retardation, and from caregivers' judgment, based on 'ratio IQ' scores calculated from estimated mental age in the context of the socio-cultural milieu of the children. It proposes a viable option for IQ score assessment among sub-Saharan African children with intellectual disability, using the ratio of a culture-specific estimated mental age to the chronological age of the child, in the absence of standardized alternatives, given the great diversity in socio-cultural context of sub-Saharan Africa. Clinicians and caregivers independently assessed the children in relation to their socio-cultural background. Clinicians assessed the IQ scores of the children based on the ICD-10 diagnostic criteria for mental retardation. 'Ratio IQ' scores were calculated from the ratio of the estimated mental age to the chronological age of each child. The IQ scores as assessed by the clinicians were then compared with the 'ratio IQ' scores using correlation statistics. A total of forty-four (44) children with intellectual disability were assessed. There was a significant correlation between clinicians' assessed IQ scores and the 'ratio IQ' scores using zero-order correlation without controlling for the chronological age of the children (r = 0.47, df = 42, p = 0.001). First-order correlation controlling for the chronological age of the children showed a higher correlation between clinicians' assessed IQ scores and 'ratio IQ' scores (r = 0.75, df = 41, p < 0.001). Agreement between clinicians' assessed IQ scores and 'ratio IQ' scores was good. The 'ratio IQ' test would provide a viable option for assessing IQ scores in sub-Saharan African children with intellectual disability in the absence of culture-appropriate standardized intelligence scales, which is often the case because of the great diversity in socio-cultural structures of sub-Saharan Africa.
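
    The ratio IQ described here is the classical mental-age formulation; a tiny sketch with an invented example:

    ```python
    def ratio_iq(mental_age_years, chronological_age_years):
        """Classical ratio IQ: 100 * mental age / chronological age."""
        return 100.0 * mental_age_years / chronological_age_years

    # e.g. an estimated mental age of 6 years in a 10-year-old child
    print(ratio_iq(6, 10))   # 60.0
    ```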

  17. Combining tracer flux ratio methodology with low-flying aircraft measurements to estimate dairy farm CH4 emissions

    NASA Astrophysics Data System (ADS)

    Daube, C.; Conley, S.; Faloona, I. C.; Yacovitch, T. I.; Roscioli, J. R.; Morris, M.; Curry, J.; Arndt, C.; Herndon, S. C.

    2017-12-01

    Livestock activity (enteric fermentation of feed and anaerobic digestion of waste) contributes significantly to the methane budget of the United States (EPA, 2016). Studies question the reported magnitude of these methane sources (Miller et al., 2013), calling for more detailed research on agricultural animals (Hristov, 2014). Tracer flux ratio is an attractive experimental method to bring to this problem because it does not rely on estimates of atmospheric dispersion. Collection of data occurred during one week at two dairy farms in central California (June, 2016). Each farm varied in size, layout, head count, and general operation. The tracer flux ratio method involves releasing ethane on-site with a known flow rate to serve as a tracer gas. Downwind mixed enhancements in ethane (from the tracer) and methane (from the dairy) were measured, and their ratio was used to infer the unknown methane emission rate from the farm. An instrumented van drove transects downwind of each farm on public roads while tracer gases were released on-site, employing the tracer flux ratio methodology to assess simultaneous methane and tracer gas plumes. Flying circles around each farm, a small instrumented aircraft made measurements to perform a mass balance evaluation of methane gas. In the course of these two different methane quantification techniques, we were able to validate yet a third method: tracer flux ratio measured via aircraft. Ground-based tracer release rates were applied to the aircraft-observed methane-to-ethane ratios, yielding whole-site methane emission rates. Never before has the tracer flux ratio method been executed with aircraft measurements. Estimates from this new application closely resemble results from the standard ground-based technique to within their respective uncertainties. Incorporating this new dimension to the tracer flux ratio methodology provides additional context for local plume dynamics and validation of both ground and flight-based data.
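
    The core arithmetic of the tracer flux ratio method is a single scaling of the known tracer release rate by the observed downwind enhancement ratio. The sketch below shows that step; the release rate and background-subtracted enhancements are invented for illustration, and the mole-to-mass conversion is one common convention rather than the authors' stated procedure.

    def methane_emission_rate(tracer_release_rate_kg_h: float,
                              ch4_enhancement_ppb: float,
                              c2h6_enhancement_ppb: float,
                              ch4_molar_mass: float = 16.04,
                              c2h6_molar_mass: float = 30.07) -> float:
        """Infer the CH4 emission rate (kg/h) from a known ethane release rate.

        Enhancements are background-subtracted mole fractions measured in the downwind
        plume; the molar-mass factor converts the mole ratio to a mass ratio.
        """
        mole_ratio = ch4_enhancement_ppb / c2h6_enhancement_ppb
        return tracer_release_rate_kg_h * mole_ratio * (ch4_molar_mass / c2h6_molar_mass)

    # Hypothetical numbers: 2 kg/h ethane release, 150 ppb CH4 and 12 ppb C2H6 enhancements.
    print(f"{methane_emission_rate(2.0, 150.0, 12.0):.1f} kg CH4/h")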

  18. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  19. Determining the multi-scale hedge ratios of stock index futures using the lower partial moments method

    NASA Astrophysics Data System (ADS)

    Dai, Jun; Zhou, Haigang; Zhao, Shaoquan

    2017-01-01

    This paper considers a multi-scale futures hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. Then these parametric methods are compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by the parametric hedging models based on the features of the sequence distributions. In addition, if minimum-LPM is selected as a hedge target, the hedging periods, degree of risk aversion, and target returns can each affect the multi-scale hedge ratios and hedge efficiency.
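
    As a sketch of the minimum-LPM idea only (not the paper's wavelet decomposition or parametric estimators), the code below computes the empirical lower partial moment of a hedged return series spot - h*futures over a grid of hedge ratios h and picks the minimizer; the simulated returns, target, and moment order are illustrative assumptions.

    import numpy as np

    def lower_partial_moment(returns: np.ndarray, target: float = 0.0, order: int = 2) -> float:
        """Empirical LPM_n(target): E[max(target - r, 0) ** n]."""
        shortfall = np.maximum(target - returns, 0.0)
        return float(np.mean(shortfall ** order))

    def min_lpm_hedge_ratio(spot: np.ndarray, futures: np.ndarray,
                            target: float = 0.0, order: int = 2) -> float:
        """Grid-search the hedge ratio h that minimizes the LPM of spot - h*futures."""
        grid = np.linspace(0.0, 2.0, 2001)
        lpms = [lower_partial_moment(spot - h * futures, target, order) for h in grid]
        return float(grid[int(np.argmin(lpms))])

    rng = np.random.default_rng(0)
    futures_ret = rng.normal(0.0, 0.01, 1000)                      # illustrative futures returns
    spot_ret = 0.9 * futures_ret + rng.normal(0.0, 0.004, 1000)    # correlated spot returns
    print(min_lpm_hedge_ratio(spot_ret, futures_ret))              # should land near 0.9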

  20. Human variability in mercury toxicokinetics and steady state biomarker ratios.

    PubMed

    Bartell, S M; Ponce, R A; Sanga, R N; Faustman, E M

    2000-10-01

    Regulatory guidelines regarding methylmercury exposure depend on dose-response models relating observed mercury concentrations in maternal blood, cord blood, and maternal hair to developmental neurobehavioral endpoints. Generalized estimates of the maternal blood-to-hair, blood-to-intake, or hair-to-intake ratios are necessary for linking exposure to biomarker-based dose-response models. Most assessments have used point estimates for these ratios; however, significant interindividual and interstudy variability has been reported. For example, a maternal ratio of 250 ppm in hair per mg/L in blood is commonly used in models, but a 1990 WHO review reports mean ratios ranging from 140 to 370 ppm per mg/L. To account for interindividual and interstudy variation in applying these ratios to risk and safety assessment, some researchers have proposed representing the ratios with probability distributions and conducting probabilistic assessments. Such assessments would allow regulators to consider the range and likelihood of mercury exposures in a population, rather than limiting the evaluation to an estimate of the average exposure or a single conservative exposure estimate. However, no consensus exists on the most appropriate distributions for representing these parameters. We discuss published reviews of blood-to-hair and blood-to-intake steady state ratios for mercury and suggest statistical approaches for combining existing datasets to form generalized probability distributions for mercury distribution ratios. Although generalized distributions may not be applicable to all populations, they allow a more informative assessment than point estimates where individual biokinetic information is unavailable. Whereas development and use of these distributions will improve existing exposure and risk models, additional efforts in data generation and model development are required.
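
    A minimal sketch of the probabilistic use of a biomarker ratio described above: the maternal hair-to-blood ratio is drawn from a distribution rather than fixed at a point value. The lognormal form and its parameters below are assumptions chosen only to bracket the 140-370 range cited in the text, and the blood concentration is hypothetical.

    import numpy as np

    rng = np.random.default_rng(42)

    # Assumed lognormal distribution for the maternal hair-to-blood ratio (ppm per mg/L).
    median_ratio = 250.0
    gsd = 1.3                                  # assumed geometric standard deviation
    ratios = rng.lognormal(mean=np.log(median_ratio), sigma=np.log(gsd), size=100_000)

    blood_hg_mg_per_l = 0.004                  # hypothetical maternal blood concentration
    hair_hg_ppm = ratios * blood_hg_mg_per_l   # propagated distribution of hair concentrations

    lo, med, hi = np.percentile(hair_hg_ppm, [2.5, 50, 97.5])
    print(f"hair Hg (ppm): median {med:.2f}, 95% interval {lo:.2f}-{hi:.2f}")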

  1. QuantFusion: Novel Unified Methodology for Enhanced Coverage and Precision in Quantifying Global Proteomic Changes in Whole Tissues.

    PubMed

    Gunawardena, Harsha P; O'Brien, Jonathon; Wrobel, John A; Xie, Ling; Davies, Sherri R; Li, Shunqiang; Ellis, Matthew J; Qaqish, Bahjat F; Chen, Xian

    2016-02-01

    Single quantitative platforms such as label-based or label-free quantitation (LFQ) present compromises in accuracy, precision, protein sequence coverage, and speed of quantifiable proteomic measurements. To maximize the quantitative precision and the number of quantifiable proteins or the quantifiable coverage of tissue proteomes, we have developed a unified approach, termed QuantFusion, that combines the quantitative ratios of all peptides measured by both LFQ and label-based methodologies. Here, we demonstrate the use of QuantFusion in determining the proteins differentially expressed in a pair of patient-derived tumor xenografts (PDXs) representing two major breast cancer (BC) subtypes, basal and luminal. Label-based in-spectra quantitative peptides derived from amino acid-coded tagging (AACT, also known as SILAC) of a non-malignant mammary cell line were uniformly added to each xenograft with a constant predefined ratio, from which Ratio-of-Ratio estimates were obtained for the label-free peptides paired with AACT peptides in each PDX tumor. A mixed-model statistical analysis was used to determine global differential protein expression by combining complementary quantifiable peptide ratios measured by LFQ and Ratio-of-Ratios, respectively. With the minimum number of replicates required for obtaining statistically significant ratios, QuantFusion uses distinct mechanisms to "rescue" the missing data inherent to both LFQ and label-based quantitation. Combined quantifiable peptide data from both quantitative schemes increased the overall number of peptide-level measurements and protein-level estimates. In our analysis of the PDX tumor proteomes, QuantFusion increased the number of distinct peptide ratios by 65%, representing differentially expressed proteins between the BC subtypes. This quantifiable coverage improvement, in turn, not only increased the number of measurable protein fold-changes by 8% but also increased the average precision of quantitative estimates by 181%, so that some BC subtypically expressed proteins were rescued by QuantFusion. Thus, incorporating data from multiple quantitative approaches while accounting for measurement variability at both the peptide and global protein levels makes QuantFusion unique for obtaining increased coverage and quantitative precision for tissue proteomes. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  2. Automatic estimation of heart boundaries and cardiothoracic ratio from chest x-ray images

    NASA Astrophysics Data System (ADS)

    Dallal, Ahmed H.; Agarwal, Chirag; Arbabshirani, Mohammad R.; Patel, Aalpen; Moore, Gregory

    2017-03-01

    Cardiothoracic ratio (CTR) is a widely used radiographic index to assess heart size on chest X-rays (CXRs). Recent studies have suggested that two-dimensional CTR might also contain clinical information about heart function. However, manual measurement of such indices is both subjective and time-consuming. This study proposes a fast algorithm to automatically estimate CTR indices based on CXRs. The algorithm has three main steps: 1) model-based lung segmentation, 2) estimation of heart boundaries from lung contours, and 3) computation of cardiothoracic indices from the estimated boundaries. We extended a previously employed lung detection algorithm to automatically estimate heart boundaries without using ground truth heart markings. We used two datasets: a publicly available dataset with 247 images as well as a clinical dataset with 167 studies from Geisinger Health System. The models of lung fields are learned from both datasets. The lung regions in a given test image are estimated by registering the learned models to patient CXRs. Then, the heart region is estimated by applying the Harris operator on the segmented lung fields to detect the corner points corresponding to the heart boundaries. The algorithm calculates three indices, CTR1D, CTR2D, and cardiothoracic area ratio (CTAR). The method was tested on 103 clinical CXRs and average error rates of 7.9%, 25.5%, and 26.4% (for CTR1D, CTR2D, and CTAR, respectively) were achieved. The proposed method outperforms previous CTR estimation methods without using any heart templates. This method can have important clinical implications as it can provide fast and accurate estimates of cardiothoracic indices.
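
    For reference, the classical one-dimensional cardiothoracic ratio is the maximal transverse cardiac extent divided by the maximal internal thoracic extent; the sketch below computes it, plus an area-based ratio, from binary segmentation masks. The exact definitions of CTR2D and CTAR used in the paper are not given in the abstract, so the area-based index here is an assumption.

    import numpy as np

    def ctr_1d(heart_mask: np.ndarray, lung_mask: np.ndarray) -> float:
        """Classical CTR: widest heart extent / widest thoracic extent (column-wise)."""
        heart_cols = np.where(heart_mask.any(axis=0))[0]
        thorax_cols = np.where(lung_mask.any(axis=0))[0]
        heart_width = heart_cols.max() - heart_cols.min() + 1
        thorax_width = thorax_cols.max() - thorax_cols.min() + 1
        return heart_width / thorax_width

    def ctar(heart_mask: np.ndarray, lung_mask: np.ndarray) -> float:
        """Assumed area-based index: heart area / (heart + lung) area."""
        thorax_area = np.count_nonzero(lung_mask) + np.count_nonzero(heart_mask)
        return np.count_nonzero(heart_mask) / thorax_area

    # Toy example with rectangular masks (True = inside region).
    lungs = np.zeros((10, 20), dtype=bool); lungs[2:9, 2:18] = True
    heart = np.zeros((10, 20), dtype=bool); heart[4:9, 7:14] = True
    print(round(ctr_1d(heart, lungs), 2))   # -> 0.44 (7/16)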

  3. Estimated areal extent of colonies of black-tailed prairie dogs in the northern Great Plains

    USGS Publications Warehouse

    Sidle, John G.; Johnson, Douglas H.; Euliss, Betty R.

    2001-01-01

    During 1997–1998, we undertook an aerial survey, using a line-intercept technique, to estimate the extent of colonies of black-tailed prairie dogs (Cynomys ludovicianus) in the northern Great Plains states of Nebraska, North Dakota, South Dakota, and Wyoming. We stratified the survey based on knowledge of colony locations, computed two types of estimates for each stratum, and combined ratio estimates for high-density strata with average density estimates for low-density strata. Estimates of colony areas for black-tailed prairie dogs were derived from the average percentages of lines intercepting prairie dog colonies and from ratio estimators. We selected the best estimator based on the correlation between the length of transect line and the length of intercepted colonies. Active colonies of black-tailed prairie dogs occupied 2,377.8 ± 186.4 (SE) km2, whereas inactive colonies occupied 560.4 ± 89.2 km2. These data represent the first quantitative assessment of black-tailed prairie dog colonies in the northern Great Plains. The survey dispels popular notions that millions of hectares of colonies of black-tailed prairie dogs exist in the northern Great Plains and can form the basis for future survey efforts.
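
    The ratio-estimator step in such a line-intercept survey can be sketched as follows: the ratio of total intercepted colony length to total transect length in a stratum, scaled by the stratum area, estimates the colony area in that stratum. The transect lengths, intercept lengths, and stratum area below are invented for illustration and do not reproduce the study's stratification or variance estimation.

    import numpy as np

    def ratio_estimate_colony_area(intercept_km: np.ndarray,
                                   transect_km: np.ndarray,
                                   stratum_area_km2: float) -> float:
        """Ratio estimator: (sum of intercepted lengths / sum of transect lengths) * stratum area."""
        proportion = intercept_km.sum() / transect_km.sum()
        return proportion * stratum_area_km2

    transects = np.array([42.0, 55.0, 61.0, 48.0])   # hypothetical transect lengths (km)
    intercepts = np.array([0.8, 1.4, 0.6, 1.1])      # hypothetical colony intercepts (km)
    print(f"{ratio_estimate_colony_area(intercepts, transects, 25_000.0):.0f} km2")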

  4. Bivariate categorical data analysis using normal linear conditional multinomial probability model.

    PubMed

    Sun, Bingrui; Sutradhar, Brajendra

    2015-02-10

    Bivariate multinomial data such as the left and right eyes retinopathy status data are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretations of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal-type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.

  5. Sand/cement ratio evaluation on mortar using neural networks and ultrasonic transmission inspection.

    PubMed

    Molero, M; Segura, I; Izquierdo, M A G; Fuente, J V; Anaya, J J

    2009-02-01

    The quality and degradation state of building materials can be determined by nondestructive testing (NDT). These materials are composed of a cementitious matrix and particles or fragments of aggregates. The sand/cement ratio (s/c) determines the final material quality; however, the sand content can mask the matrix properties in a nondestructive measurement. Therefore, s/c ratio estimation is needed in nondestructive characterization of cementitious materials. In this study, a methodology to classify the sand content in mortar is presented. The methodology is based on ultrasonic transmission inspection, data reduction and feature extraction by principal component analysis (PCA), and neural network classification. This evaluation is carried out with several mortar samples made using different cement types and s/c ratios. The estimated s/c ratio is determined from ultrasonic spectral attenuation measured with three different broadband transducers (0.5, 1, and 2 MHz). PCA was applied to reduce the dimension of the captured traces. Feed-forward neural networks (NNs) are trained using the principal components (PCs), and their outputs are used to display the estimated s/c ratios in false-color images, showing the s/c ratio distribution of the mortar samples.
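
    A compact sketch of the PCA-plus-feed-forward-classifier pipeline described above, using scikit-learn on synthetic spectra; the spectra, the three assumed s/c classes, and the network size are placeholders, whereas the real study trains on ultrasonic spectral-attenuation features measured with three broadband transducers.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_per_class, n_freq_bins = 60, 200

    # Synthetic "attenuation spectra" for three assumed s/c classes:
    # each class gets a different spectral slope plus noise.
    slopes = [0.5, 1.0, 1.5]
    X = np.vstack([s * np.linspace(0, 1, n_freq_bins)
                   + rng.normal(0, 0.2, (n_per_class, n_freq_bins)) for s in slopes])
    y = np.repeat([0, 1, 2], n_per_class)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # PCA reduces each spectrum to a few principal components feeding a small MLP classifier.
    model = make_pipeline(PCA(n_components=5),
                          MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
    model.fit(X_tr, y_tr)
    print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")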

  6. Comparison of LCModel and SAGE in Analysis of Brain Metabolite Concentrations-A study of Patients with Mild Cognitive Impairment.

    PubMed

    Shih, Chiu-Ming; Lai, Jui-Jen; Chang, Chin-Ching; Chen, Cheng-Sheng; Yeh, Yi-Chun; Jaw, Twei-Shiun; Hsu, Jui-Sheng; Li, Chun-Wei

    2017-03-15

    The purpose of this study was to compare brain metabolite concentration ratios determined by LCModel and Spectroscopy Analysis by General Electric (SAGE) quantitative methods to elucidate the advantages and disadvantages of each method. A total of 10 healthy volunteers and 10 patients with mild cognitive impairment (MCI) were recruited in this study. A point-resolved spectroscopy (PRESS) sequence was used to obtain the brain magnetic resonance spectroscopy (MRS) spectra of the volunteers and patients, as well as the General Electric (GE) MRS-HD-sphere phantom. The brain metabolite concentration ratios were estimated based on the peak area obtained from both LCModel and SAGE software. Three brain regions were sampled for each volunteer or patient, and 20 replicates were acquired at different times for the phantom analysis. The metabolite ratios of the GE phantom were estimated to be myo-inositol (mI)/creatine (Cr): 0.70 ± 0.01, choline (Cho)/Cr: 0.37 ± 0.00, N-acetylaspartate (NAA)/Cr: 1.26 ± 0.02, and NAA/mI: 1.81 ± 0.04 by LCModel, and mI/Cr: 0.88 ± 0.15, Cho/Cr: 0.35 ± 0.01, NAA/Cr: 1.33 ± 0.03, and NAA/mI: 1.55 ± 0.26 by SAGE. In the healthy volunteers and MCI patients, the ratios of mI/Cr and Cho/Cr estimated by LCModel were higher than those estimated by SAGE. In contrast, the ratio of NAA/Cr estimated by LCModel was lower than that estimated by SAGE. Both methods were acceptable in estimating brain metabolite concentration ratios. However, LCModel was marginally more accurate than SAGE because of its full automation, basis set, and user independence.

  7. Characterization of X Chromosome Inactivation Using Integrated Analysis of Whole-Exome and mRNA Sequencing

    PubMed Central

    Szelinger, Szabolcs; Malenica, Ivana; Corneveaux, Jason J.; Siniard, Ashley L.; Kurdoglu, Ahmet A.; Ramsey, Keri M.; Schrauwen, Isabelle; Trent, Jeffrey M.; Narayanan, Vinodh; Huentelman, Matthew J.; Craig, David W.

    2014-01-01

    In females, X chromosome inactivation (XCI) is an epigenetic, gene-dosage compensatory mechanism achieved by inactivation of one copy of X in cells. Random XCI of one of the parental chromosomes results in an approximately equal proportion of cells expressing alleles from either the maternally or paternally inherited active X, and is defined by the XCI ratio. A skewed XCI ratio is suggestive of non-random inactivation, which can play an important role in X-linked genetic conditions. Current methods rely on indirect, semi-quantitative DNA methylation-based assays to estimate the XCI ratio. Here we report a direct approach to estimate the XCI ratio by integrated, family-trio-based whole-exome and mRNA sequencing using phase-by-transmission of alleles coupled with allele-specific expression analysis. We applied this method to in silico data and to a clinical patient with mild cognitive impairment but no clear diagnosis or understanding of the molecular mechanism underlying the phenotype. Simulation showed that phased and unphased heterozygous allele expression can be used to estimate the XCI ratio. Segregation analysis of the patient's exome uncovered a de novo, interstitial, 1.7 Mb deletion on Xp22.31 that originated on the paternally inherited X and had previously been associated with a heterogeneous neurological phenotype. Phased, allelic expression data suggested an 83:20 moderately skewed XCI that favored the expression of the maternally inherited, cytogenetically normal X and suggested that the deleterious effect of the de novo event on the paternal copy may be offset by skewed XCI that favors expression of the wild-type X. This study shows the utility of an integrated sequencing approach in XCI ratio estimation. PMID:25503791
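
    The XCI ratio itself reduces to an allele-specific expression calculation: across phased heterozygous X-linked sites, pool the RNA-seq reads supporting the maternally versus paternally inherited allele. The read counts below are fabricated, and simple read pooling is only one way such a ratio might be summarized.

    # Sketch: estimate an XCI ratio from phased allele-specific RNA-seq read counts.
    # Each tuple is (reads supporting the maternal allele, reads supporting the paternal allele)
    # at one phased heterozygous X-linked site; the numbers are invented.
    phased_counts = [(45, 11), (60, 14), (38, 9), (52, 13), (41, 10)]

    maternal = sum(m for m, _ in phased_counts)
    paternal = sum(p for _, p in phased_counts)
    total = maternal + paternal

    print(f"XCI ratio (maternal:paternal) ~ {100 * maternal / total:.0f}:{100 * paternal / total:.0f}")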

  8. Basic numerical capacities and prevalence of developmental dyscalculia: the Havana Survey.

    PubMed

    Reigosa-Crespo, Vivian; Valdés-Sosa, Mitchell; Butterworth, Brian; Estévez, Nancy; Rodríguez, Marisol; Santos, Elsa; Torres, Paul; Suárez, Ramón; Lage, Agustín

    2012-01-01

    The association of enumeration and number comparison capacities with arithmetical competence was examined in a large sample of children from 2nd to 9th grades. It was found that efficiency on numerical capacities separately predicted more than 25% of the variance in the individual differences on a timed arithmetical test, and this occurred for both younger and older learners. These capacities were also significant predictors of individual variations in an untimed curriculum-based math achievement test and on the teacher scores of math performance over developmental time. Based on these findings, these numerical capacities were used for estimating the prevalence and gender ratio of basic numerical deficits and developmental dyscalculia (DD) over the grade range defined above (N = 11,652 children). The extent to which DD affects the population with poor ability on calculation was also examined. For this purpose, the prevalence and gender ratio of arithmetical dysfluency (AD) were estimated in the same cohort. The estimated prevalence of DD was 3.4%, and the male:female ratio was 4:1. However, the prevalence of AD was almost 3 times as high (9.35%), and no gender differences were found (male:female ratio = 1.07:1). Basic numerical deficits affect 4.54% of the school-age population and affect more boys than girls (2.4:1). The differences between the corresponding estimates were highly significant (α < .01). Based on these contrastive findings, it is concluded that DD, defined as a defective sense of numerosity, could be a distinctive disorder that affects only a portion of children with AD.

  9. An improved estimator for the hydration of fat-free mass from in vivo measurements subject to additive technical errors.

    PubMed

    Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L

    2010-04-01

    The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.

  10. Estimation of Road Friction Coefficient in Different Road Conditions Based on Vehicle Braking Dynamics

    NASA Astrophysics Data System (ADS)

    Zhao, You-Qun; Li, Hai-Qing; Lin, Fen; Wang, Jian; Ji, Xue-Wu

    2017-07-01

    The accurate estimation of the road friction coefficient in active safety control systems has become increasingly prominent. Most previous studies on road friction estimation have used only vehicle longitudinal or lateral dynamics and often ignored load transfer, which tends to cause inaccurate estimates of the actual road friction coefficient. A novel method considering the load transfer of the front and rear axles is proposed to estimate the road friction coefficient based on a braking dynamic model of a two-wheeled vehicle. A sliding mode control technique is used to build the ideal braking torque controller, whose control target is to make the actual wheel slip ratio of the front and rear wheels track the ideal wheel slip ratio. In order to eliminate the chattering problem of the sliding mode controller, an integral switching surface is used to design the sliding mode surface. A second-order linear extended state observer is designed to observe the road friction coefficient based on the wheel speed and braking torque of the front and rear wheels. The proposed road friction coefficient estimation schemes are evaluated by simulation in ADAMS/Car. The results show that the estimated values agree well with the actual values in different road conditions. The observer can estimate the road friction coefficient accurately in real time and resist external disturbances. The proposed research provides a novel method to estimate the road friction coefficient with strong robustness and higher accuracy.
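
    The slip ratio being tracked by the braking-torque controller is straightforward to compute from vehicle speed and wheel speed; the sketch below shows that calculation only (the sliding-mode controller and extended state observer are not reproduced), and the braking-slip convention, wheel radius, and sample values are assumptions.

    def braking_slip_ratio(vehicle_speed_mps: float,
                           wheel_angular_speed_radps: float,
                           wheel_radius_m: float) -> float:
        """Longitudinal slip ratio during braking: (v - omega * r) / v.

        0 corresponds to free rolling and 1 to a fully locked wheel.
        """
        if vehicle_speed_mps <= 0.0:
            return 0.0
        return (vehicle_speed_mps - wheel_angular_speed_radps * wheel_radius_m) / vehicle_speed_mps

    # Hypothetical example: 20 m/s vehicle speed, wheel at 55 rad/s with a 0.32 m rolling radius.
    print(f"slip ratio = {braking_slip_ratio(20.0, 55.0, 0.32):.2f}")  # -> 0.12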

  11. [Cost-effectiveness analysis and diet quality index applied to the WHO Global Strategy].

    PubMed

    Machado, Flávia Mori Sarti; Simões, Arlete Naresse

    2008-02-01

    To test the use of cost-effectiveness analysis as a decision-making tool in the production of meals for the inclusion of the recommendations published in the World Health Organization's Global Strategy. Five alternative options for a breakfast menu were assessed prior to their adoption in a food service at a university in the state of Sao Paulo, Southeastern Brazil, in 2006. Costs of the different options were based on market prices of food items (direct cost). Health benefits were estimated based on an adaptation of the Diet Quality Index (DQI). Cost-effectiveness ratios were estimated by dividing benefits by costs, and incremental cost-effectiveness ratios were estimated as the cost differential per unit of additional benefit. The meal choice was based on health benefit units associated with direct production cost as well as incremental effectiveness per unit of differential cost. The analysis showed the simplest option with the addition of a fruit (DQI = 64 / cost = R$ 1.58) to be the best alternative. Higher effectiveness was seen in the options with a fruit portion (DQI1=64 / DQI3=58 / DQI5=72) compared to the others (DQI2=48 / DQI4=58). The estimation of cost-effectiveness ratios allowed identification of the best breakfast option based on cost-effectiveness analysis and the Diet Quality Index. These instruments allow easy application and objective evaluation, which are key to the process of including public or private institutions under the Global Strategy directives.
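
    The two ratios used in that analysis reduce to simple divisions; the sketch below computes a cost-effectiveness ratio (DQI points per monetary unit) and an incremental cost-effectiveness ratio between two menu options. Option 1's figures are taken from the abstract, while the comparator option's cost is hypothetical.

    def cost_effectiveness_ratio(benefit_dqi: float, cost: float) -> float:
        """Benefit per unit cost (here: DQI points per R$)."""
        return benefit_dqi / cost

    def incremental_ce_ratio(cost_a: float, benefit_a: float,
                             cost_b: float, benefit_b: float) -> float:
        """Incremental ratio: extra cost per additional unit of benefit when moving from A to B."""
        return (cost_b - cost_a) / (benefit_b - benefit_a)

    # Option 1 from the abstract: DQI = 64 at R$ 1.58; hypothetical comparator: DQI = 72 at R$ 2.10.
    print(f"CE ratio, option 1: {cost_effectiveness_ratio(64, 1.58):.1f} DQI points per R$")
    print(f"ICER, option 1 -> comparator: R$ {incremental_ce_ratio(1.58, 64, 2.10, 72):.3f} per DQI point")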

  12. Validation of proton stopping power ratio estimation based on dual energy CT using fresh tissue samples

    NASA Astrophysics Data System (ADS)

    Taasti, Vicki T.; Michalak, Gregory J.; Hansen, David C.; Deisher, Amanda J.; Kruse, Jon J.; Krauss, Bernhard; Muren, Ludvig P.; Petersen, Jørgen B. B.; McCollough, Cynthia H.

    2018-01-01

    Dual energy CT (DECT) has been shown, in theoretical and phantom studies, to improve the stopping power ratio (SPR) determination used for proton treatment planning compared to the use of single energy CT (SECT). However, it has not been shown that this also extends to organic tissues. The purpose of this study was therefore to investigate the accuracy of SPR estimation for fresh pork and beef tissue samples used as surrogates of human tissues. The reference SPRs for fourteen tissue samples, which included fat, muscle and femur bone, were measured using proton pencil beams. The tissue samples were subsequently CT scanned using four different scanners with different dual energy acquisition modes, giving in total six DECT-based SPR estimations for each sample. The SPR was estimated using a proprietary algorithm (syngo.via DE Rho/Z Maps, Siemens Healthcare, Forchheim, Germany) for extracting the electron density and the effective atomic number. SECT images were also acquired and SECT-based SPR estimations were performed using a clinical Hounsfield look-up table. The mean and standard deviation of the SPR over large volume-of-interests were calculated. For the six different DECT acquisition methods, the root-mean-square errors (RMSEs) for the SPR estimates over all tissue samples were between 0.9% and 1.5%. For the SECT-based SPR estimation the RMSE was 2.8%. For one DECT acquisition method, a positive bias was seen in the SPR estimates, having a mean error of 1.3%. The largest errors were found in the very dense cortical bone from a beef femur. This study confirms the advantages of DECT-based SPR estimation although good results were also obtained using SECT for most tissues.

  13. Estimating Transmissivity from the Water Level Fluctuations of a Sinusoidally Forced Well

    USGS Publications Warehouse

    Mehnert, E.; Valocchi, A.J.; Heidari, M.; Kapoor, S.G.; Kumar, P.

    1999-01-01

    The water levels in wells are known to fluctuate in response to earth tides and changes in atmospheric pressure. These water level fluctuations can be analyzed to estimate transmissivity (T). A new method to estimate transmissivity, which assumes that the atmospheric pressure varies in a sinusoidal fashion, is presented. Data analysis for this simplified method involves using a set of type curves and estimating the ratio of the amplitudes of the well response over the atmospheric pressure. Type curves for this new method were generated based on a model for ground water flow between the well and aquifer developed by Cooper et al. (1965). Data analysis with this method confirmed these published results: (1) the amplitude ratio is a function of transmissivity, the well radius, and the frequency of the sinusoidal oscillation; and (2) the amplitude ratio is a weak function of storativity. Compared to other methods, the developed method involves simpler, more intuitive data analysis and allows shorter data sets to be analyzed. The effect of noise on estimating the amplitude ratio was evaluated and found to be more significant at lower T. For aquifers with low T, noise was shown to mask the water level fluctuations induced by atmospheric pressure changes. In addition, reducing the length of the data series did not affect the estimate of T, but the variance of the estimate was higher for the shorter series of noisy data.

  14. Inverse Modeling of Tropospheric Methane Constrained by 13C Isotope in Methane

    NASA Astrophysics Data System (ADS)

    Mikaloff Fletcher, S. E.; Tans, P. P.; Bruhwiler, L. M.

    2001-12-01

    Understanding the budget of methane is crucial to predicting climate change and managing Earth's carbon reservoirs. Methane is responsible for approximately 15% of the anthropogenic greenhouse forcing and has a large impact on the oxidative capacity of Earth's atmosphere due to its reaction with hydroxyl radical. At present, many of the sources and sinks of methane are poorly understood, due in part to the large spatial and temporal variability of the methane flux. Model calculations of methane mixing ratios using most process-based source estimates typically over-predict the inter-hemispheric gradient of atmospheric methane. Inverse models, which estimate trace gas budgets by using observations of atmospheric mixing ratios and transport models to estimate sources and sinks, have been used to incorporate features of the atmospheric observations into methane budgets. While inverse models of methane generally tend to find a decrease in northern hemisphere sources and an increase in southern hemisphere sources relative to process-based estimates, no inverse study has definitively associated the inter-hemispheric gradient difference with a specific source process or group of processes. In this presentation, observations of isotopic ratios of 13C in methane and isotopic signatures of methane source processes are used in conjunction with an inverse model of methane to further constrain the source estimates of methane. In order to investigate the advantages of incorporating 13C, the TM3 three-dimensional transport model was used. The methane and carbon dioxide measurements used are from a cooperative international effort, the Cooperative Air Sampling Network, led by the Climate Monitoring Diagnostics Laboratory (CMDL) at the National Oceanic and Atmospheric Administration (NOAA). Experiments using model calculations based on process-based source estimates show that the inter-hemispheric gradient of δ13CH4 is not reproduced by these source estimates, showing that the addition of observations of δ13CH4 should provide unique insight into the methane problem.

  15. A kinetic model for estimating net photosynthetic rates of cos lettuce leaves under pulsed light.

    PubMed

    Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro

    2015-04-01

    Time-averaged net photosynthetic rate (Pn) under pulsed light (PL) is known to be affected by the PL frequency and duty ratio, even though the time-averaged photosynthetic photon flux density (PPFD) is unchanged. This phenomenon can be explained by considering that photosynthetic intermediates (PIs) are pooled during light periods and then consumed by partial photosynthetic reactions during dark periods. In this study, we developed a kinetic model to estimate Pn of cos lettuce (Lactuca sativa L. var. longifolia) leaves under PL based on the dynamics of the amount of pooled PIs. The model inputs are average PPFD, duty ratio, and frequency; the output is Pn. The rates of both PI accumulation and consumption at a given moment are assumed to be dependent on the amount of pooled PIs at that point. Required model parameters and three explanatory variables (average PPFD, frequency, and duty ratio) were determined for the simulation using Pn values under PL based on several combinations of the three variables. The model simulation for various PL levels with a wide range of time-averaged PPFDs, frequencies, and duty ratios further demonstrated that Pn under PL with high frequencies and duty ratios was comparable to, but did not exceed, Pn under continuous light, and also showed that Pn under PL decreased as either frequency or duty ratio was decreased. The developed model can be used to estimate Pn under various light environments where PPFD changes cyclically.

  16. Systematic review and meta-analysis estimating association of cysticercosis and neurocysticercosis with epilepsy.

    PubMed

    Debacq, Gabrielle; Moyano, Luz M; Garcia, Héctor H; Boumediene, Farid; Marin, Benoit; Ngoungou, Edgard B; Preux, Pierre-Marie

    2017-03-01

    We reviewed studies that analyzed cysticercosis (CC), neurocysticercosis (NCC) and epilepsy across Latin America, Asia and Sub-Saharan Africa, to estimate the odds ratio and etiologic fraction of epilepsy due to CC in tropical regions. We conducted a systematic review of the literature on cysticercosis and epilepsy in the tropics, collecting data from case-control and cross-sectional studies. Exposure criteria for CC included one or more of the following: serum ELISA or EITB positivity, presence of subcutaneous cysts (whether or not verified by histology), histology consistent with calcified cysts, and brain CT scan consistent with NCC. A common odds ratio was then estimated using meta-analysis. Thirty-seven studies from 23 countries were included (n = 24,646 subjects, 14,934 with epilepsy and 9,712 without epilepsy). Of these, 29 were case-control (14 matched). The association between CC and epilepsy was significant in 19 scientific articles. Odds ratios ranged from 0.2 to 25.4 (a posteriori power 4.5-100%) and the common odds ratio was 2.7 (95% CI 2.1-3.6, p < 0.001). Three subgroup analyses gave odds ratios of 2.2 (EITB-based studies), 3.2 (CT-based studies), and 1.9 (neurologist-confirmed epilepsy; door-to-door survey and at least one matched control per case). The etiologic fraction in the exposed group was estimated to be 63%. Despite differences in findings, this meta-analysis suggests that cysticercosis is a significant contributor to late-onset epilepsy in tropical regions around the world, and its impact may vary depending on transmission intensity.
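
    One standard way to obtain such a common odds ratio is inverse-variance pooling of study-level log odds ratios; the sketch below implements that fixed-effect weighting. Whether the authors used this weighting or another meta-analytic model is not stated in the abstract, and the 2x2 counts are invented.

    import math

    def pooled_odds_ratio(tables):
        """Fixed-effect (inverse-variance) pooled OR from 2x2 tables (a, b, c, d) =
        (exposed cases, unexposed cases, exposed controls, unexposed controls)."""
        num = den = 0.0
        for a, b, c, d in tables:
            log_or = math.log((a * d) / (b * c))
            var = 1 / a + 1 / b + 1 / c + 1 / d        # Woolf variance of the log OR
            weight = 1 / var
            num += weight * log_or
            den += weight
        pooled_log_or = num / den
        se = math.sqrt(1 / den)
        ci = (math.exp(pooled_log_or - 1.96 * se), math.exp(pooled_log_or + 1.96 * se))
        return math.exp(pooled_log_or), ci

    # Invented 2x2 tables for three hypothetical case-control studies.
    studies = [(40, 160, 20, 180), (55, 245, 30, 270), (25, 75, 15, 85)]
    print(pooled_odds_ratio(studies))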

  17. Hazard ratio estimation and inference in clinical trials with many tied event times.

    PubMed

    Mehrotra, Devan V; Zhang, Yiwei

    2018-06-13

    The medical literature contains numerous examples of randomized clinical trials with time-to-event endpoints in which large numbers of events accrued over relatively short follow-up periods, resulting in many tied event times. A generally common feature across such examples was that the logrank test was used for hypothesis testing and the Cox proportional hazards model was used for hazard ratio estimation. We caution that this common practice is particularly risky in the setting of many tied event times for two reasons. First, the estimator of the hazard ratio can be severely biased if the Breslow tie-handling approximation for the Cox model (the default in SAS and Stata software) is used. Second, the 95% confidence interval for the hazard ratio can include one even when the corresponding logrank test p-value is less than 0.05. To help establish a better practice, with applicability for both superiority and noninferiority trials, we use theory and simulations to contrast Wald and score tests based on well-known tie-handling approximations for the Cox model. Our recommendation is to report the Wald test p-value and corresponding confidence interval based on the Efron approximation. The recommended test is essentially as powerful as the logrank test, the accompanying point and interval estimates of the hazard ratio have excellent statistical properties even in settings with many tied event times, inferential alignment between the p-value and confidence interval is guaranteed, and implementation is straightforward using commonly used software. Copyright © 2018 John Wiley & Sons, Ltd.
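
    A sketch of the recommended comparison on a toy dataset with heavy ties, using statsmodels' proportional hazards implementation; the ties argument shown ('efron' versus the 'breslow' default) reflects my reading of that library's API and should be checked against the installed version, and the simulated data are illustrative only.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 400
    treatment = rng.integers(0, 2, n)                       # 0 = control, 1 = treated
    # Exponential event times with a true hazard ratio of about 0.6 for treatment.
    times = rng.exponential(scale=np.where(treatment == 1, 1.0 / 0.6, 1.0))
    times = np.ceil(times * 4) / 4                          # coarse rounding -> many tied event times
    status = (times < 3.0).astype(int)                      # administrative censoring at t = 3
    times = np.minimum(times, 3.0)

    for ties in ("breslow", "efron"):
        model = sm.PHReg(times, treatment.reshape(-1, 1), status=status, ties=ties)
        result = model.fit()
        hazard_ratio = float(np.exp(result.params[0]))
        print(f"{ties:8s} hazard ratio estimate: {hazard_ratio:.3f}")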

  18. 12 CFR 327.52 - Annual dividend determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the DIF reserve ratio as of December 31st of 2008 or any later year equals or exceeds 1.35 percent... dividend based upon the reserve ratio of the DIF as of December 31st of the preceding year, and the amount... ratio of the DIF equals or exceeds 1.35 percent of estimated insured deposits and does not exceed 1.50...

  19. Multiplication free neural network for cancer stem cell detection in H-and-E stained liver images

    NASA Astrophysics Data System (ADS)

    Badawi, Diaa; Akhan, Ece; Mallah, Ma'en; Üner, Ayşegül; Çetin-Atalay, Rengül; Çetin, A. Enis

    2017-05-01

    Markers such as CD13 and CD133 have been used to identify Cancer Stem Cells (CSC) in various tissue images. It is highly likely that CSC nuclei appear as brown in CD13 stained liver tissue images. We observe that there is a high correlation between the ratio of brown to blue colored nuclei in CD13 images and the ratio of dark blue to blue colored nuclei in H&E stained liver images. Therefore, we recommend that a pathologist observing many dark blue nuclei in an H&E stained tissue image may also order CD13 staining to estimate the CSC ratio. In this paper, we describe a computer vision method based on a neural network estimating the ratio of dark blue to blue colored nuclei in an H&E stained liver tissue image. The neural network structure is based on a multiplication-free operator using only additions and sign operations. Experimental results are presented.
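
    For orientation, one additive, sign-based operator used in this line of multiplication-free networks replaces the elementwise product x*y with sign(x)*y + sign(y)*x; whether this exact form matches the operator in the paper is an assumption, and the sketch below only demonstrates the operator, not the full nucleus-ratio network.

    import numpy as np

    def mf_op(x: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Multiplication-free surrogate for elementwise x*y: sign(x)*y + sign(y)*x.

        Multiplying by a sign is just a sign flip, so the result uses only additions
        and sign operations; it carries the sign of x*y with an l1-style magnitude |x| + |y|.
        """
        return np.sign(x) * y + np.sign(y) * x

    def mf_dot(w: np.ndarray, x: np.ndarray) -> float:
        """'Dot product' of weights and inputs built from the operator above."""
        return float(np.sum(mf_op(w, x)))

    w = np.array([0.5, -1.0, 2.0])
    x = np.array([1.0, 3.0, -0.5])
    print(mf_dot(w, x), np.dot(w, x))   # compare the surrogate with the true dot product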

  20. Application of a multiple scattering model to estimate optical depth, lidar ratio and ice crystal effective radius of cirrus clouds observed with lidar.

    NASA Astrophysics Data System (ADS)

    Gouveia, Diego; Baars, Holger; Seifert, Patric; Wandinger, Ulla; Barbosa, Henrique; Barja, Boris; Artaxo, Paulo; Lopes, Fabio; Landulfo, Eduardo; Ansmann, Albert

    2018-04-01

    Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS). We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.

  1. Standard and reduced radiation dose liver CT images: adaptive statistical iterative reconstruction versus model-based iterative reconstruction-comparison of findings and image quality.

    PubMed

    Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M

    2014-12-01

    To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) were prospectively included who underwent liver CT. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.

  2. Testing homogeneity of proportion ratios for stratified correlated bilateral data in two-arm randomized clinical trials.

    PubMed

    Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai

    2014-11-10

    Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on the score statistic generally performs well and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least squares estimate and the logarithmic transformation with the Mantel-Haenszel estimate are recommended as they do not involve any computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.

  3. Stemflow estimation in a redwood forest using model-based stratified random sampling

    Treesearch

    Jack Lewis

    2003-01-01

    Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...

  4. A Measure for the Reliability of a Rating Scale Based on Longitudinal Clinical Trial Data

    ERIC Educational Resources Information Center

    Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert

    2007-01-01

    A new measure for reliability of a rating scale is introduced, based on the classical definition of reliability, as the ratio of the true score variance and the total variance. Clinical trial data can be employed to estimate the reliability of the scale in use, whenever repeated measurements are taken. The reliability is estimated from the…

  5. Mapping surface energy balance components by combining Landsat Thematic Mapper and ground-based meteorological data

    NASA Technical Reports Server (NTRS)

    Moran, M. Susan; Jackson, Ray D.; Raymond, Lee H.; Gay, Lloyd W.; Slater, Philip N.

    1989-01-01

    Surface energy balance components were evaluated by combining satellite-based spectral data with on-site measurements of solar irradiance, air temperature, wind speed, and vapor pressure. Maps of latent heat flux density and net radiant flux density were produced using Landsat TM data for three dates. The TM-based estimates differed from Bowen-ratio and aircraft-based estimates by less than 12 percent over mature fields of cotton, wheat, and alfalfa.

  6. Comparison of radiographic and computed tomographic measurement of pedicle and vertebral body dimensions in Koreans: the ratio of pedicle transverse diameter to vertebral body transverse diameter.

    PubMed

    Kang, Ki Ser; Song, Kwang-Sup; Lee, Jong Seok; Yang, Jae Jun; Song, In Sup

    2011-03-01

    This study was designed to investigate the characteristics of pedicle transverse diameters (PD), vertebral body transverse diameters (VBD), and especially the ratio of PD to VBD (CT ratio), which has not previously been discussed, in Koreans using computed tomography (CT) scans, and to evaluate the possibility of obtaining more accurate estimations of PD from plain radiographs using the CT ratios at each spine level. The T1-L5 vertebrae of 50 participants were analyzed prospectively with CT scans (CT-VBD and CT-PD), and the T9-L5 vertebrae of the same participants were investigated with plain radiographs (X-VBD and X-PD). The CT ratio had a higher correlation with the CT-PD (r2 = 0.630) from T1 to L5, especially in the lower thoracic and lumbar spine (T9-L5, r2 = 0.737). The correlation of VBDs between the two radiologic tools (r2 = 0.896) was higher than that of the PDs (r2 = 0.665). Based on the data, an equation for estimating a more accurate PD from plain radiographs was developed: estimated PD = estimated VBD × the mean CT ratio at each spinal level, where estimated VBD = 1.014 × (X-VBD) + 0.152. The correlation between the estimated PD and the CT-PD (r2 = 0.852) was improved compared with that (r2 = 0.665) between the X-PD and the CT-PD. In conclusion, the CT ratio showed very similar changing trends to those of CT-PD from T1 to L5 regardless of sex and body mass, and the measurement error of PD from only plain radiographs could be minimized using the estimated VBD and the mean CT ratio at each spinal level.

  7. A new method to measure Bowen ratios using high-resolution vertical dry and wet bulb temperature profiles

    NASA Astrophysics Data System (ADS)

    Euser, T.; Luxemburg, W. M. J.; Everson, C. S.; Mengistu, M. G.; Clulow, A. D.; Bastiaanssen, W. G. M.

    2014-06-01

    The Bowen ratio surface energy balance method is a relatively simple method to determine the latent heat flux and the actual land surface evaporation. The Bowen ratio method is based on the measurement of air temperature and vapour pressure gradients. If these measurements are performed at only two heights, correctness of data becomes critical. In this paper we present the concept of a new measurement method to estimate the Bowen ratio based on vertical dry and wet bulb temperature profiles with high spatial resolution. A short field experiment with distributed temperature sensing (DTS) in a fibre optic cable with 13 measurement points in the vertical was undertaken. A dry and a wetted section of a fibre optic cable were suspended on a 6 m high tower installed over a sugar beet trial plot near Pietermaritzburg (South Africa). Using the DTS cable as a psychrometer, a near continuous observation of vapour pressure and air temperature at 0.20 m intervals was established. These data allowed the computation of the Bowen ratio with a high spatial and temporal precision. The daytime latent and sensible heat fluxes were estimated by combining the Bowen ratio values from the DTS-based system with independent measurements of net radiation and soil heat flux. The sensible heat flux, which is the relevant term to evaluate, derived from the DTS-based Bowen ratio (BR-DTS) was compared with that derived from co-located eddy covariance (R2 = 0.91), surface layer scintillometer (R2 = 0.81) and surface renewal (R2 = 0.86) systems. By using multiple measurement points instead of two, more confidence in the derived Bowen ratio values is obtained.
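
    The Bowen-ratio energy-balance partitioning that turns the profile-derived ratio into fluxes follows the standard relations beta = gamma * dT / de, LE = (Rn - G) / (1 + beta), and H = beta * LE; the sketch below applies them with invented gradients and fluxes and an assumed psychrometric constant.

    def bowen_ratio(delta_t_k: float, delta_e_kpa: float, gamma_kpa_per_k: float = 0.066) -> float:
        """Bowen ratio from vertical gradients over the same height interval: beta = gamma * dT / de."""
        return gamma_kpa_per_k * delta_t_k / delta_e_kpa

    def partition_energy(net_radiation_w: float, soil_heat_flux_w: float, beta: float):
        """Split the available energy (Rn - G) into latent (LE) and sensible (H) heat flux."""
        latent = (net_radiation_w - soil_heat_flux_w) / (1.0 + beta)
        sensible = beta * latent
        return latent, sensible

    # Hypothetical midday values: dT = 0.8 K and de = 0.12 kPa over the profile, Rn = 520 W/m2, G = 60 W/m2.
    beta = bowen_ratio(0.8, 0.12)
    le, h = partition_energy(520.0, 60.0, beta)
    print(f"beta = {beta:.2f}, LE = {le:.0f} W/m2, H = {h:.0f} W/m2")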

  8. A Comparison of Methods for Computing the Residual Resistivity Ratio of High-Purity Niobium

    PubMed Central

    Splett, J. D.; Vecchia, D. F.; Goodrich, L. F.

    2011-01-01

    We compare methods for estimating the residual resistivity ratio (RRR) of high-purity niobium and investigate the effects of using different functional models. RRR is typically defined as the ratio of the electrical resistances measured at 273 K (the ice point) and 4.2 K (the boiling point of helium at standard atmospheric pressure). However, pure niobium is superconducting below about 9.3 K, so the low-temperature resistance is defined as the normal-state (i.e., non-superconducting state) resistance extrapolated to 4.2 K and zero magnetic field. Thus, the estimated value of RRR depends significantly on the model used for extrapolation. We examine three models for extrapolation based on temperature versus resistance, two models for extrapolation based on magnetic field versus resistance, and a new model based on the Kohler relationship that can be applied to combined temperature and field data. We also investigate the possibility of re-defining RRR so that the quantity is not dependent on extrapolation. PMID:26989580

  9. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    NASA Technical Reports Server (NTRS)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

  10. Retinal vessel diameter and estimated cerebrospinal fluid pressure in arterial hypertension: the Beijing Eye Study.

    PubMed

    Jonas, Jost B; Wang, Ningli; Wang, Shuang; Wang, Ya Xing; You, Qi Sheng; Yang, Diya; Wei, Wen Bin; Xu, Liang

    2014-09-01

    Hypertensive retinal microvascular abnormalities include an increased retinal vein-to-artery diameter ratio. Because central retinal vein pressure depends on cerebrospinal fluid pressure (CSFP), we examined whether the retinal vein-to-artery diameter ratio and other retinal hypertensive signs are associated with CSFP. Participants of the population-based Beijing Eye Study (n = 1,574 subjects) underwent measurement of the temporal inferior and superior retinal artery and vein diameter. CSFP was calculated as 0.44 × body mass index (kg/m2) + 0.16 × diastolic blood pressure (mm Hg) - 0.18 × age (years) - 1.91. Larger retinal vein diameters and higher vein-to-artery diameter ratios were significantly associated with higher estimated CSFP (P = 0.001) in multivariable analysis. In contrast, temporal inferior retinal arterial diameter was marginally associated (P = 0.03) with estimated CSFP, and temporal superior artery diameter was not significantly associated (P = 0.10) with estimated CSFP; other microvascular abnormalities, such as arteriovenous crossing signs, were also not significantly associated with estimated CSFP. In a reverse manner, higher estimated CSFP as a dependent variable in the multivariable analysis was associated with wider retinal veins and higher vein-to-artery diameter ratio. In the same model, estimated CSFP was not significantly correlated with retinal artery diameters or other retinal microvascular abnormalities. Correspondingly, arterial hypertension was associated with retinal microvascular abnormalities such as arteriovenous crossing signs (P = 0.003), thinner temporal retinal arteries (P < 0.001), higher CSFP (P < 0.001), and wider retinal veins (P = 0.001) or, as a corollary, with a higher vein-to-artery diameter ratio in multivariable analysis. Wider retinal vein diameters are associated with higher estimated CSFP and vice versa. In arterial hypertension, an increased retinal vein-to-artery diameter ratio depends on elevated CSFP, which is correlated with blood pressure. © American Journal of Hypertension, Ltd 2014. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
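
    The estimated-CSFP formula quoted above is easy to restate as code; the example inputs are hypothetical, and the output unit is taken to be mm Hg (an assumption consistent with the blood-pressure term).

    def estimated_csfp(body_mass_index: float, diastolic_bp_mmhg: float, age_years: float) -> float:
        """Estimated cerebrospinal fluid pressure per the formula quoted in the abstract:
        0.44 * BMI + 0.16 * diastolic BP - 0.18 * age - 1.91 (BMI in kg/m2, BP in mm Hg)."""
        return 0.44 * body_mass_index + 0.16 * diastolic_bp_mmhg - 0.18 * age_years - 1.91

    # Hypothetical subject: BMI 26 kg/m2, diastolic BP 85 mm Hg, age 60 years.
    print(f"{estimated_csfp(26.0, 85.0, 60.0):.1f} mm Hg")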

  11. Estimation of average burnup of damaged fuels loaded in Fukushima Dai-ichi reactors by using the {sup 134}Cs/{sup 137}Cs ratio method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endo, T.; Sato, S.; Yamamoto, A.

    2012-07-01

    Average burnup of damaged fuels loaded in the Fukushima Dai-ichi reactors is estimated using the 134Cs/137Cs ratio method for measured radioactivities of 134Cs and 137Cs in contaminated soils within the range of 100 km from the Fukushima Dai-ichi nuclear power plants. As a result, the measured 134Cs/137Cs ratio from the contaminated soil is 0.996 ± 0.07 as of March 11, 2011. Based on the 134Cs/137Cs ratio method, the estimated burnup of damaged fuels is approximately 17.2 ± 1.5 GWd/tHM. It is noted that the various calculation codes (SRAC2006/PIJ, SCALE6.0/TRITON, and MVP-BURN) give almost the same evaluation values of the 134Cs/137Cs ratio with the same evaluated nuclear data library (ENDF-B/VII.0). The void fraction effect in the depletion calculation has a major impact on the 134Cs/137Cs ratio compared with the differences between JENDL-4.0 and ENDF-B/VII.0. (authors)

  12. Quantile-based bias correction and uncertainty quantification of extreme event attribution statements

    DOE PAGES

    Jeon, Soyoung; Paciorek, Christopher J.; Wehner, Michael F.

    2016-02-16

    Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
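
    The quantile-based rescaling idea can be sketched with simple empirical quantile matching: the observed event threshold is mapped to the model scale at the same quantile before the exceedance probabilities are compared. This is a minimal illustration of the general approach, not the authors' exact procedure, and the synthetic data are placeholders.

        import numpy as np

        def quantile_adjusted_risk_ratio(obs, model_factual, model_counterfactual, event_threshold):
            """Map the observed event threshold to the model scale via empirical
            quantiles, then estimate RR = P_factual / P_counterfactual."""
            q = np.mean(obs <= event_threshold)                 # non-exceedance probability in the observations
            model_threshold = np.quantile(model_factual, q)     # same quantile on the (possibly biased) model scale
            p_factual = np.mean(model_factual > model_threshold)
            p_counterfactual = np.mean(model_counterfactual > model_threshold)
            return np.inf if p_counterfactual == 0 else p_factual / p_counterfactual

        rng = np.random.default_rng(0)
        obs = rng.normal(25, 3, 1000)          # observed summer temperatures (hypothetical)
        model_all = rng.normal(27, 4, 5000)    # model world with anthropogenic forcing
        model_nat = rng.normal(26, 4, 5000)    # model world with natural forcing only
        print(round(quantile_adjusted_risk_ratio(obs, model_all, model_nat, event_threshold=31.0), 2))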

  13. An improved multi-paths optimization method for video stabilization

    NASA Astrophysics Data System (ADS)

    Qin, Tao; Zhong, Sheng

    2018-03-01

    For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one, while the cropping ratio and warping ratio of each frame should be kept within a proper range. In this paper we use an improved warping-based motion representation model and propose a Gaussian-based multi-path optimization method to obtain a smoothed path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform between adjacent frames according to the warping-based motion representation model. It works well on challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax: we calculate the space-time correlation of adjacent grid cells, and a Gaussian kernel is then used to weight the motion of adjacent grid cells. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos with casual jitter and parallax, and achieve good results.
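
    The Gaussian weighting of neighbouring frames described above can be illustrated with a simple one-dimensional path-smoothing sketch; the window radius and sigma below are illustrative choices, not the paper's settings.

        import numpy as np

        def gaussian_smooth_path(path, radius=15, sigma=5.0):
            """Smooth a 1-D camera-motion path with Gaussian weights over a temporal
            window, as a simplified stand-in for multi-path optimization."""
            offsets = np.arange(-radius, radius + 1)
            weights = np.exp(-(offsets ** 2) / (2.0 * sigma ** 2))
            weights = weights / weights.sum()
            smoothed = np.empty_like(path, dtype=float)
            for t in range(len(path)):
                idx = np.clip(t + offsets, 0, len(path) - 1)   # clamp the window at the ends
                smoothed[t] = np.dot(weights, path[idx])
            return smoothed

        jittery_path = np.cumsum(np.random.default_rng(1).normal(0, 1, 300))  # synthetic camera path
        stable_path = gaussian_smooth_path(jittery_path)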

  14. Agreement between clinicians' and care givers' assessment of intelligence in Nigerian children with intellectual disability: 'ratio IQ' as a viable option in the absence of standardized 'deviance IQ' tests in sub-Saharan Africa

    PubMed Central

    Bakare, Muideen O; Ubochi, Vincent N; Okoroikpa, Ifeoma N; Aguocha, Chinyere M; Ebigbo, Peter O

    2009-01-01

    Background There may be a need to assess intelligence quotient (IQ) scores in sub-Saharan African children with intellectual disability, either for educational needs assessment or for research. However, modern intelligence scales developed in the western parts of the world suffer from limited widespread use because of the influence of socio-cultural variations across the world. This study examined the agreement between IQ score estimates among Nigerian children with intellectual disability using clinicians' judgment based on the International Classification of Diseases, Tenth Edition (ICD-10) criteria for mental retardation and caregivers' judgment based on 'ratio IQ' scores calculated from estimated mental age in the context of the socio-cultural milieu of the children. It proposed a viable option for IQ score assessment among sub-Saharan African children with intellectual disability, using a ratio of culture-specific estimated mental age and chronological age of the child in the absence of standardized alternatives, borne out of the great diversity in the socio-cultural context of sub-Saharan Africa. Methods Clinicians and caregivers independently assessed the children in relation to their socio-cultural background. Clinicians assessed the IQ scores of the children based on the ICD-10 diagnostic criteria for mental retardation. 'Ratio IQ' scores were calculated from the ratio of estimated mental age and chronological age of each child. The IQ scores as assessed by the clinicians were then compared with the 'ratio IQ' scores using correlation statistics. Results A total of forty-four (44) children with intellectual disability were assessed. There was a significant correlation between clinicians' assessed IQ scores and the 'ratio IQ' scores employing zero-order correlation without controlling for the chronological age of the children (r = 0.47, df = 42, p = 0.001). First-order correlation controlling for the chronological age of the children showed a higher correlation between clinicians' assessed IQ scores and 'ratio IQ' scores (r = 0.75, df = 41, p = 0.000). Conclusion Agreement between clinicians' assessed IQ scores and 'ratio IQ' scores was good. The 'ratio IQ' test would provide a viable option for assessing IQ scores in sub-Saharan African children with intellectual disability in the absence of culture-appropriate standardized intelligence scales, which is often the case because of the great diversity in socio-cultural structures of sub-Saharan Africa. PMID:19754953
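
    The 'ratio IQ' described above is simply the ratio of estimated mental age to chronological age scaled by 100; a minimal sketch:

        def ratio_iq(estimated_mental_age_years, chronological_age_years):
            """Ratio IQ = (mental age / chronological age) * 100."""
            return 100.0 * estimated_mental_age_years / chronological_age_years

        # Hypothetical example: a 10-year-old child with an estimated mental age of 6 years.
        print(ratio_iq(6.0, 10.0))  # 60.0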

  15. A method to estimate weight and dimensions of large and small gas turbine engines

    NASA Technical Reports Server (NTRS)

    Onat, E.; Klees, G. W.

    1979-01-01

    A computerized method was developed to estimate weight and envelope dimensions of large and small gas turbine engines within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc. The development and justification of the selected method and the various methods of analysis are discussed.

  16. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher-order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
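
    A minimal sketch of the robust (modified) Poisson approach to estimating a risk ratio for a binary outcome, fitting a Poisson GLM with a sandwich (robust) covariance; the simulated data are illustrative and do not reproduce the study's simulation design.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 2000
        x = rng.binomial(1, 0.5, n)          # binary exposure
        p = 0.15 * np.exp(0.6 * x)           # true risk ratio exp(0.6) ~ 1.82
        y = rng.binomial(1, p)               # common binary outcome

        X = sm.add_constant(x)
        # Poisson GLM on the binary outcome with a robust (HC0) covariance matrix.
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
        print("estimated risk ratio:", float(np.exp(fit.params[1])))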

  17. [Dental manpower prediction in Israel for 2017].

    PubMed

    Vered, Y; Zini, A; Mann, J

    2010-07-01

    A recent study published by the authors indicated that, according to the Israeli Central Bureau of Statistics, in 2008 Israel had 5,800 active dentists, a figure well below that published by the Ministry of Health. Based on this figure and using the manpower-to-population ratio method, the following results were obtained: the predicted number of dentists in 2017 would be 6,090, based on the estimated number of Israeli graduates, the estimated number of dentists who would arrive in Israel as immigrants or as Israelis who studied abroad, an attrition rate of 3%, and the assumption that the number of dentists leaving the country is negligible. Based on the manpower-to-population ratio, by 2017 Israel would have 1 dentist per 1,400 population, a ratio which is still far above what many countries present, but high for Israel. This might reflect a dramatic change from employment in public clinics back to private practices. The results clearly indicate that a shortage of dentists is predicted in the near future, and major brainstorming is urgently required to evaluate these results.

  18. Compressive Sensing of Foot Gait Signals and Its Application for the Estimation of Clinically Relevant Time Series.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2016-07-01

    A new signal reconstruction algorithm for compressive sensing based on the minimization of a pseudonorm which promotes block-sparse structure on the first-order difference of the signal is proposed. Involved optimization is carried out by using a sequential version of Fletcher-Reeves' conjugate-gradient algorithm, and the line search is based on Banach's fixed-point theorem. The algorithm is suitable for the reconstruction of foot gait signals which admit block-sparse structure on the first-order difference. An additional algorithm for the estimation of stride-interval, swing-interval, and stance-interval time series from the reconstructed foot gait signals is also proposed. This algorithm is based on finding zero crossing indices of the foot gait signal and using the resulting indices for the computation of time series. Extensive simulation results demonstrate that the proposed signal reconstruction algorithm yields improved signal-to-noise ratio and requires significantly reduced computational effort relative to several competing algorithms over a wide range of compression ratio. For a compression ratio in the range from 88% to 94%, the proposed algorithm is found to offer improved accuracy for the estimation of clinically relevant time-series parameters, namely, the mean value, variance, and spectral index of stride-interval, stance-interval, and swing-interval time series, relative to its nearest competitor algorithm. The improvement in performance for compression ratio as high as 94% indicates that the proposed algorithms would be useful for designing compressive sensing-based systems for long-term telemonitoring of human gait signals.
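
    The time-series extraction step described above (stride, stance, and swing intervals from zero crossings of the reconstructed foot gait signal) can be sketched as follows; the sign convention (positive samples meaning the foot is loaded) and the pairing of crossings are assumptions made for illustration, not the authors' exact algorithm.

        import numpy as np

        def gait_intervals(signal, fs):
            """Estimate stride/stance/swing intervals (s) from zero crossings of a
            foot gait signal sampled at fs Hz. Assumes positive = foot loaded."""
            sign = np.sign(signal)
            up = np.where((sign[:-1] <= 0) & (sign[1:] > 0))[0]     # unloaded -> loaded (foot contact)
            down = np.where((sign[:-1] > 0) & (sign[1:] <= 0))[0]   # loaded -> unloaded (lift-off)
            stride = np.diff(up) / fs                               # contact-to-contact durations
            # Pair each contact with the next lift-off to get stance durations.
            stance = np.array([(down[down > u][0] - u) / fs for u in up if np.any(down > u)])
            m = min(len(stride), len(stance))
            swing = stride[:m] - stance[:m]
            return stride, stance, swing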

  19. Emission estimates of selected volatile organic compounds from tropical savanna burning in northern Australia

    NASA Astrophysics Data System (ADS)

    Shirai, T.; Blake, D. R.; Meinardi, S.; Rowland, F. S.; Russell-Smith, J.; Edwards, A.; Kondo, Y.; Koike, M.; Kita, K.; Machida, T.; Takegawa, N.; Nishi, N.; Kawakami, S.; Ogawa, T.

    2003-02-01

    Here we present measurements of a range of carbon-based compounds: carbon dioxide (CO2), carbon monoxide (CO), methane (CH4), nonmethane hydrocarbons (NMHCs), methyl halides, and dimethyl sulfide (DMS) emitted by Australian savanna fires studied as part of the Biomass Burning and Lightning Experiment (BIBLE) phase B aircraft campaign, which took place during the local late dry season (28 August to 13 September 1999). Significant enhancements of short-lived NMHCs were observed in the boundary layer (BL) over the region of intensive fires and indicate recent emissions for which the mean transport time was estimated to be about 9 hours. Emission ratios relative to CO were determined for 20 NMHCs, 3 methyl halides, DMS, and CH4 based on the BL enhancements in the source region. Tight correlations with CO were obtained for most of those compounds, indicating the homogeneity of the local savanna source. The emission ratios were in good agreement with some previous measurements of savanna fires for stable compounds but indicated the decay of emission ratios during transport for several reactive compounds. Based on the observed emission ratios, emission factors were derived and compared to previous studies. While emission factors (g species/kg dry matter) of CO2 varied little according to the vegetation types, those of CO and NMHCs varied significantly. Higher combustion efficiency and a lower emission factor for methane in this study, compared to forest fires, agreed well with results for savanna fires in other tropical regions. The amount of biomass burned was estimated by modeling methods using available satellite data, and showed that 1999 was an above-average year for savanna burning. The gross emissions of the trace gases from Australian savanna fires were estimated.

  20. Support vector regression methodology for estimating global solar radiation in Algeria

    NASA Astrophysics Data System (ADS)

    Guermoui, Mawloud; Rabehi, Abdelaziz; Gairaa, Kacem; Benkaciali, Said

    2018-01-01

    Accurate estimation of Daily Global Solar Radiation (DGSR) has been a major goal for solar energy applications. In this paper we show the possibility of developing a simple model based on Support Vector Regression (SVM-R), which could be used to estimate DGSR on the horizontal surface in Algeria based only on sunshine ratio as input. The SVM model has been developed and tested using a data set recorded over three years (2005-2007). The data was collected at the Applied Research Unit for Renewable Energies (URAER) in Ghardaïa city. The data collected in 2005-2006 are used to train the model, while the 2007 data are used to test the performance of the selected model. The measured and the estimated values of DGSR were compared statistically during the testing phase using the Root Mean Square Error (RMSE), relative Root Mean Square Error (rRMSE), and correlation coefficient (r²), which amounted to 1.59 MJ/m², 8.46, and 97.4%, respectively. The obtained results show that the SVM-R is highly qualified for DGSR estimation using only sunshine ratio.
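
    A minimal sketch of a support vector regression model with sunshine ratio as the only input, broadly in the spirit of the approach described above; the synthetic data, train/test split, and hyperparameters are placeholders rather than the study's configuration.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        sunshine_ratio = rng.uniform(0.2, 1.0, size=(600, 1))              # S/S0, hypothetical
        dgsr = 5 + 22 * sunshine_ratio.ravel() + rng.normal(0, 1.5, 600)   # MJ/m^2, synthetic

        train, test = slice(0, 400), slice(400, 600)
        model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
        model.fit(sunshine_ratio[train], dgsr[train])
        pred = model.predict(sunshine_ratio[test])
        rmse = np.sqrt(mean_squared_error(dgsr[test], pred))
        print("test RMSE (MJ/m^2):", round(float(rmse), 2))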

  1. An overall strategy based on regression models to estimate relative survival and model the effects of prognostic factors in cancer survival studies.

    PubMed

    Remontet, L; Bossard, N; Belot, A; Estève, J

    2007-05-10

    Relative survival provides a measure of the proportion of patients dying from the disease under study without requiring the knowledge of the cause of death. We propose an overall strategy based on regression models to estimate the relative survival and model the effects of potential prognostic factors. The baseline hazard was modelled until 10 years of follow-up using parametric continuous functions. Six models including cubic regression splines were considered, and the Akaike Information Criterion was used to select the final model. This approach yielded smooth and reliable estimates of the mortality hazard and allowed us to deal with sparse data taking into account all the available information. Splines were also used to model simultaneously non-linear effects of continuous covariates and time-dependent hazard ratios. This led to a graphical representation of the hazard ratio that can be useful for clinical interpretation. Estimates of these models were obtained by likelihood maximization. We showed that these estimates could also be obtained using standard algorithms for Poisson regression.

  2. Estimating adult sex ratios in nature.

    PubMed

    Ancona, Sergio; Dénes, Francisco V; Krüger, Oliver; Székely, Tamás; Beissinger, Steven R

    2017-09-19

    Adult sex ratio (ASR, the proportion of males in the adult population) is a central concept in population and evolutionary biology, and is also emerging as a major factor influencing mate choice, pair bonding and parental cooperation in both human and non-human societies. However, estimating ASR is fraught with difficulties stemming from the effects of spatial and temporal variation in the numbers of males and females, and detection/capture probabilities that differ between the sexes. Here, we critically evaluate methods for estimating ASR in wild animal populations, reviewing how recent statistical advances can be applied to handle some of these challenges. We review methods that directly account for detection differences between the sexes using counts of unmarked individuals (observed, trapped or killed) and counts of marked individuals using mark-recapture models. We review a third class of methods that do not directly sample the number of males and females, but instead estimate the sex ratio indirectly using relationships that emerge from demographic measures, such as survival, age structure, reproduction and assumed dynamics. We recommend that detection-based methods be used for estimating ASR in most situations, and point out that studies are needed that compare different ASR estimation methods and control for sex differences in dispersal. This article is part of the themed issue 'Adult sex ratios and reproductive decisions: a critical re-examination of sex differences in human and animal societies'.

  3. Using Appendicitis to Improve Estimates of Childhood Medicaid Participation Rates.

    PubMed

    Silber, Jeffrey H; Zeigler, Ashley E; Reiter, Joseph G; Hochman, Lauren L; Ludwig, Justin M; Wang, Wei; Calhoun, Shawna R; Pati, Susmita

    2018-03-23

    Administrative data are often used to estimate state Medicaid/Children's Health Insurance Program duration of enrollment and insurance continuity, but they are generally not used to estimate participation (the fraction of eligible children enrolled) because administrative data do not include reasons for disenrollment and cannot observe eligible never-enrolled children, causing estimates of the eligible unenrolled to be inaccurate. Analysts are therefore forced to either utilize survey information that is not generally linkable to administrative claims or rely on duration and continuity measures derived from administrative data and forgo estimating claims-based participation. We introduce appendectomy-based participation (ABP), which takes advantage of a natural experiment around statewide appendicitis admissions to estimate statewide participation rates from claims and improve the accuracy of participation rate estimates. We used the Medicaid Analytic eXtract (MAX) for 2008-2010 and the American Community Survey (ACS) for 2008-2010 from 43 states to calculate ABP, the continuity ratio, duration, and ACS-based participation. In the validation study, the median participation rate using ABP was 86% versus 87% for ACS-based participation estimates using logical edits and 84% without logical edits. The correlation between ABP and ACS with or without logical edits was 0.86 (P < .0001). Using regression analysis, ABP alone was a significant predictor of ACS (P < .0001) with or without logical edits, and adding duration and/or the continuity ratio did not significantly improve the model. Using the ABP rate derived from administrative claims (MAX) is a valid method to estimate statewide public insurance participation rates in children.

  4. Association between urinary biomarkers of total sugars intake and measures of obesity in a cross-sectional study

    PubMed Central

    Campbell, Rachel; Tasevska, Natasha; Jackson, Kim G.; Sagi-Kiss, Virag; di Paolo, Nick; Mindell, Jennifer S.; Lister, Susan J.; Khaw, Kay-Tee

    2017-01-01

    Obesity is an important modifiable risk factor for chronic diseases. While there is increasing focus on the role of dietary sugars, there remains a paucity of data establishing the association between sugar intake and obesity in the general public. The objective of this study was to investigate associations of estimated sugar intake with odds for obesity in a representative sample of English adults. We used data from 434 participants of the 2005 Health Survey of England. Biomarkers for total sugar intake were measured in 24 h urine samples and used to estimate intake. Linear and logistic regression analyses were used to investigate associations between biomarker-based estimated intake and measures of obesity (body mass index (BMI), waist circumference and waist-to-hip ratio) and obesity risk, respectively. Estimated sugar intake was significantly associated with BMI, waist circumference and waist-to-hip ratio; these associations remained significant after adjustment for estimated protein intake as a marker of non-sugar energy intake. Estimated sugar intake was also associated with increased odds for obesity based on BMI (OR 1.02; 95%CI 1.00–1.04 per 10g), waist circumference (1.03; 1.01–1.05) and waist-to-hip ratio (1.04; 1.02–1.06); all OR estimates remained significant after adjusting for estimated protein intake. Our results strongly support positive associations between total sugar intake, measures of obesity and likelihood of being obese. It is the first time that such an association has been shown in a nationally-representative sample of the general population using a validated biomarker. This biomarker could be used to monitor the efficacy of public health interventions to reduce sugar intake. PMID:28723954

  5. Association between urinary biomarkers of total sugars intake and measures of obesity in a cross-sectional study.

    PubMed

    Campbell, Rachel; Tasevska, Natasha; Jackson, Kim G; Sagi-Kiss, Virag; di Paolo, Nick; Mindell, Jennifer S; Lister, Susan J; Khaw, Kay-Tee; Kuhnle, Gunter G C

    2017-01-01

    Obesity is an important modifiable risk factor for chronic diseases. While there is increasing focus on the role of dietary sugars, there remains a paucity of data establishing the association between sugar intake and obesity in the general public. The objective of this study was to investigate associations of estimated sugar intake with odds for obesity in a representative sample of English adults. We used data from 434 participants of the 2005 Health Survey of England. Biomarkers for total sugar intake were measured in 24 h urine samples and used to estimate intake. Linear and logistic regression analyses were used to investigate associations between biomarker-based estimated intake and measures of obesity (body mass index (BMI), waist circumference and waist-to-hip ratio) and obesity risk, respectively. Estimated sugar intake was significantly associated with BMI, waist circumference and waist-to-hip ratio; these associations remained significant after adjustment for estimated protein intake as a marker of non-sugar energy intake. Estimated sugar intake was also associated with increased odds for obesity based on BMI (OR 1.02; 95%CI 1.00-1.04 per 10g), waist circumference (1.03; 1.01-1.05) and waist-to-hip ratio (1.04; 1.02-1.06); all OR estimates remained significant after adjusting for estimated protein intake. Our results strongly support positive associations between total sugar intake, measures of obesity and likelihood of being obese. It is the first time that such an association has been shown in a nationally-representative sample of the general population using a validated biomarker. This biomarker could be used to monitor the efficacy of public health interventions to reduce sugar intake.

  6. An internal reference model-based PRF temperature mapping method with Cramer-Rao lower bound noise performance analysis.

    PubMed

    Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng

    2009-11-01

    The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented with a multi-echo gradient echo (GRE) sequence using a fat signal as an internal reference to overcome these problems. The internal reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat, which contain temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map and thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of sample water:fat signal ratio on the accuracy of the temperature estimate is evaluated in a water-fat mixed phantom experiment with an optimal ratio of approximately 0.66:1.

  7. Do fossil plants signal palaeoatmospheric carbon dioxide concentration in the geological past?

    PubMed Central

    McElwain, J. C.

    1998-01-01

    Fossil, subfossil, and herbarium leaves have been shown to provide a morphological signal of the atmospheric carbon dioxide environment in which they developed by means of their stomatal density and index. An inverse relationship between stomatal density/index and atmospheric carbon dioxide concentration has been documented for all the studies to date concerning fossil and subfossil material. Furthermore, this relationship has been demonstrated experimentally by growing plants under elevated and reduced carbon dioxide concentrations. To date, the mechanism that controls the stomatal density response to atmospheric carbon dioxide concentration remains unknown. However, stomatal parameters of fossil plants have been successfully used as a proxy indicator of palaeo-carbon dioxide levels. This paper presents new estimates of palaeo-atmospheric carbon dioxide concentrations for the Middle Eocene (Lutetian), based on the stomatal ratios of fossil Lauraceae species from Bournemouth in England. Estimates of atmospheric carbon dioxide concentrations derived from stomatal data from plants of the Early Devonian, Late Carboniferous, Early Permian and Middle Jurassic ages are reviewed in the light of new data. Semi-quantitative palaeo-carbon dioxide estimates based on the stomatal ratio (a ratio of the stomatal index of a fossil plant to that of a selected nearest living equivalent) have in the past relied on the use of a Carboniferous standard. The application of a new standard based on the present-day carbon dioxide level is reported here for comparison. The resultant ranges of palaeo-carbon dioxide estimates made from standardized fossil stomatal ratio data are in good agreement with both carbon isotopic data from terrestrial and marine sources and long-term carbon cycle modelling estimates for all the time periods studied. These data indicate elevated atmospheric carbon dioxide concentrations during the Early Devonian, Middle Jurassic and Middle Eocene, and reduced concentrations during the Late Carboniferous and Early Permian. Such data are important in demonstrating the long-term responses of plants to changing carbon dioxide concentrations and in contributing to the database needed for general circulation model climatic analogues.
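
    The stomatal ratio calculation described above can be illustrated with a minimal sketch. The abstract defines the stomatal ratio as the stomatal index of the fossil plant relative to that of its nearest living equivalent; given the inverse relationship between stomatal index and carbon dioxide, one simple semi-quantitative inversion scales a present-day standard by the reciprocal of that ratio. The numbers and the exact inversion used here are illustrative assumptions, not the paper's calibration.

        def palaeo_co2_estimate(si_fossil, si_nearest_living_equivalent, co2_standard_ppm=360.0):
            """Semi-quantitative palaeo-CO2 estimate from the stomatal ratio.
            Stomatal ratio = SI_fossil / SI_NLE; a lower fossil SI relative to the
            nearest living equivalent implies higher palaeo-CO2 (hypothetical inversion)."""
            stomatal_ratio = si_fossil / si_nearest_living_equivalent
            return co2_standard_ppm / stomatal_ratio

        # Hypothetical example: fossil SI = 8%, nearest living equivalent SI = 12%.
        print(round(palaeo_co2_estimate(8.0, 12.0)))  # 540 ppm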

  8. Sex ratio estimations of loggerhead sea turtle hatchlings by histological examination and nest temperatures at Fethiye beach, Turkey

    NASA Astrophysics Data System (ADS)

    Kaska, Yakup; Ilgaz, Çetin; Özdemir, Adem; Başkale, Eyüp; Türkozan, Oğuz; Baran, Ibrahim; Stachowitsch, Michael

    2006-07-01

    Hatchling sex ratios in the loggerhead turtle (Caretta caretta) were estimated by placing electronic temperature recorders in 21 nests at Fethiye beach during 2000-2002. Over the seasons, the mean temperature in the middle third of the incubation period ranged from 26.7 to 32.1°C, and incubation periods ranged from 49 to 67 days. Based on the mean temperatures during the middle third of the incubation period, and on histologically sexed dead hatchlings, the sex ratios of hatchlings at Fethiye beach were roughly equal, i.e. 60-65% of the hatchlings were females. This contrasts with the highly female-skewed sex ratios in loggerhead turtles elsewhere; Fethiye has a relatively high proportion of male hatchlings. For endangered sea turtles, the knowledge of hatchling sex ratios at different beaches, coupled with appropriate conservation measures, can make an important contribution to their survival.

  9. Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods

    Treesearch

    Zhaodi Guo; Jingyun Fang; Yude Pan; Richard Birdsey

    2010-01-01

    Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...

  10. A Simultaneous Density-Integral System for Estimating Stem Profile and Biomass: Slash Pine and Willow Oak

    Treesearch

    Bernard R. Parresol; Charles E. Thomas

    1996-01-01

    In the wood utilization industry, both stem profile and biomass are important quantities. The two have traditionally been estimated separately. The introduction of a density-integral method allows for coincident estimation of stem profile and biomass, based on the calculus of mass theory, and provides an alternative to weight-ratio methodology. In the initial...

  11. Systematic review and meta-analysis estimating association of cysticercosis and neurocysticercosis with epilepsy

    PubMed Central

    Debacq, Gabrielle; Garcia, Héctor H.; Boumediene, Farid; Marin, Benoit; Ngoungou, Edgard B.; Preux, Pierre-Marie

    2017-01-01

    Background We reviewed studies that analyzed cysticercosis (CC), neurocysticercosis (NCC) and epilepsy across Latin America, Asia and Sub-Saharan Africa, to estimate the odds ratio and etiologic fraction of epilepsy due to CC in tropical regions. Methodology We conducted a systematic review of the literature on cysticercosis and epilepsy in the tropics, collecting data from case-control and cross-sectional studies. Exposure criteria for CC included one or more of the following: serum ELISA or EITB positivity, presence of subcutaneous cysts (both verified and unverified by histology), histology consistent with calcified cysts, and brain CT scan consistent with NCC. A common odds ratio was then estimated using meta-analysis. Principal findings 37 studies from 23 countries were included (n = 24,646 subjects, 14,934 with epilepsy and 9,712 without epilepsy). Of these, 29 were case-control (14 matched). The association between CC and epilepsy was significant in 19 scientific articles. Odds ratios ranged from 0.2 to 25.4 (a posteriori power 4.5–100%) and the common odds ratio was 2.7 (95% CI 2.1–3.6, p <0.001). Three subgroup analyses gave odds ratios of 2.2 (EITB-based studies), 3.2 (CT-based studies), and 1.9 (neurologist-confirmed epilepsy; door-to-door survey and at least one matched control per case). The etiologic fraction was estimated to be 63% in the exposed group. Significance Despite differences in findings, this meta-analysis suggests that cysticercosis is a significant contributor to late-onset epilepsy in tropical regions around the world, and its impact may vary depending on transmission intensity. PMID:28267746

  12. Parameter estimates in binary black hole collisions using neural networks

    NASA Astrophysics Data System (ADS)

    Carrillo, M.; Gracia-Linares, M.; González, J. A.; Guzmán, F. S.

    2016-10-01

    We present an algorithm based on artificial neural networks (ANNs) that estimates the mass ratio in a binary black hole collision from given gravitational wave (GW) strains. In this analysis, the ANN is trained with a sample of GW signals generated with numerical simulations. The effectiveness of the algorithm is evaluated with GWs, also generated with simulations, for mass ratios unknown to the ANN. We measure the accuracy of the algorithm in the interpolation and extrapolation regimes. We present the results for noise-free signals and signals contaminated with Gaussian noise, in order to assess how the accuracy of the method depends on the signal-to-noise ratio.

  13. Estimation of contribution ratios of pollutant sources to a specific section based on an enhanced water quality model.

    PubMed

    Cao, Bibo; Li, Chuan; Liu, Yan; Zhao, Yue; Sha, Jian; Wang, Yuqiu

    2015-05-01

    Because water quality monitoring sections or sites could reflect the water quality status of rivers, surface water quality management based on water quality monitoring sections or sites would be effective. For the purpose of improving water quality of rivers, quantifying the contribution ratios of pollutant sources to a specific section is necessary. Because physical and chemical processes of nutrient pollutants are complex in water bodies, it is difficult to quantitatively compute the contribution ratios. However, water quality models have proved to be effective tools to estimate surface water quality. In this project, an enhanced QUAL2Kw model with an added module was applied to the Xin'anjiang Watershed, to obtain water quality information along the river and to assess the contribution ratios of each pollutant source to a certain section (the Jiekou state-controlled section). Model validation indicated that the results were reliable. Then, contribution ratios were analyzed through the added module. Results show that among the pollutant sources, the Lianjiang tributary contributes the largest part of total nitrogen (50.43%), total phosphorus (45.60%), ammonia nitrogen (32.90%), nitrate (nitrite + nitrate) nitrogen (47.73%), and organic nitrogen (37.87%). Furthermore, contribution ratios in different reaches varied along the river. Compared with pollutant load ratios of different sources in the watershed, an analysis of contribution ratios of pollutant sources for each specific section, which takes the localized chemical and physical processes into consideration, was more suitable for local-regional water quality management. In summary, this method of analyzing the contribution ratios of pollutant sources to a specific section based on the QUAL2Kw model was found to support the improvement of the local environment.

  14. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with estimation of a source term in case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from the known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with the method with unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
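
    A minimal sketch of the linear inverse model y = Mx with a non-negativity constraint on the source term, using plain non-negative least squares instead of the Variational Bayes treatment described above; the SRS matrix, release profile, and noise level are synthetic placeholders.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(3)
        n_obs, n_src = 40, 12                        # dose rate measurements, release time segments
        M = rng.uniform(0, 1, size=(n_obs, n_src))   # synthetic source-receptor-sensitivity matrix
        x_true = np.zeros(n_src)
        x_true[3:6] = [2.0, 5.0, 1.5]                # hypothetical release over three segments
        y = M @ x_true + rng.normal(0, 0.05, n_obs)  # noisy observations

        x_hat, residual_norm = nnls(M, y)            # non-negative source term estimate
        print(np.round(x_hat, 2))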

  15. Prevalence of tuberculous infection and incidence of tuberculosis; a re-assessment of the Styblo rule

    PubMed Central

    van der Werf, MJ; Borgdorff, MW

    2008-01-01

    Abstract Objective To evaluate the validity of the fixed mathematical relationship between the annual risk of tuberculous infection (ARTI), the prevalence of smear-positive tuberculosis (TB) and the incidence of smear-positive TB specified as the Styblo rule, which TB control programmes use to estimate the incidence of TB disease at a population level and the case detection rate. Methods Population-based tuberculin surveys and surveys on prevalence of smear-positive TB since 1975 were identified through a literature search. For these surveys, the ratio between the number of tuberculous infections (based on ARTI estimates) and the number of smear-positive TB cases was calculated and compared to the ratio of 8 to 12 tuberculous infections per prevalent smear-positive TB case as part of the Styblo rule. Findings Three countries had national population-based data on both ARTI and prevalence of smear-positive TB for more than one point in time. In China the ratio ranged from 3.4 to 5.8, in the Philippines from 2.6 to 4.4, and in the Republic of Korea, from 3.2 to 4.7. All ratios were markedly lower than the ratio that is part of the Styblo rule. Conclusion According to recent country data, there are typically fewer than 8 to 12 tuberculous infections per prevalent smear-positive TB case, and it remains unclear whether this ratio varies significantly among countries. The decrease in the ratio compared to the Styblo rule probably relates to improvements in the prompt treatment of TB disease (by national TB programmes). A change in the number of tuberculous infections per prevalent smear-positive TB case in population-based surveys makes the assumed fixed mathematical relationship between ARTI and incidence of smear-positive TB no longer valid. PMID:18235886

  16. Past primary sex-ratio estimates of 4 populations of Loggerhead sea turtle based on TSP durations.

    NASA Astrophysics Data System (ADS)

    Monsinjon, Jonathan; Kaska, Yakup; Tucker, Tony; LeBlanc, Anne Marie; Williams, Kristina; Rostal, David; Girondot, Marc

    2016-04-01

    Ectothermic species are expected to be strongly affected by climate change, particularly those that exhibit temperature-dependent sex determination (TSD). However, predicting the embryonic response of such organisms to incubation-temperature variations in natural conditions remains challenging. In order to assess the vulnerability of sea turtles, primary sex-ratio estimates should be produced at pertinent ecological time and spatial scales. Although information on this important demographic parameter is one of the priorities for conservation purposes, an accurate methodology to produce such an estimate is still lacking. The most commonly used method uses incubation duration as a proxy for sex ratio. This method is inappropriate because temperature influences incubation duration during all of development, whereas sex is influenced by temperature during only part of development. The thermosensitive period of development for sex determination (TSP) lies in the middle third of development. A model of embryonic growth must be used to define precisely the position of the TSP at non-constant incubation temperatures. The thermal reaction norm for embryonic growth rate has been estimated for 4 distinct populations of the globally distributed and threatened marine turtle Caretta caretta. A thermal reaction norm describes the pattern of phenotypic expression of a single genotype across a range of temperatures. Moreover, incubation temperatures have been reconstructed for the last 35 years using a multi-correlative model with climate temperature. After embryonic development was modelled, we estimated the primary sex ratio based on the duration of the TSP. Our results suggest that loggerhead sea turtle nesting phenology is linked with the period within which both sexes can be produced in variable proportions. Several hypotheses will be discussed to explain why Caretta caretta could be more resilient to climate change than generally thought for sex determination.

  17. Adolescent suicidal behaviours in 32 low- and middle-income countries

    PubMed Central

    Gariépy, Geneviève; Sentenac, Mariane; Elgar, Frank J

    2016-01-01

    Abstract Objective To estimate prevalence of suicidal ideation and suicidal ideation with a plan in each surveyed country and to examine cross-national differences in associated risk factors. Methods We analysed data of students aged 13–17 years who participated in the 2003–2012 Global School-based Health Surveys in 32 countries, of which 29 are low- and middle-income. We used random effects meta-analysis to generate regional and overall pooled estimates. Multivariable logistic regression was used to estimate risk ratios for the associated risk factors. Population attributable fractions were estimated based on adjusted risk ratios and the prevalence of the determinants within each exposure level. Findings Across all countries, the pooled 12-month prevalence of suicidal ideation was 16.2% (95% confidence interval, CI: 15.6 to 16.7) among females and 12.2% (95% CI: 11.7 to 12.7) among males, and that of ideation with a plan was 8.3% (95% CI: 7.9 to 8.7) among females and 5.8% (95% CI: 5.5 to 6.1) among males. Suicidal ideation in the WHO Region of the Americas was higher in females than males, with an estimated prevalence ratio of 1.70 (95% CI: 1.60 to 1.81), while this ratio was 1.04 (95% CI: 0.98 to 1.10) in the WHO African Region. Factors associated with suicidal ideation in most countries included experiences of bullying and physical violence, loneliness, limited parental support and alcohol and tobacco use. Conclusion The prevalence of adolescent suicidal behaviours varies across countries, yet a consistent set of risk factors of suicidal behaviours emerged across all regions and most countries. PMID:27147764
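
    The population attributable fraction calculation mentioned above follows the standard formula PAF = p_e (RR - 1) / (1 + p_e (RR - 1)), where p_e is the prevalence of the exposure and RR the adjusted risk ratio; a minimal sketch with hypothetical inputs:

        def population_attributable_fraction(prevalence_exposed, risk_ratio):
            """Standard PAF formula for a single exposure level:
            PAF = p_e * (RR - 1) / (1 + p_e * (RR - 1))."""
            excess = prevalence_exposed * (risk_ratio - 1.0)
            return excess / (1.0 + excess)

        # Hypothetical example: 30% of adolescents exposed, adjusted risk ratio of 2.0.
        print(round(population_attributable_fraction(0.30, 2.0), 3))  # 0.231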

  18. Improved estimation of random vibration loads in launch vehicles

    NASA Technical Reports Server (NTRS)

    Mehta, R.; Erwin, E.; Suryanarayan, S.; Krishna, Murali M. R.

    1993-01-01

    Random vibration induced load is an important component of the total design load environment for payload and launch vehicle components and their support structures. The current approach to random vibration load estimation is based, particularly at the preliminary design stage, on the use of Miles' equation which assumes a single degree-of-freedom (DOF) system and white noise excitation. This paper examines the implications of the use of multi-DOF system models and response calculation based on numerical integration using the actual excitation spectra for random vibration load estimation. The analytical study presented considers a two-DOF system and brings out the effects of modal mass, damping and frequency ratios on the random vibration load factor. The results indicate that load estimates based on the Miles' equation can be significantly different from the more accurate estimates based on multi-DOF models.
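
    Miles' equation referenced above gives, for a single-degree-of-freedom system under white-noise base excitation, an RMS acceleration response of sqrt((pi/2) * f_n * Q * PSD(f_n)); the sketch below evaluates it with hypothetical inputs. A 3-sigma design load would be roughly three times the RMS value.

        import math

        def miles_grms(natural_freq_hz, q_factor, psd_g2_per_hz):
            """Miles' equation: RMS response (g) of a single-DOF system to white-noise
            base excitation, with the input PSD evaluated at the natural frequency."""
            return math.sqrt((math.pi / 2.0) * natural_freq_hz * q_factor * psd_g2_per_hz)

        # Hypothetical example: f_n = 100 Hz, Q = 10, input PSD = 0.04 g^2/Hz.
        print(round(miles_grms(100.0, 10.0, 0.04), 2))  # about 7.93 g RMS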

  19. Reconstruction and analysis of 137Cs fallout deposition patterns in the Marshall Islands.

    PubMed

    Whitcomb, Robert C

    2002-03-01

    137Cs deposition caused by fallout originating from nuclear weapons testing in the Marshall Islands has been estimated for several locations in the islands. These retrospective estimates are based primarily on historical exposure rate and gummed film measurements. The methods used to reconstruct these deposition estimates are similar to those used in the National Cancer Institute study for reconstructing 131I deposition from the Nevada Test Site. Reconstructed cumulative deposition estimates are validated against contemporary measurements of 137Cs concentration in soil, with account taken of estimated global fallout contributions. These validations show that the overall geometric bias in predicted-to-observed (P:O) ratios is 1.0 (indicating excellent agreement). The 5th to 95th percentile range of this distribution is 0.35-2.95. The P:O ratios for estimates using historical gummed film measurements tend to overpredict slightly more than estimates using exposure rate measurements. The agreement between estimates and measurements suggests that these methods can be used with confidence for other weapons-testing fallout radionuclides.

  20. Time-frequency domain SNR estimation and its application in seismic data processing

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Liu, Yang; Li, Xuxuan; Jiang, Nansen

    2014-08-01

    Based on an approach estimating frequency domain signal-to-noise ratio (FSNR), we propose a method to evaluate time-frequency domain signal-to-noise ratio (TFSNR). This method adopts short-time Fourier transform (STFT) to estimate instantaneous power spectrum of signal and noise, and thus uses their ratio to compute TFSNR. Unlike FSNR describing the variation of SNR with frequency only, TFSNR depicts the variation of SNR with time and frequency, and thus better handles non-stationary seismic data. By considering TFSNR, we develop methods to improve the effects of inverse Q filtering and high frequency noise attenuation in seismic data processing. Inverse Q filtering considering TFSNR can better solve the problem of amplitude amplification of noise. The high frequency noise attenuation method considering TFSNR, different from other de-noising methods, distinguishes and suppresses noise using an explicit criterion. Examples of synthetic and real seismic data illustrate the correctness and effectiveness of the proposed methods.
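
    A minimal sketch of a time-frequency SNR estimate built from the STFT, in the spirit of the approach described above: the instantaneous power spectrum of the record is compared against a noise power spectrum estimated from a noise-only segment. The estimator, the synthetic trace, and the choice of noise segment are illustrative assumptions, not the authors' exact formulation.

        import numpy as np
        from scipy.signal import stft

        def tfsnr(data, noise_segment, fs, nperseg=128):
            """Time-frequency SNR (dB): ratio of the record's STFT power to a noise
            power spectrum estimated from a noise-only segment."""
            f, t, Z = stft(data, fs=fs, nperseg=nperseg)
            _, _, Zn = stft(noise_segment, fs=fs, nperseg=nperseg)
            noise_power = np.mean(np.abs(Zn) ** 2, axis=1, keepdims=True)  # per-frequency noise power
            signal_power = np.abs(Z) ** 2
            return f, t, 10.0 * np.log10(signal_power / (noise_power + 1e-12))

        fs = 500.0
        time = np.arange(0, 2.0, 1.0 / fs)
        trace = (np.sin(2 * np.pi * 40 * time) * np.exp(-2 * time)
                 + 0.2 * np.random.default_rng(0).normal(size=time.size))
        f, t, snr_db = tfsnr(trace, trace[-200:], fs)   # last 0.4 s treated as noise-only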

  1. C:N:P Stoichiometry of New Production in the North Atlantic

    NASA Astrophysics Data System (ADS)

    Koeve, W.

    Recently and independently published estimates of global net community production, based on seasonal changes of either nutrients (NO3 and PO4) or dissolved inorganic carbon (DIC) in the surface ocean, indicate that the stoichiometry of new production strongly differs from the well-established remineralisation ratios in the deep ocean (the Redfield ratio). This difference appears to be most pronounced in the North Atlantic Ocean. Data quality issues as well as methodological differences in the data analysis applied in the published studies, however, make this comparison of nutrient- and carbon-based estimates ambiguous. In this presentation, historical data (World Ocean Atlas and Data 1998), data from the World Ocean Circulation Experiment, and empirical approaches are combined in a consistent way to provide a reassessment of the C:N:P elemental ratio of new (export) production in the North Atlantic. It is found that published nutrient budgets are severe underestimates and hence apparent C:N:P ratios were overestimated. At least in the North Atlantic, the uncertainty of the wintertime distribution of nutrients (and DIC) is a major source of the uncertainty of the C:N:P ratio of net community production.

  2. The Improved Estimation of Ratio of Two Population Proportions

    ERIC Educational Resources Information Center

    Solanki, Ramkrishna S.; Singh, Housila P.

    2016-01-01

    In this article, first we obtained the correct mean square error expression of Gupta and Shabbir's linear weighted estimator of the ratio of two population proportions. Later we suggested the general class of ratio estimators of two population proportions. The usual ratio estimator, Wynn-type estimator, Singh, Singh, and Kaur difference-type…

  3. County-level estimates of nitrogen and phosphorus from commercial fertilizer for the Conterminous United States, 1987–2006

    USGS Publications Warehouse

    Gronberg, Jo Ann M.; Spahr, Norman E.

    2012-01-01

    The U.S. Geological Survey’s National Water-Quality Assessment program requires nutrient input for analysis of the national and regional assessment of water quality. Detailed information on nutrient inputs to the environment are needed to understand and address the many serious problems that arise from excess nutrients in the streams and groundwater of the Nation. This report updates estimated county-level farm and nonfarm nitrogen and phosphorus input from commercial fertilizer sales for the conterminous United States for 1987 through 2006. Estimates were calculated from the Association of American Plant Food Control Officials fertilizer sales data, Census of Agriculture fertilizer expenditures, and U.S. Census Bureau county population. A previous national approach for deriving farm and nonfarm fertilizer nutrient estimates was evaluated, and a revised method for selecting representative states to calculate national farm and nonfarm proportions was developed. A national approach was used to estimate farm and nonfarm fertilizer inputs because not all states distinguish between farm and nonfarm use, and the quality of fertilizer reporting varies from year to year. For states that distinguish between farm and nonfarm use, the spatial distribution of the ratios of nonfarm-to-total fertilizer estimates for nitrogen and phosphorus calculated using the national-based farm and nonfarm proportions were similar to the spatial distribution of the ratios generated using state-based farm and nonfarm proportions. In addition, the relative highs and lows in the temporal distribution of farm and nonfarm nitrogen and phosphorus input at the state level were maintained—the periods of high and low usage coincide between national- and state-based values. With a few exceptions, nonfarm nitrogen estimates were found to be reasonable when compared to the amounts that would result if the lawn application rates recommended by state and university agricultural agencies were used. Also, states with higher nonfarm-to-total fertilizer ratios for nitrogen and phosphorus tended to have higher urban land-use percentages.

  4. DEVELOPMENT OF AN ARMY STATIONARY AXLE TEST STAND FOR LUBRICANT EFFICIENCY EVALUATION-PART II

    DTIC Science & Technology

    2017-01-13

    value was estimated based on the engine's maximum peak torque output, multiplied by the transmission's 1st gear ratio, high range transfer case ratio...efficiency test stand to allow for laboratory-based investigation of Fuel Efficient Gear Oils (FEGO) and their impact on vehicle efficiency. Development...their impact on vehicle efficiency. The test stand was designed and developed with the following goals: • Provide a lower cost alternative for

  5. Smartphone-Based Cardiac Rehabilitation Program: Feasibility Study.

    PubMed

    Chung, Heewon; Ko, Hoon; Thap, Tharoeun; Jeong, Changwon; Noh, Se-Eung; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

    We introduce a cardiac rehabilitation program (CRP) that utilizes only a smartphone, with no external devices. As an efficient guide for cardiac rehabilitation exercise, we developed an application to automatically indicate the exercise intensity by comparing the estimated heart rate (HR) with the target heart rate zone (THZ). The HR is estimated using video images of a fingertip taken by the smartphone's built-in camera. The introduced CRP app includes pre-exercise, exercise with intensity guidance, and post-exercise. In the pre-exercise period, information such as THZ, exercise type, exercise stage order, and duration of each stage are set up. In the exercise with intensity guidance, the app estimates HR from the pulse obtained using the smartphone's built-in camera and compares the estimated HR with the THZ. Based on this comparison, the app adjusts the exercise intensity to shift the patient's HR to the THZ during exercise. In the post-exercise period, the app manages the ratio of the estimated HR to the THZ and provides a questionnaire on factors such as chest pain, shortness of breath, and leg pain during exercise, as objective and subjective evaluation indicators. As a key issue, HR estimation upon signal corruption due to motion artifacts is also considered. Through the smartphone-based CRP, we estimated the HR accuracy as mean absolute error and root mean squared error of 6.16 and 4.30 bpm, respectively, with signal corruption due to motion artifacts being detected by combining the turning point ratio and kurtosis.

  6. Smartphone-Based Cardiac Rehabilitation Program: Feasibility Study

    PubMed Central

    Chung, Heewon; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

    We introduce a cardiac rehabilitation program (CRP) that utilizes only a smartphone, with no external devices. As an efficient guide for cardiac rehabilitation exercise, we developed an application to automatically indicate the exercise intensity by comparing the estimated heart rate (HR) with the target heart rate zone (THZ). The HR is estimated using video images of a fingertip taken by the smartphone’s built-in camera. The introduced CRP app includes pre-exercise, exercise with intensity guidance, and post-exercise. In the pre-exercise period, information such as THZ, exercise type, exercise stage order, and duration of each stage are set up. In the exercise with intensity guidance, the app estimates HR from the pulse obtained using the smartphone’s built-in camera and compares the estimated HR with the THZ. Based on this comparison, the app adjusts the exercise intensity to shift the patient’s HR to the THZ during exercise. In the post-exercise period, the app manages the ratio of the estimated HR to the THZ and provides a questionnaire on factors such as chest pain, shortness of breath, and leg pain during exercise, as objective and subjective evaluation indicators. As a key issue, HR estimation upon signal corruption due to motion artifacts is also considered. Through the smartphone-based CRP, we estimated the HR accuracy as mean absolute error and root mean squared error of 6.16 and 4.30 bpm, respectively, with signal corruption due to motion artifacts being detected by combining the turning point ratio and kurtosis. PMID:27551969

  7. Robust Tracking of Small Displacements with a Bayesian Estimator

    PubMed Central

    Dumont, Douglas M.; Byram, Brett C.

    2016-01-01

    Radiation-force-based elasticity imaging describes a group of techniques that use acoustic radiation force (ARF) to displace tissue in order to obtain qualitative or quantitative measurements of tissue properties. Because ARF-induced displacements are on the order of micrometers, tracking these displacements in vivo can be challenging. Previously, it has been shown that Bayesian-based estimation can overcome some of the limitations of a traditional displacement estimator like normalized cross-correlation (NCC). In this work, we describe a Bayesian framework that combines a generalized Gaussian-Markov random field (GGMRF) prior with an automated method for selecting the prior’s width. We then evaluate its performance in the context of tracking the micrometer-order displacements encountered in an ARF-based method like acoustic radiation force impulse (ARFI) imaging. The results show that bias, variance, and mean-square error performance vary with prior shape and width, and that an almost one order-of-magnitude reduction in mean-square error can be achieved by the estimator at the automatically-selected prior width. Lesion simulations show that the proposed estimator has a higher contrast-to-noise ratio but lower contrast than NCC, median-filtered NCC, and the previous Bayesian estimator, with a non-Gaussian prior shape having better lesion-edge resolution than a Gaussian prior. In vivo results from a cardiac, radiofrequency ablation ARFI imaging dataset show quantitative improvements in lesion contrast-to-noise ratio over NCC as well as the previous Bayesian estimator. PMID:26529761

  8. An Innovative Concept for Spacebased Lidar Measurement of Ocean Carbon Biomass

    NASA Technical Reports Server (NTRS)

    Hu, Yongxiang; Behrenfeld, Michael; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei

    2015-01-01

    Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of the diffuse attenuation coefficient Kd to the beam attenuation coefficient c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30 deg off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.

  9. Estimated SAGE II ozone mixing ratios in early 1993 and comparisons with Stratospheric Photochemistry, Aerosols and Dynamic Expedition measurements

    NASA Technical Reports Server (NTRS)

    Yue, G. K.; Veiga, R. E.; Poole, L. R.; Zawodny, J. M.; Proffitt, M. H.

    1994-01-01

    An empirical time-series model for estimating ozone mixing ratios based on Stratospheric Aerosols and Gas Experiment II (SAGE II) monthly mean ozone data for the period October 1984 through June 1991 has been developed. The modeling results for ozone mixing ratios in the 10- to 30- km region in early months of 1993 are presented. In situ ozone profiles obtained by a dual-beam UV-absorption ozone photometer during the Stratospheric Photochemistry, Aerosols and Dynamics Expedition (SPADE) campaign, May 1-14, 1993, are compared with the model results. With the exception of two profiles at altitudes below 16 km, ozone mixing ratios derived by the model and measured by the ozone photometer are in relatively good agreement within their individual uncertainties. The identified discrepancies in the two profiles are discussed.

  10. Estimating decades-long trends in petroleum field energy return on investment (EROI) with an engineering-based model.

    PubMed

    Tripathi, Vinay S; Brandt, Adam R

    2017-01-01

    This paper estimates changes in the energy return on investment (EROI) for five large petroleum fields over time using the Oil Production Greenhouse Gas Emissions Estimator (OPGEE). The modeled fields include Cantarell (Mexico), Forties (U.K.), Midway-Sunset (U.S.), Prudhoe Bay (U.S.), and Wilmington (U.S.). Data on field properties and production/processing parameters were obtained from a combination of government and technical literature sources. Key areas of uncertainty include details of the oil and gas surface processing schemes. We aim to explore how long-term trends in depletion at major petroleum fields change the effective energetic productivity of petroleum extraction. Four EROI ratios are estimated for each field as follows: The net energy ratio (NER) and external energy ratio (EER) are calculated, each using two measures of energy outputs, (1) oil-only and (2) all energy outputs. In all cases, engineering estimates of inputs are used rather than expenditure-based estimates (including off-site indirect energy use and embodied energy). All fields display significant declines in NER over the modeling period driven by a combination of (1) reduced petroleum production and (2) increased energy expenditures on recovery methods such as the injection of water, steam, or gas. The fields studied had NER reductions ranging from 46% to 88% over the modeling periods (accounting for all energy outputs). The reasons for declines in EROI differ by field. Midway-Sunset experienced a 5-fold increase in steam injected per barrel of oil produced. In contrast, Prudhoe Bay has experienced nearly a 30-fold increase in amount of gas processed and reinjected per unit of oil produced. In contrast, EER estimates are subject to greater variability and uncertainty due to the relatively small magnitude of external energy investments in most cases.

  11. Estimating decades-long trends in petroleum field energy return on investment (EROI) with an engineering-based model

    PubMed Central

    Tripathi, Vinay S.

    2017-01-01

    This paper estimates changes in the energy return on investment (EROI) for five large petroleum fields over time using the Oil Production Greenhouse Gas Emissions Estimator (OPGEE). The modeled fields include Cantarell (Mexico), Forties (U.K.), Midway-Sunset (U.S.), Prudhoe Bay (U.S.), and Wilmington (U.S.). Data on field properties and production/processing parameters were obtained from a combination of government and technical literature sources. Key areas of uncertainty include details of the oil and gas surface processing schemes. We aim to explore how long-term trends in depletion at major petroleum fields change the effective energetic productivity of petroleum extraction. Four EROI ratios are estimated for each field as follows: The net energy ratio (NER) and external energy ratio (EER) are calculated, each using two measures of energy outputs, (1) oil-only and (2) all energy outputs. In all cases, engineering estimates of inputs are used rather than expenditure-based estimates (including off-site indirect energy use and embodied energy). All fields display significant declines in NER over the modeling period driven by a combination of (1) reduced petroleum production and (2) increased energy expenditures on recovery methods such as the injection of water, steam, or gas. The fields studied had NER reductions ranging from 46% to 88% over the modeling periods (accounting for all energy outputs). The reasons for declines in EROI differ by field. Midway-Sunset experienced a 5-fold increase in steam injected per barrel of oil produced. In contrast, Prudhoe Bay has experienced nearly a 30-fold increase in amount of gas processed and reinjected per unit of oil produced. In contrast, EER estimates are subject to greater variability and uncertainty due to the relatively small magnitude of external energy investments in most cases. PMID:28178318
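
    The bookkeeping behind these ratios is, in essence, energy delivered in products divided by energy invested. The Python sketch below illustrates that arithmetic under the oil-only versus all-outputs distinction described above; the field-year numbers are invented for illustration, and the boundary choices are simplified relative to OPGEE's full accounting.

      # Illustrative-only energy flows (PJ per year for a hypothetical field-year).
      outputs = {"crude_oil": 420.0, "export_gas": 60.0, "ngl": 15.0}
      inputs_onsite = {"steam_generation": 55.0, "gas_reinjection": 30.0, "lifting_and_processing": 20.0}
      inputs_external = {"purchased_electricity": 6.0, "purchased_gas": 4.0}

      def ner(outputs, onsite, external, oil_only=False):
          # net energy ratio: products out / all energy invested (on-site plus external)
          out = outputs["crude_oil"] if oil_only else sum(outputs.values())
          return out / (sum(onsite.values()) + sum(external.values()))

      def eer(outputs, external, oil_only=False):
          # external energy ratio: products out / externally purchased energy only
          out = outputs["crude_oil"] if oil_only else sum(outputs.values())
          return out / sum(external.values())

      print(f"NER (all outputs): {ner(outputs, inputs_onsite, inputs_external):.1f}")
      print(f"NER (oil only):    {ner(outputs, inputs_onsite, inputs_external, oil_only=True):.1f}")
      print(f"EER (all outputs): {eer(outputs, inputs_external):.1f}")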

  12. Robust QRS detection for HRV estimation from compressively sensed ECG measurements for remote health-monitoring systems.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2018-03-15

    To present a new compressive sensing (CS)-based method for the acquisition of ECG signals and for robust estimation of heart-rate variability (HRV) parameters from compressively sensed measurements with high compression ratio. CS is used in the biosensor to compress the ECG signal. Estimation of the locations of QRS segments is carried out by applying two algorithms on the compressed measurements. The first algorithm reconstructs the ECG signal by enforcing a block-sparse structure on the first-order difference of the signal, so the transient QRS segments are significantly emphasized on the first-order difference of the signal. Multiple block-divisions of the signals are carried out with various block lengths, and multiple reconstructed signals are combined to enhance the robustness of the localization of the QRS segments. The second algorithm removes errors in the locations of QRS segments by applying low-pass filtering and morphological operations. The proposed CS-based method is found to be effective for the reconstruction of ECG signals by enforcing transient QRS structures on the first-order difference of the signal. It is demonstrated to be robust not only to high compression ratio but also to various artefacts present in ECG signals acquired by using on-body wireless sensors. HRV parameters computed by using the QRS locations estimated from the signals reconstructed with a compression ratio as high as 90% are comparable with that computed by using QRS locations estimated by using the Pan-Tompkins algorithm. The proposed method is useful for the realization of long-term HRV monitoring systems by using CS-based low-power wireless on-body biosensors.
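
    Once QRS locations have been recovered from the reconstructed signal, standard time-domain HRV parameters follow directly from the RR intervals. The Python sketch below computes two common ones (SDNN and RMSSD) from a list of R-peak sample indices; the sampling rate and the peak series are placeholders, and this is generic HRV arithmetic rather than the specific parameter set evaluated in the paper.

      import numpy as np

      def time_domain_hrv(r_peak_samples, fs):
          # SDNN and RMSSD (both in ms) from R-peak sample indices at sampling rate fs (Hz)
          rr_ms = np.diff(np.asarray(r_peak_samples)) * 1000.0 / fs
          sdnn = np.std(rr_ms, ddof=1)
          rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))
          return {"mean_rr_ms": rr_ms.mean(), "sdnn_ms": sdnn, "rmssd_ms": rmssd}

      # placeholder peaks: roughly 70 bpm with mild variability, fs = 360 Hz
      fs = 360
      samples_per_beat = 360 * 60 / (70 + 3 * np.sin(np.arange(60)))
      peaks = np.cumsum(samples_per_beat).astype(int)
      print(time_domain_hrv(peaks, fs))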

  13. Role of a plausible nuisance contributor in the declining obesity-mortality risks over time.

    PubMed

    Mehta, Tapan; Pajewski, Nicholas M; Keith, Scott W; Fontaine, Kevin; Allison, David B

    2016-12-15

    Recent analyses of epidemiological data including the National Health and Nutrition Examination Survey (NHANES) have suggested that the harmful effects of obesity may have decreased over calendar time. The shifting BMI distribution over time coupled with the application of fixed broad BMI categories in these analyses could be a plausible "nuisance contributor" to this observed change in the obesity-associated mortality over calendar time. To evaluate the extent to which observed temporal changes in the obesity-mortality association may be due to a shifting population distribution for body mass index (BMI), coupled with analyses based on static, broad BMI categories. Simulations were conducted using data from NHANES I and III linked with mortality data. Data from NHANES I were used to fit a "true" model treating BMI as a continuous variable. Coefficients estimated from this model were used to simulate mortality for participants in NHANES III. Hence, the population-level association between BMI and mortality in NHANES III was fixed to be identical to the association estimated in NHANES I. Hazard ratios (HRs) for obesity categories based on BMI for NHANES III with simulated mortality data were compared to the corresponding estimated HRs from NHANES I. Change in hazard ratios for simulated data in NHANES III compared to observed estimates from NHANES I. On average, hazard ratios for NHANES III based on simulated mortality data were 29.3% lower than the estimates from NHANES I using observed mortality follow-up. This reduction accounted for roughly three-fourths of the apparent decrease in the obesity-mortality association observed in a previous analysis of these data. Some of the apparent diminution of the association between obesity and mortality may be an artifact of treating BMI as a categorical variable. Copyright © 2016. Published by Elsevier Inc.

  14. System technology analysis of aeroassisted orbital transfer vehicles: Moderate lift/drag (0.75-1.5). Volume 3: Cost estimates and work breakdown structure/dictionary, phase 1 and 2

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Technology payoffs of representative ground based (Phase 1) and space based (Phase 2) mid lift/drag ratio aeroassisted orbit transfer vehicles (AOTV) were assessed and prioritized. A narrative summary of the cost estimates and work breakdown structure/dictionary for both study phases is presented. Costs were estimated using the Grumman Space Programs Algorithm for Cost Estimating (SPACE) computer program and results are given for four AOTV configurations. The work breakdown structure follows the standard of the joint government/industry Space Systems Cost Analysis Group (SSCAG). A table is provided which shows cost estimates for each work breakdown structure element.

  15. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
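
    The principle being exploited is that, for a real constant-amplitude sinusoid, consecutive zero-crossings are separated by roughly half a period, so crossing times measured over a short window give a local frequency estimate. The Python sketch below implements only that basic fixed-window zero-crossing estimator with linearly interpolated crossing times; the adaptive window selection by intersection of confidence intervals described above is not reproduced here.

      import numpy as np

      def zero_crossings(x, fs):
          # zero-crossing times (s), linearly interpolated between samples
          x = np.asarray(x, dtype=float)
          idx = np.where(np.sign(x[:-1]) * np.sign(x[1:]) < 0)[0]
          frac = x[idx] / (x[idx] - x[idx + 1])
          return (idx + frac) / fs

      def if_from_crossings(x, fs):
          # piecewise IF estimate: consecutive crossings are about half a period apart
          tz = zero_crossings(x, fs)
          f = 0.5 / np.diff(tz)                    # Hz, one estimate per crossing interval
          t = 0.5 * (tz[:-1] + tz[1:])             # midpoint of each interval
          return t, f

      # toy example: linear chirp whose IF rises from 50 Hz to 100 Hz over 1 s
      fs = 8000
      t = np.arange(0, 1, 1 / fs)
      x = np.sin(2 * np.pi * (50 * t + 25 * t ** 2))     # IF = 50 + 50 t
      tm, fe = if_from_crossings(x, fs)
      print(fe[:3], fe[-3:])                              # ~50 Hz at the start, ~100 Hz at the end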

  16. Noise stochastic corrected maximum a posteriori estimator for birefringence imaging using polarization-sensitive optical coherence tomography

    PubMed Central

    Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki

    2017-01-01

    This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator described in this paper is based on the relationship between probability distribution functions of the measured birefringence and the effective signal to noise ratio (ESNR) as well as the true birefringence and the true ESNR. The Monte Carlo method is used to numerically describe this relationship and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. Improved estimation is shown for the new estimator with stochastic model of ESNR in comparison to the old estimator, both based on the Jones matrix noise model. A comparison with the mean estimator is also done. Numerical simulation validates the superiority of the new estimator. The superior performance of the new estimator was also shown by in vivo measurement of optic nerve head. PMID:28270974

  17. Epidemiologic research using probabilistic outcome definitions.

    PubMed

    Cai, Bing; Hennessy, Sean; Lo Re, Vincent; Small, Dylan S

    2015-01-01

    Epidemiologic studies using electronic healthcare data often define the presence or absence of binary clinical outcomes by using algorithms with imperfect specificity, sensitivity, and positive predictive value. This results in misclassification and bias in study results. We describe and evaluate a new method called probabilistic outcome definition (POD) that uses logistic regression to estimate the probability of a clinical outcome using multiple potential algorithms and then uses multiple imputation to make valid inferences about the risk ratio or other epidemiologic parameters of interest. We conducted a simulation to evaluate the performance of the POD method with two variables that can predict the true outcome and compared the POD method with the conventional method. The simulation results showed that when the true risk ratio is equal to 1.0 (null), the conventional method based on a binary outcome provides unbiased estimates. However, when the risk ratio is not equal to 1.0, the traditional method, either using one predictive variable or both predictive variables to define the outcome, is biased when the positive predictive value is <100%, and the bias is very severe when the sensitivity or positive predictive value is poor (less than 0.75 in our simulation). In contrast, the POD method provides unbiased estimates of the risk ratio both when this measure of effect is equal to 1.0 and not equal to 1.0. Even when the sensitivity and positive predictive value are low, the POD method continues to provide unbiased estimates of the risk ratio. The POD method provides an improved way to define outcomes in database research. This method has a major advantage over the conventional method in that it provided unbiased estimates of risk ratios and it is easy to use. Copyright © 2014 John Wiley & Sons, Ltd.
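
    As a rough illustration of the POD idea, the Python sketch below fits a logistic model for the probability of the true outcome given two imperfect algorithm flags (plus the exposure, as is standard practice for imputation models), imputes the outcome several times in the full cohort, and pools the log risk ratios. All data are simulated, the variable names are hypothetical, and this is a schematic of the approach rather than the authors' implementation.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 20000
      exposed = rng.integers(0, 2, n)
      p_true = np.where(exposed == 1, 0.10, 0.05)             # true risk ratio = 2.0
      y_true = rng.binomial(1, p_true)

      # two imperfect claims-based algorithms with limited sensitivity / specificity
      alg1 = np.where(y_true == 1, rng.binomial(1, 0.80, n), rng.binomial(1, 0.03, n))
      alg2 = np.where(y_true == 1, rng.binomial(1, 0.70, n), rng.binomial(1, 0.05, n))
      X = np.column_stack([alg1, alg2, exposed])              # imputation model includes the exposure

      # outcome-probability model fitted where the truth is known (validation subset)
      val = rng.random(n) < 0.10
      model = LogisticRegression().fit(X[val], y_true[val])
      p_hat = model.predict_proba(X)[:, 1]

      # multiple imputation of the outcome, pooling log risk ratios
      log_rr = []
      for _ in range(20):
          y_imp = rng.binomial(1, p_hat)
          rr = y_imp[exposed == 1].mean() / y_imp[exposed == 0].mean()
          log_rr.append(np.log(rr))
      print("pooled RR estimate:", np.exp(np.mean(log_rr)))    # should be near 2.0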

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J.; Kucharek, H.; Möbius, E.

    In this paper we report on a two-year study to estimate the Ne/O abundance ratio in the gas phase of the local interstellar cloud (LIC). Based on the first two years of observations with the Interstellar Boundary Explorer, we determined the fluxes of interstellar neutral (ISN) O and Ne atoms at the Earth's orbit in spring 2009 and 2010. A temporal variation of the Ne/O abundance ratio at the Earth's orbit could be expected due to solar cycle-related effects such as changes of ionization. However, this study shows that there is no significant change in the Ne/O ratio at the Earth's orbit from 2009 to 2010. We used time-dependent survival probabilities of the ISNs to calculate the Ne/O abundance ratio at the termination shock. Then we estimated the Ne/O abundance ratio in the gas phase of the LIC with the use of filtration factors and the ionization fractions. From our analysis, the Ne/O abundance ratio in the LIC is 0.33 ± 0.07, which is in agreement with the abundance ratio inferred from pickup-ion measurements.

  19. Estimating phosphorus availability for microbial growth in an emerging landscape

    USGS Publications Warehouse

    Schmidt, S.K.; Cleveland, C.C.; Nemergut, D.R.; Reed, S.C.; King, A.J.; Sowell, P.

    2011-01-01

    Estimating phosphorus (P) availability is difficult—particularly in infertile soils such as those exposed after glacial recession—because standard P extraction methods may not mimic biological acquisition pathways. We developed an approach, based on microbial CO2 production kinetics and conserved carbon:phosphorus (C:P) ratios, to estimate the amount of P available for microbial growth in soils and compared this method to traditional, operationally-defined indicators of P availability. Along a primary succession gradient in the High Andes of Perú, P additions stimulated the growth-related (logistic) kinetics of glutamate mineralization in soils that had been deglaciated from 0 to 5 years suggesting that microbial growth was limited by soil P availability. We then used a logistic model to estimate the amount of C incorporated into biomass in P-limited soils, allowing us to estimate total microbial P uptake based on a conservative C:P ratio of 28:1 (mass:mass). Using this approach, we estimated that there was < 1 μg/g of microbial-available P in recently de-glaciated soils in both years of this study. These estimates fell well below estimates of available soil P obtained using traditional extraction procedures. Our results give both theoretical and practical insights into the kinetics of C and P utilization in young soils, as well as show changes in microbial P availability during early stages of soil development.
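
    The conversion at the heart of this estimate is simple: the logistic fit to CO2 production yields an amount of substrate carbon processed by the growing population, a growth yield converts that to carbon built into biomass, and dividing by the conservative biomass C:P mass ratio of 28:1 gives the phosphorus that must have been taken up. The Python sketch below walks through that arithmetic on synthetic respiration data; the logistic fit is generic and the growth yield is an illustrative assumption, not a value from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, a, k, t0):
          # cumulative CO2-C respired (ug C per g soil) under growth-related kinetics
          return a / (1.0 + np.exp(-k * (t - t0)))

      # synthetic respiration time course (hours, ug CO2-C per g soil), illustrative only
      t = np.linspace(0, 48, 25)
      co2 = logistic(t, a=60.0, k=0.25, t0=20.0) + np.random.default_rng(6).normal(0, 1.0, t.size)
      (a_fit, k_fit, t0_fit), _ = curve_fit(logistic, t, co2, p0=(50.0, 0.2, 15.0))

      growth_yield = 0.5      # assumed fraction of consumed C built into biomass (hypothetical)
      c_to_p = 28.0           # conservative biomass C:P mass ratio used in the study
      biomass_c = a_fit * growth_yield / (1.0 - growth_yield)   # convert respired C to biomass C
      available_p = biomass_c / c_to_p
      print(f"microbially available P ~ {available_p:.2f} ug P per g soil")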

  20. Screening-level models to estimate partition ratios of organic chemicals between polymeric materials, air and water.

    PubMed

    Reppas-Chrysovitsinos, Efstathios; Sobek, Anna; MacLeod, Matthew

    2016-06-15

    Polymeric materials flowing through the technosphere are repositories of organic chemicals throughout their life cycle. Equilibrium partition ratios of organic chemicals between these materials and air (KMA) or water (KMW) are required for models of fate and transport, high-throughput exposure assessment and passive sampling. KMA and KMW have been measured for a growing number of chemical/material combinations, but significant data gaps still exist. We assembled a database of 363 KMA and 910 KMW measurements for 446 individual compounds and nearly 40 individual polymers and biopolymers, collected from 29 studies. We used the EPI Suite and ABSOLV software packages to estimate physicochemical properties of the compounds and we employed an empirical correlation based on Trouton's rule to adjust the measured KMA and KMW values to a standard reference temperature of 298 K. Then, we used a thermodynamic triangle with Henry's law constant to calculate a complete set of 1273 KMA and KMW values. Using simple linear regression, we developed a suite of single parameter linear free energy relationship (spLFER) models to estimate KMA from the EPI Suite-estimated octanol-air partition ratio (KOA) and KMW from the EPI Suite-estimated octanol-water (KOW) partition ratio. Similarly, using multiple linear regression, we developed a set of polyparameter linear free energy relationship (ppLFER) models to estimate KMA and KMW from ABSOLV-estimated Abraham solvation parameters. We explored the two LFER approaches to investigate (1) their performance in estimating partition ratios, and (2) uncertainties associated with treating all different polymers as a single "bulk" polymeric material compartment. The models we have developed are suitable for screening assessments of the tendency for organic chemicals to be emitted from materials, and for use in multimedia models of the fate of organic chemicals in the indoor environment. In screening applications we recommend that KMA and KMW be modeled as 0.06 ×KOA and 0.06 ×KOW respectively, with an uncertainty range of a factor of 15.
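
    The closing recommendation reduces to a one-line screening rule: in the absence of measured data, estimate KMA as 0.06 x KOA and KMW as 0.06 x KOW, with roughly a factor-of-15 uncertainty either way. A small Python sketch of that rule is below; the example KOA value is an arbitrary placeholder, not taken from the paper.

      import math

      def screen_partition_ratio(log10_k_octanol):
          # screening estimate of log10 KMA (from log10 KOA) or log10 KMW (from log10 KOW),
          # using K_material ~ 0.06 * K_octanol with a factor-of-15 uncertainty range
          central = log10_k_octanol + math.log10(0.06)
          return central, (central - math.log10(15), central + math.log10(15))

      log_koa = 9.2    # placeholder value for some hypothetical chemical
      est, (lo, hi) = screen_partition_ratio(log_koa)
      print(f"log10 KMA ~ {est:.2f}  (range {lo:.2f} to {hi:.2f})")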

  1. Combining optimization methods with response spectra curve-fitting toward improved damping ratio estimation

    NASA Astrophysics Data System (ADS)

    Brewick, Patrick T.; Smyth, Andrew W.

    2016-12-01

    The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
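
    The core of the method is fitting a single-mode spectral model to one side of a modal PSD peak and reading the damping ratio from the fit. The Python sketch below does a much simplified version of that idea, least-squares fitting a one-degree-of-freedom PSD shape to the right-hand side of a simulated peak; the pattern-search optimizer, clustering stages, and blind source separation used in the paper are not reproduced, and the simulated response is an assumption.

      import numpy as np
      from scipy.signal import welch
      from scipy.optimize import curve_fit

      def sdof_psd(f, amp, fn, zeta):
          # one-sided PSD shape of a single-DOF oscillator driven by white noise
          return amp / ((fn ** 2 - f ** 2) ** 2 + (2.0 * zeta * fn * f) ** 2)

      # simulate a lightly damped mode (fn = 2 Hz, zeta = 0.02) excited by white noise
      fs, fn_true, zeta_true = 100.0, 2.0, 0.02
      wn = 2.0 * np.pi * fn_true
      rng = np.random.default_rng(2)
      n = 2 ** 17
      x = np.zeros(n)
      v = 0.0
      for i in range(1, n):    # semi-implicit Euler for x'' + 2*zeta*wn*x' + wn^2*x = w(t)
          a = rng.standard_normal() - 2.0 * zeta_true * wn * v - wn ** 2 * x[i - 1]
          v += a / fs
          x[i] = x[i - 1] + v / fs

      f, pxx = welch(x, fs=fs, nperseg=8192)
      peak = int(np.argmax(pxx))
      side = slice(peak, peak + 60)            # fit the right-hand side of the peak only
      p0 = (pxx[peak] * 4.0 * 0.05 ** 2 * f[peak] ** 4, f[peak], 0.05)
      (amp_fit, fn_fit, zeta_fit), _ = curve_fit(sdof_psd, f[side], pxx[side], p0=p0)
      print(f"estimated fn = {fn_fit:.2f} Hz, damping ratio = {abs(zeta_fit):.3f}")   # expect ~2 Hz, ~0.02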

  2. Cost-effectiveness of interventions to control Campylobacter in the New Zealand poultry meat food supply.

    PubMed

    Lake, Robin J; Horn, Beverley J; Dunn, Alex H; Parris, Ruth; Green, F Terri; McNickle, Don C

    2013-07-01

    An analysis of the cost-effectiveness of interventions to control Campylobacter in the New Zealand poultry supply examined a series of interventions. Effectiveness was evaluated in terms of reduced health burden measured by disability-adjusted life years (DALYs). Costs of implementation were estimated from the value of cost elements, determined by discussions with industry. Benefits were estimated by changing the inputs to a poultry food chain quantitative risk model. Proportional reductions in the number of predicted Campylobacter infections were converted into reductions in the burden of disease measured in DALYs. Cost-effectiveness ratios were calculated for each intervention, as cost per DALY reduction and the ratios compared. The results suggest that the most cost-effective interventions (lowest ratios) are at the primary processing stage. Potential phage-based controls in broiler houses were also highly cost-effective. This study is limited by the ability to quantify costs of implementation and assumptions required to estimate health benefits, but it supports the implementation of interventions at the primary processing stage as providing the greatest quantum of benefit and lowest cost-effectiveness ratios.

  3. CALIPSO-Inferred Aerosol Direct Radiative Effects: Bias Estimates Using Ground-Based Raman Lidars

    NASA Technical Reports Server (NTRS)

    Thorsen, Tyler; Fu, Qiang

    2016-01-01

    Observational constraints on the change in the radiative energy budget caused by the presence of aerosols, i.e. the aerosol direct radiative effect (DRE), have recently been made using observations from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite (CALIPSO). CALIPSO observations have the potential to provide improved global estimates of aerosol DRE compared to passive sensor-derived estimates due to CALIPSO's ability to perform vertically-resolved aerosol retrievals over all surface types and over cloud. In this study we estimate the uncertainties in CALIPSO-inferred aerosol DRE using multiple years of observations from the Atmospheric Radiation Measurement (ARM) program's Raman lidars (RL) at midlatitude and tropical sites. Examined are assumptions about the ratio of extinction-to-backscatter (i.e. the lidar ratio) made by the CALIPSO retrievals, which are needed to retrieve the aerosol extinction profile. The lidar ratio is shown to introduce minimal error in the mean aerosol DRE at the top-of-atmosphere and surface. It is also shown that CALIPSO is unable to detect all radiatively-significant aerosol, resulting in an underestimate in the magnitude of the aerosol DRE by 30-50%. Therefore, global estimates of the aerosol DRE inferred from CALIPSO observations are likely too weak.

  4. Ratio-based estimators for a change point in persistence.

    PubMed

    Halunga, Andreea G; Osborn, Denise R

    2012-11-01

    We study estimation of the date of change in persistence, from [Formula: see text] to [Formula: see text] or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97-116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389-392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33-66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with upper bound given by the true break date when persistence changes from [Formula: see text] to [Formula: see text]. A Monte Carlo study confirms the large sample downward bias and also finds substantial biases in moderate sized samples, partly due to properties at the end points of the search interval.

  5. Quick estimate of oil discovery from gas-condensate reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarem, A.M.

    1966-10-24

    A quick method of estimating the depletion performance of gas-condensate reservoirs is presented by graphical representations. The method is based on correlations reported in the literature and expresses recoverable liquid as a function of gas reserves, producing gas-oil ratio, and initial and final reservoir pressures. The amount of recoverable liquid reserves (RLR) under depletion conditions is estimated from an equation which is given, in which the liquid reserves are in stock-tank barrels and the gas reserves are in Mcf; the arbitrary constant N is calculated from one graphical representation by dividing the fractional oil recovery by the initial gas-oil ratio and multiplying by 10⁶ for convenience. An equation is given for estimating the coefficient C. These factors (N and C) can be determined from the graphical representations. An example calculation is included.

  6. Application of chlorine-36 technique in determining the age of modern groundwater in the Al-Zulfi province, Saudi Arabia.

    PubMed

    Challan, Mohsen B

    2016-06-01

    The present study aims to estimate the residence time of groundwater based on bomb-produced 36Cl. 36Cl/Cl ratios in the water samples are determined by inductively coupled plasma mass spectrometry and liquid scintillation counting. 36Cl/Cl ratios in the groundwater were estimated to be 1.0-2.0 × 10⁻¹². Estimates of residence time were obtained by comparing the measured bomb-derived 36Cl concentrations in groundwater with the background reference. Dating based on a 36Cl bomb pulse may be more reliable and sensitive for groundwater recharged before 1975, back as far as the mid-1950s. The above 36Cl background concentration was deduced by determining the background-corrected Dye-3 ice core data from the frozen Arctic data, according to the estimated total 36Cl resources. The residence time of 7.81 × 10⁴ y is obtained from extrapolated groundwater flow velocity. 36Cl concentration in groundwater does not reflect the input of bomb-pulse 36Cl, and it belongs to the era before 1950.

  7. Modeling additive and non-additive effects in a hybrid population using genome-wide genotyping: prediction accuracy implications

    PubMed Central

    Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph

    2016-01-01

    Hybrids are broadly used in plant breeding, and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and test their prediction accuracy for the genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760

  8. Developments in the Use of Proximity and Ratio Cues in Velocity Judgments.

    ERIC Educational Resources Information Center

    Shire, Beatrice; Durkin, Kevin

    Young children's responses to a velocity inference task based on static pictorial stimuli giving cues of proximity and ratio were examined. Subjects (N=65) in preschool through second grade viewed pictures of snails moving horizontally or spiders suspended vertically and were asked to estimate which competitor would reach its destination first.…

  9. Association between GFR Estimated by Multiple Methods at Dialysis Commencement and Patient Survival

    PubMed Central

    Wong, Muh Geot; Pollock, Carol A.; Cooper, Bruce A.; Branley, Pauline; Collins, John F.; Craig, Jonathan C.; Kesselhut, Joan; Luxton, Grant; Pilmore, Andrew; Harris, David C.

    2014-01-01

    Background and objectives: The Initiating Dialysis Early and Late study showed that planned early or late initiation of dialysis, based on the Cockcroft and Gault estimation of GFR, was associated with identical clinical outcomes. This study examined the association of all-cause mortality with estimated GFR at dialysis commencement, which was determined using multiple formulas. Design, setting, participants, & measurements: Initiating Dialysis Early and Late trial participants were stratified into tertiles according to the estimated GFR measured by Cockcroft and Gault, Modification of Diet in Renal Disease, or Chronic Kidney Disease-Epidemiology Collaboration formula at dialysis commencement. Patient survival was determined using multivariable Cox proportional hazards model regression. Results: Only Initiating Dialysis Early and Late trial participants who commenced on dialysis were included in this study (n=768). A total of 275 patients died during the study. After adjustment for age, sex, racial origin, body mass index, diabetes, and cardiovascular disease, no significant differences in survival were observed between estimated GFR tertiles determined by Cockcroft and Gault (lowest tertile adjusted hazard ratio, 1.11; 95% confidence interval, 0.82 to 1.49; middle tertile hazard ratio, 1.29; 95% confidence interval, 0.96 to 1.74; highest tertile reference), Modification of Diet in Renal Disease (lowest tertile hazard ratio, 0.88; 95% confidence interval, 0.63 to 1.24; middle tertile hazard ratio, 1.20; 95% confidence interval, 0.90 to 1.61; highest tertile reference), and Chronic Kidney Disease-Epidemiology Collaboration equations (lowest tertile hazard ratio, 0.93; 95% confidence interval, 0.67 to 1.27; middle tertile hazard ratio, 1.15; 95% confidence interval, 0.86 to 1.54; highest tertile reference). Conclusion: Estimated GFR at dialysis commencement was not significantly associated with patient survival, regardless of the formula used. However, a clinically important association cannot be excluded, because observed confidence intervals were wide. PMID:24178976

  10. Estimated water use and availability in the lower Blackstone River basin, northern Rhode Island and south-central Massachusetts, 1995-99

    USGS Publications Warehouse

    Barlow, Lora K.

    2003-01-01

    The Blackstone River basin includes approximately 475 square miles in northern Rhode Island and south-central Massachusetts. The study area (198 square miles) comprises six subbasins of the lower Blackstone River basin. The estimated population for the study period 1995-99 was 149,651 persons. Water-use data including withdrawals, use, and return flows for the study area were collected. Withdrawals averaged 29.869 million gallons per day (Mgal/d) with an estimated 12.327 Mgal/d exported and an estimated 2.852 Mgal/d imported; this resulted in a net export of 9.475 Mgal/d. Public-supply withdrawals were 22.694 Mgal/d and self-supply withdrawals were 7.170 Mgal/d, which is about 24 percent of total withdrawals. Two users withdrew 4.418 Mgal/d of the 7.170 Mgal/d of self-supply withdrawals. Total water use averaged 20.388 Mgal/d. The largest aggregate water use was for domestic supply (10.113 Mgal/d, 50 percent of total water use), followed by industrial water use (4.127 Mgal/d, 20 percent), commercial water use (4.026 Mgal/d, 20 percent), non-account water use (1.866 Mgal/d, 9 percent) and agricultural water use (0.252 Mgal/d, 1 percent). Wastewater disposal averaged 15.219 Mgal/d with 10.395 Mgal/d or 68 percent disposed at National Pollution Discharge Elimination System (NPDES) outfalls for municipal wastewater-treatment facilities. The remaining 4.824 Mgal/d or 32 percent was self-disposed, 1.164 Mgal/d of which was disposed through commercial and industrial NPDES outfalls. Water availability (base flow plus safe-yield estimates minus streamflow criteria) was estimated for the low-flow period, which included June, July, August, and September. The median base flow for the low-flow period from 1957 to 1999 was estimated at 0.62 Mgal/d per square mile for sand and gravel deposits and 0.19 Mgal/d per square mile for till deposits. Safe-yield estimates for public-supply reservoirs totaled 20.2 Mgal/d. When the 7-day, 10-year low flow (7Q10) was subtracted from base flow, an estimated median rate of 50.5 Mgal/d of water was available for the basin during August, the lowest base-flow month. In addition, basin-wide water-availability estimates were calculated with and without streamflow criteria for each month of the low-flow period at the 75th, 50th, and 25th percentiles of base flow. These water availability estimates ranged from 42.3 to 181.7 Mgal/d in June; 20.2 to 96.7 Mgal/d in July; 20.2 to 85.4 Mgal/d in August, and 20.2 to 97.5 Mgal/d in September. Base flow was less than the Aquatic Base Flow (ABF), minimum flow considered adequate to protect aquatic fauna, from July through September at the 25th percentile and in August and September at the 50th percentile. A basin-stress ratio, which is equal to total withdrawals divided by water availability, was also calculated. The basin-stress ratio for August at the 50th percentile of base flow minus the 7Q10 was 0.68 for the study area. For individual subbasins, the ratio ranged from 0.13 in the Chepachet River subbasin to 0.95 in the Abbot Run subbasin. In addition, basin-stress ratios with and without streamflow criteria for all four months of the low-flow period were calculated at the 75th, 50th, and 25th percentiles of base flow. These values ranged from 0.19 to 0.83 in June, 0.36 to 1.50 in July, 0.40 to 1.14 in August, and 0.31 to 0.78 in September. Ratios could not be calculated by using the ABF at the 50th and 25th percentiles in August and September because the estimated base flow was less than the ABF.
The depletion of the Blackstone River flows by Cumberland Water Department Manville well no. 1 in Rhode Island was estimated with the computer program STRMDEPL and specified daily pumping rates. STRMDEPL uses analytical solutions to calculate time-varying rates of streamflow depletion caused by pumping at wells. Results show that streamflow depletions were about 97 percent of average daily pumping rates for 1995 through 1999. Relative streamflow depletions for

  11. Signal-to-noise ratio estimation on SEM images using cubic spline interpolation with Savitzky-Golay smoothing.

    PubMed

    Sim, K S; Kiani, M A; Nia, M E; Tso, C P

    2014-01-01

    A new technique based on cubic spline interpolation with Savitzky-Golay noise reduction filtering is designed to estimate the signal-to-noise ratio of scanning electron microscopy (SEM) images. This approach is found to give better results than two existing techniques: nearest-neighbourhood and first-order interpolation. When applied to evaluate the quality of SEM images, noise can be eliminated efficiently with optimal choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
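
    A common way to estimate the SNR of a single image, in the spirit of the techniques compared above, is to use the autocorrelation along scan lines: white noise contributes only a spike at zero lag, so the noise-free zero-lag value can be recovered by extrapolating from the neighbouring lags. The Python sketch below smooths the nonzero lags with a Savitzky-Golay filter and extrapolates to lag zero with a cubic spline; the filter window, polynomial order, number of lags and the synthetic test image are assumptions rather than the paper's settings.

      import numpy as np
      from scipy.signal import savgol_filter
      from scipy.interpolate import CubicSpline

      def sem_snr(img, max_lag=12, sg_window=7, sg_order=2):
          # SNR estimate from the row-wise autocovariance of a 2-D image
          img = np.asarray(img, dtype=float)
          rows = img - img.mean(axis=1, keepdims=True)
          n = img.shape[1]
          acov = np.array([np.mean(rows[:, :n - lag] * rows[:, lag:]) for lag in range(max_lag + 1)])
          lags = np.arange(1, max_lag + 1)
          smooth = savgol_filter(acov[1:], sg_window, sg_order)    # denoise the nonzero lags
          r0_noise_free = CubicSpline(lags, smooth)(0.0)           # extrapolate the signal part to lag 0
          noise_var = acov[0] - r0_noise_free
          return float(r0_noise_free / noise_var)

      # synthetic check: smooth structure along the scan direction plus white noise at SNR ~ 4
      rng = np.random.default_rng(3)
      xx = np.tile(np.arange(256), (256, 1))
      signal = np.sin(xx / 9.0)
      noise = rng.standard_normal(signal.shape) * np.sqrt(signal.var() / 4.0)
      print(sem_snr(signal + noise))    # expect a value in the neighbourhood of 4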

  12. A potential dating technique using 228Th/228Ra ratio for tracing the chronosequence of elemental concentrations in plants.

    PubMed

    Chao, J H; Niu, H; Chiu, C Y; Lin, C

    2007-06-01

    We propose a radiometric method based on measurement of the radioactivity of the naturally occurring radionuclides (228)Ra and 228)Th and the derived (228)Th/(228)Ra ratios in plant samples to estimate plant age and the corresponding nutritional conditions in a field-growing fern, Dicranopteris linearis. Plant age (tissue age) was associated with the (228)Th/(228)Ra ratio in fronds, which implies the accumulation time of immobile elements in the plant tissue or the life span of the fronds. Results indicated that the accumulation of alkaline earth elements in D. linearis is relatively constant with increased age, while the K concentration is reversed with age because of translocation among plant tissues. Estimation of dating uncertainty based on measurement conditions revealed that the radiometric technique can be applied to trace chronosequential changes of elemental concentrations and environmental pollutants in plants with ages of less than 10-15 years.
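
    For orientation, the dating principle rests on the ingrowth of 228Th (half-life about 1.9 y) from 228Ra (about 5.8 y) after Ra is fixed in the tissue: with no initial 228Th and a closed system, the activity ratio grows toward its equilibrium value at a known rate, so a measured ratio can be inverted for elapsed time. The Python sketch below implements only that textbook ingrowth relation; it ignores continued Ra uptake and any initial Th, so it is a first-order illustration rather than the authors' model.

      import numpy as np

      T_HALF_RA228, T_HALF_TH228 = 5.75, 1.91        # years (approximate)
      LAM_RA = np.log(2) / T_HALF_RA228
      LAM_TH = np.log(2) / T_HALF_TH228

      def activity_ratio(t_years):
          # 228Th/228Ra activity ratio after t years, assuming no 228Th at t = 0 (closed system)
          return LAM_TH / (LAM_TH - LAM_RA) * (1.0 - np.exp(-(LAM_TH - LAM_RA) * t_years))

      def age_from_ratio(r):
          # invert the ingrowth relation; valid for ratios below the equilibrium value (~1.5)
          return -np.log(1.0 - r * (LAM_TH - LAM_RA) / LAM_TH) / (LAM_TH - LAM_RA)

      for r in (0.3, 0.8, 1.2):
          print(f"Th/Ra activity ratio {r:.1f}  ->  tissue age ~ {age_from_ratio(r):.1f} y")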

  13. A Conceptual Wing Flutter Analysis Tool for Systems Analysis and Parametric Design Study

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    2003-01-01

    An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate flutter instability boundaries of a typical wing, when detailed structural and aerodynamic data are not available. Effects of change in key flutter parameters can also be estimated in order to guide the conceptual design. This user-friendly software was developed using MathCad and Matlab codes. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on wing torsion stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch-inertia radius of gyration. These parametric plots were compiled in a Chance-Vought Corporation report from a database of past experiments and wind tunnel test results. An example was presented for conceptual flutter analysis of the outer wing of a Blended-Wing-Body aircraft.

  14. An Overdetermined System for Improved Autocorrelation Based Spectral Moment Estimator Performance

    NASA Technical Reports Server (NTRS)

    Keel, Byron M.

    1996-01-01

    Autocorrelation based spectral moment estimators are typically derived using the Fourier transform relationship between the power spectrum and the autocorrelation function along with using either an assumed form of the autocorrelation function, e.g., Gaussian, or a generic complex form and applying properties of the characteristic function. Passarelli has used a series expansion of the general complex autocorrelation function and has expressed the coefficients in terms of central moments of the power spectrum. A truncation of this series will produce a closed system of equations which can be solved for the central moments of interest. The autocorrelation function at various lags is estimated from samples of the random process under observation. These estimates themselves are random variables and exhibit a bias and variance that is a function of the number of samples used in the estimates and the operational signal-to-noise ratio. This contributes to a degradation in performance of the moment estimators. This dissertation investigates the use autocorrelation function estimates at higher order lags to reduce the bias and standard deviation in spectral moment estimates. In particular, Passarelli's series expansion is cast in terms of an overdetermined system to form a framework under which the application of additional autocorrelation function estimates at higher order lags can be defined and assessed. The solution of the overdetermined system is the least squares solution. Furthermore, an overdetermined system can be solved for any moment or moments of interest and is not tied to a particular form of the power spectrum or corresponding autocorrelation function. As an application of this approach, autocorrelation based variance estimators are defined by a truncation of Passarelli's series expansion and applied to simulated Doppler weather radar returns which are characterized by a Gaussian shaped power spectrum. The performance of the variance estimators determined from a closed system is shown to improve through the application of additional autocorrelation lags in an overdetermined system. This improvement is greater in the narrowband spectrum region where the information is spread over more lags of the autocorrelation function. The number of lags needed in the overdetermined system is a function of the spectral width, the number of terms in the series expansion, the number of samples used in estimating the autocorrelation function, and the signal-to-noise ratio. The overdetermined system provides a robustness to the chosen variance estimator by expanding the region of spectral widths and signal-to-noise ratios over which the estimator can perform as compared to the closed system.
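
    The flavour of the approach can be conveyed with a small least-squares example. For a Gaussian-shaped spectrum the autocorrelation magnitude obeys |R(l)| = S exp(-2 pi^2 sigma^2 (l Ts)^2), so ln|R(l)| is linear in l^2 and any number of lags can be stacked into an overdetermined system for the spectral width sigma. The Python sketch below contrasts a two-lag (closed) solution with a least-squares solution over six lags on simulated data; it is a simplified stand-in for the series-expansion formulation described above, not a reproduction of it.

      import numpy as np

      rng = np.random.default_rng(4)
      fs, n, f0, sigma_true, snr = 1000.0, 4096, 120.0, 15.0, 5.0   # Hz, samples, Hz, Hz, linear SNR
      ts = 1.0 / fs

      # simulate a complex signal with a Gaussian spectrum of width sigma_true centred at f0, plus white noise
      t = np.arange(n) * ts
      env = rng.standard_normal(n) + 1j * rng.standard_normal(n)
      freqs = np.fft.fftfreq(n, ts)
      shaped = np.fft.ifft(np.fft.fft(env) * np.exp(-freqs ** 2 / (4 * sigma_true ** 2)))
      x = shaped * np.exp(2j * np.pi * f0 * t)
      x += np.sqrt(np.mean(np.abs(x) ** 2) / snr / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

      def acf(x, lag):
          return np.mean(x[lag:] * np.conj(x[:-lag])) if lag else np.mean(np.abs(x) ** 2)

      def width_from_lags(x, lags):
          # least-squares fit of ln|R(l)| = a - 2*pi^2*sigma^2*(l*ts)^2 over the given lags
          y = np.log([np.abs(acf(x, l)) for l in lags])
          A = np.column_stack([np.ones(len(lags)), -2 * np.pi ** 2 * (np.array(lags) * ts) ** 2])
          coef, *_ = np.linalg.lstsq(A, y, rcond=None)
          return np.sqrt(max(coef[1], 0.0))

      print("two lags :", width_from_lags(x, [1, 2]))
      print("six lags :", width_from_lags(x, [1, 2, 3, 4, 5, 6]))   # expect values near 15 Hz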

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Brandon C.; Shen, Yue

    We employ a flexible Bayesian technique to estimate the black hole (BH) mass and Eddington ratio functions for Type 1 (i.e., broad line) quasars from a uniformly selected data set of ~58,000 quasars from the Sloan Digital Sky Survey (SDSS) DR7. We find that the SDSS becomes significantly incomplete at M_BH ≲ 3 × 10⁸ M_Sun or L/L_Edd ≲ 0.07, and that the number densities of Type 1 quasars continue to increase down to these limits. Both the mass and Eddington ratio functions show evidence of downsizing, with the most massive and highest Eddington ratio BHs experiencing Type 1 quasar phases first, although the Eddington ratio number densities are flat at z < 2. We estimate the maximum Eddington ratio of Type 1 quasars in the observable universe to be L/L_Edd ~ 3. Consistent with our results in Shen and Kelly, we do not find statistical evidence for a so-called sub-Eddington boundary in the mass-luminosity plane of broad-line quasars, and demonstrate that such an apparent boundary in the observed distribution can be caused by selection effect and errors in virial BH mass estimates. Based on the typical Eddington ratio in a given mass bin, we estimate growth times for the BHs in Type 1 quasars and find that they are comparable to or longer than the age of the universe, implying an earlier phase of accelerated (i.e., with higher Eddington ratios) and possibly obscured growth. The large masses probed by our sample imply that most of our BHs reside in what are locally early-type galaxies, and we interpret our results within the context of models of self-regulated BH growth.

  16. A measurement error model for physical activity level as measured by a questionnaire with application to the 1999-2006 NHANES questionnaire.

    PubMed

    Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S

    2013-06-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.

  17. Inferring the source of evaporated waters using stable H and O isotopes

    NASA Astrophysics Data System (ADS)

    Bowen, G. J.; Putman, A.; Brooks, J. R.; Bowling, D. R.; Oerter, E.; Good, S. P.

    2017-12-01

    Stable isotope ratios of H and O are widely used to identify the source of water, e.g., in aquifers, river runoff, soils, plant xylem, and plant-based beverages. In situations where the sampled water is partially evaporated, its isotope values will have evolved along an evaporation line (EL) in δ2H/δ18O space, and back-correction along the EL to its intersection with a meteoric water line (MWL) has been used to estimate the source water's isotope ratios. Several challenges and potential pitfalls exist with traditional approaches to this problem, including potential for bias from a commonly used regression-based approach for EL slope estimation and incomplete estimation of uncertainty in most studies. We suggest the value of a model-based approach to EL estimation, and introduce a mathematical framework that eliminates the need to explicitly estimate the EL-MWL intersection, simplifying analysis and facilitating more rigorous uncertainty estimation. We apply this analysis framework to data from 1,000 lakes sampled in EPA's 2007 National Lakes Assessment. We find that data for most lakes is consistent with a water source similar to annual runoff, estimated from monthly precipitation and evaporation within the lake basin. Strong evidence for both summer- and winter-biased sources exists, however, with winter bias pervasive in most snow-prone regions. The new analytical framework should improve the rigor of source-water inference from evaporated samples in ecohydrology and related sciences, and our initial results from U.S. lakes suggest that previous interpretations of lakes as unbiased isotope integrators may only be valid in certain climate regimes.
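
    A worked version of the traditional back-correction helps fix ideas: given a sample's δ18O and δ2H, an assumed evaporation-line slope, and a meteoric water line δ2H = a·δ18O + b (the global MWL has a = 8, b = 10), the inferred source is the intersection of the two lines. The Python sketch below does that algebra; the sample values and EL slope are invented, and the paper's framework is designed precisely to avoid committing to a single fixed EL slope and intersection in this way.

      def source_from_evaporation_line(d18o, d2h, el_slope, mwl=(8.0, 10.0)):
          # intersect the evaporation line through (d18o, d2h) with the meteoric water line
          # d2H = a*d18O + b; returns the inferred source (d18O, d2H) in per mil
          a, b = mwl
          if abs(a - el_slope) < 1e-9:
              raise ValueError("evaporation line is parallel to the meteoric water line")
          d18o_src = (d2h - el_slope * d18o - b) / (a - el_slope)
          return d18o_src, a * d18o_src + b

      # invented example: an evaporated lake sample with an assumed EL slope of 5
      print(source_from_evaporation_line(d18o=-4.0, d2h=-35.0, el_slope=5.0))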

  18. Ultrasonic data compression via parameter estimation.

    PubMed

    Cardoso, Guilherme; Saniie, Jafar

    2005-02-01

    Ultrasonic imaging in medical and industrial applications often requires a large amount of data collection. Consequently, it is desirable to use data compression techniques to reduce data and to facilitate the analysis and remote access of ultrasonic information. The precise data representation is paramount to the accurate analysis of the shape, size, and orientation of ultrasonic reflectors, as well as to the determination of the properties of the propagation path. In this study, a successive parameter estimation algorithm based on a modified version of the continuous wavelet transform (CWT) to compress and denoise ultrasonic signals is presented. It has been shown analytically that the CWT (i.e., time x frequency representation) yields an exact solution for the time-of-arrival and a biased solution for the center frequency. Consequently, a modified CWT (MCWT) based on the Gabor-Helstrom transform is introduced as a means to exactly estimate both time-of-arrival and center frequency of ultrasonic echoes. Furthermore, the MCWT also has been used to generate a phase x bandwidth representation of the ultrasonic echo. This representation allows the exact estimation of the phase and the bandwidth. The performance of this algorithm for data compression and signal analysis is studied using simulated and experimental ultrasonic signals. The successive parameter estimation algorithm achieves a data compression ratio of (1-5N/J), where J is the number of samples and N is the number of echoes in the signal. For a signal with 10 echoes and 2048 samples, a compression ratio of 96% is achieved with a signal-to-noise ratio (SNR) improvement above 20 dB. Furthermore, this algorithm performs robustly, yields accurate echo estimation, and results in SNR enhancements ranging from 10 to 60 dB for composite signals having SNR as low as -10 dB.

  19. A method to estimate weight and dimensions of aircraft gas turbine engines. Volume 1: Method of analysis

    NASA Technical Reports Server (NTRS)

    Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.

    1977-01-01

    Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.

  20. An adhered-particle analysis system based on concave points

    NASA Astrophysics Data System (ADS)

    Wang, Wencheng; Guan, Fengnian; Feng, Lin

    2018-04-01

    Particles adhered together will influence the image analysis in computer vision system. In this paper, a method based on concave point is designed. First, corner detection algorithm is adopted to obtain a rough estimation of potential concave points after image segmentation. Then, it computes the area ratio of the candidates to accurately localize the final separation points. Finally, it uses the separation points of each particle and the neighboring pixels to estimate the original particles before adhesion and provides estimated profile images. The experimental results have shown that this approach can provide good results that match the human visual cognitive mechanism.

  1. Numerical Simulation of Earth Pressure on Head Chamber of Shield Machine with FEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Shouju; Kang Chengang; Sun, Wei

    2010-05-21

    Model parameters of conditioned soils in the head chamber of a shield machine are determined based on triaxial compression tests in the laboratory. The loads acting on the tunneling face are estimated according to the static earth pressure principle. Based on the Duncan-Chang nonlinear elastic constitutive model, the earth pressures on the head chamber of the shield machine are simulated for different aperture ratios of the rotating cutterhead. A relationship between the pressure transportation factor and the aperture ratio of the shield machine is proposed using regression analysis.

  2. Comparison of two methods for estimating discharge and nutrient loads from Tidally affected reaches of the Myakka and Peace Rivers, West-Central Florida

    USGS Publications Warehouse

    Levesque, V.A.; Hammett, K.M.

    1997-01-01

    The Myakka and Peace River Basins constitute more than 60 percent of the total inflow area and contribute more than half the total tributary inflow to the Charlotte Harbor estuarine system. Water discharge and nutrient enrichment have been identified as significant concerns in the estuary, and consequently, it is important to accurately estimate the magnitude of discharges and nutrient loads transported by inflows from both rivers. Two methods for estimating discharge and nutrient loads from tidally affected reaches of the Myakka and Peace Rivers were compared. The first method was a tidal-estimation method, in which discharge and nutrient loads were estimated based on stage, water-velocity, discharge, and water-quality data collected near the mouths of the rivers. The second method was a traditional basin-ratio method in which discharge and nutrient loads at the mouths were estimated from discharge and loads measured at upstream stations. Stage and water-velocity data were collected near the river mouths by submersible instruments, deployed in situ, and discharge measurements were made with an acoustic Doppler current profiler. The data collected near the mouths of the Myakka River and Peace River were filtered, using a low-pass filter, to remove daily mixed-tide effects with periods less than about 2 days. The filtered data from near the river mouths were used to calculate daily mean discharge and nutrient loads. These tidal-estimation-method values were then compared to the basin-ratio-method values. Four separate 30-day periods of differing streamflow conditions were chosen for monitoring and comparison. Discharge and nutrient load estimates computed from the tidal-estimation and basin-ratio methods were most similar during high-flow periods. However, during high flow, the values computed from the tidal-estimation method for the Myakka and Peace Rivers were consistently lower than the values computed from the basin-ratio method. There were substantial differences between discharges and nutrient loads computed from the tidal-estimation and basin-ratio methods during low-flow periods. Furthermore, the differences between the methods were not consistent. Discharges and nutrient loads computed from the tidal-estimation method for the Myakka River were higher than those computed from the basin-ratio method, whereas discharges and nutrients loads computed by the tidal-estimation method for the Peace River were not only lower than those computed from the basin-ratio method, but they actually reflected a negative, or upstream, net movement. Short-term tidal measurement results should be used with caution, because antecedent conditions can influence the discharge and nutrient loads. Continuous tidal data collected over a 1- or 2-year period would be necessary to more accurately estimate the tidally affected discharge and nutrient loads for the Myakka and Peace River Basins.
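
    The tidal-estimation method hinges on low-pass filtering the records so that mixed-tide oscillations with periods shorter than about two days are removed before daily means are computed. The Python sketch below shows one such filter, a zero-phase Butterworth low-pass with a two-day cutoff applied to a synthetic 15-minute discharge record; the filter type, order, cutoff, and synthetic record are assumptions, since the report does not specify the exact filter used.

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs_per_day = 96                       # 15-minute samples
      cutoff_per_day = 0.5                  # remove periods shorter than ~2 days
      b, a = butter(4, cutoff_per_day / (fs_per_day / 2), btype="low")

      # synthetic discharge: slow runoff signal plus semidiurnal and diurnal tidal oscillations
      t_days = np.arange(0, 30, 1.0 / fs_per_day)
      runoff = 800 + 200 * np.exp(-((t_days - 12) / 4.0) ** 2)          # a storm pulse
      tides = 350 * np.sin(2 * np.pi * t_days / 0.5175) + 150 * np.sin(2 * np.pi * t_days / 1.0)
      q = runoff + tides

      q_filtered = filtfilt(b, a, q)        # zero-phase, so daily means are not time-shifted
      daily_mean = q_filtered[: len(q_filtered) // fs_per_day * fs_per_day].reshape(-1, fs_per_day).mean(axis=1)
      print(daily_mean[:5].round(1))        # should track the runoff signal (~800 early on)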

  3. Monte Carlo role in radiobiological modelling of radiotherapy outcomes

    NASA Astrophysics Data System (ADS)

    El Naqa, Issam; Pater, Piotr; Seuntjens, Jan

    2012-06-01

    Radiobiological models are essential components of modern radiotherapy. They are increasingly applied to optimize and evaluate the quality of different treatment planning modalities. They are frequently used in designing new radiotherapy clinical trials by estimating the expected therapeutic ratio of new protocols. In radiobiology, the therapeutic ratio is estimated as the expected gain in tumour control probability (TCP) relative to the risk of normal tissue complication probability (NTCP). However, estimates of TCP/NTCP are currently based on the deterministic and simplistic linear-quadratic formalism with limited prediction power when applied prospectively. Given the complex and stochastic nature of the physical, chemical and biological interactions associated with spatial and temporal radiation-induced effects in living tissues, it is conjectured that methods based on Monte Carlo (MC) analysis may provide better estimates of TCP/NTCP for radiotherapy treatment planning and trial design. Indeed, over the past few decades, methods based on MC have demonstrated superior performance for accurate simulation of radiation transport, tumour growth and particle track structures; however, successful application of modelling radiobiological response and outcomes in radiotherapy is still hampered by several challenges. In this review, we provide an overview of some of the main techniques used in radiobiological modelling for radiotherapy, with a focus on the MC role as a promising computational vehicle. We highlight the current challenges, issues and future potentials of the MC approach towards a comprehensive systems-based framework in radiobiological modelling for radiotherapy.
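
    The deterministic LQ-based TCP estimate that the review contrasts with Monte Carlo approaches can be written in a few lines; the sketch below uses the standard Poisson TCP form with purely hypothetical parameter values, and is not the authors' model.

        import numpy as np

        def poisson_lq_tcp(n_fractions, dose_per_fraction, alpha, beta, clonogens):
            # Surviving fraction after n fractions under the linear-quadratic model,
            # then the Poisson probability that no clonogen survives.
            sf = np.exp(-n_fractions * (alpha * dose_per_fraction
                                        + beta * dose_per_fraction ** 2))
            return np.exp(-clonogens * sf)

        # hypothetical example: 30 x 2 Gy, alpha = 0.3 /Gy, beta = 0.03 /Gy^2, 1e7 clonogens
        tcp = poisson_lq_tcp(30, 2.0, 0.3, 0.03, 1e7)   # ~0.996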

  4. Primary care physician shortages could be eliminated through use of teams, nonphysicians, and electronic communication.

    PubMed

    Green, Linda V; Savin, Sergei; Lu, Yina

    2013-01-01

    Most existing estimates of the shortage of primary care physicians are based on simple ratios, such as one physician for every 2,500 patients. These estimates do not consider the impact of such ratios on patients' ability to get timely access to care. They also do not quantify the impact of changing patient demographics on the demand side and alternative methods of delivering care on the supply side. We used simulation methods to provide estimates of the number of primary care physicians needed, based on a comprehensive analysis considering access, demographics, and changing practice patterns. We show that the implementation of some increasingly popular operational changes in the ways clinicians deliver care-including the use of teams or "pods," better information technology and sharing of data, and the use of nonphysicians-have the potential to offset completely the increase in demand for physician services while improving access to care, thereby averting a primary care physician shortage.

  5. Mortality through ontogeny of soft-bottom marine invertebrates with planktonic larvae

    NASA Astrophysics Data System (ADS)

    Pedersen, Troels Møller; Hansen, Jørgen L. S.; Josefson, Alf B.; Hansen, Benni W.

    2008-09-01

    The present survey covers one spawning season of marine benthic invertebrates in a large geographical area, the inner Danish waters, and includes a wide range of habitats with steep salinity and nutrient load gradients. The loss ratios of soft-bottom marine invertebrates from one development stage to the next are calculated based on average abundances of pelagic larvae, benthic post-larvae and adults of Bivalvia, Gastropoda, Polychaeta and Echinodermata, with planktonic development. This gives a rough estimate of the larval and post-larval mortality. Loss ratios between the post-larval stage and the adult stage (post-larval mortality) vary from 3:1 to 7:1 (71.2-84.9%), and loss ratios between larvae and post-larvae (larval mortality) and between larvae and adults range from 7:1 to 42:1 (85.2-97.6%) and from 45:1 to 210:1 (97.8-99.5%), respectively. The results show a remarkable unity in loss ratios (mortality) between the mollusc taxa (Bivalvia and Gastropoda) at the phylum/class level. This similarity in loss ratios among the mollusc taxa exhibiting the same developmental pathways suggests that the mortality is governed by the same biotic and abiotic factors. Larval mortality is estimated to range from 0.10 d⁻¹ to 0.32 d⁻¹ for Bivalvia and from 0.09 d⁻¹ to 0.23 d⁻¹ for Polychaeta. The species loss ratios combined with specific knowledge of the reproduction cycles give estimated loss ratios (mortality) between the post-larvae and the adult stage of 25:1 and 14:1 for the bivalves Abra spp. and Mysella bidentata. For the polychaete Pygospio elegans the loss ratio (larval mortality) between the larvae and the post-larval stage is 154:1 and between the post-larvae and the adult stage 41:1. For Pholoe inornata the loss ratio between post-larvae and adults is 7:1. The present results confirm that the larval stage, metamorphosis and settlement are the critical phases in terms of mortality in the life cycle for Bivalvia. Assuming steady state and based on actual measurements of pelagic larval densities, the estimated input of pelagic bivalve larvae to the water column ranges from 10,930 to 17,157 larvae m⁻² d⁻¹, and for Polychaeta between 2544 and 3994 larvae m⁻² d⁻¹. These estimates seem to correspond to the reproductive capacity of the observed adult densities using life-table values from the literature. The potential settlement of post-larvae is 43 post-larvae m⁻² d⁻¹ for Bivalvia and 56 post-larvae m⁻² d⁻¹ for Polychaeta. The adult turnover time for Bivalvia is estimated to be 1.5 years and 2.1 years for Polychaeta. This exemplifies that species with short generation times may dominate in very dynamic transitional zones with a high frequency of catastrophic events like the frequent incidents of hypoxia in the inner Danish waters.
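
    The per-day mortality rates quoted above follow from stage-to-stage loss ratios once a stage duration is assumed; the conversion below assumes simple exponential decay, and the 21-day larval duration in the example is a hypothetical value, not one taken from the study.

        import numpy as np

        def daily_mortality(loss_ratio, duration_days):
            # N_t = N_0 * exp(-m * t)  =>  m = ln(N_0 / N_t) / t = ln(loss_ratio) / t
            return np.log(loss_ratio) / duration_days

        # hypothetical example: a 42:1 loss between larvae and post-larvae over 21 days
        m = daily_mortality(42, 21)   # ~0.18 d^-1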

  6. Comparison of Modeling Approaches for Carbon Partitioning: Impact on Estimates of Global Net Primary Production and Equilibrium Biomass of Woody Vegetation from MODIS GPP

    NASA Astrophysics Data System (ADS)

    Ise, T.; Litton, C. M.; Giardina, C. P.; Ito, A.

    2009-12-01

    Plant partitioning of carbon (C) to above- vs. belowground, to growth vs. respiration, and to short vs. long lived tissues exerts a large influence on ecosystem structure and function with implications for the global C budget. Importantly, outcomes of process-based terrestrial vegetation models are likely to vary substantially with different C partitioning algorithms. However, controls on C partitioning patterns remain poorly quantified, and studies have yielded variable, and at times contradictory, results. A recent meta-analysis of forest studies suggests that the ratio of net primary production (NPP) and gross primary production (GPP) is fairly conservative across large scales. To illustrate the effect of this unique meta-analysis-based partitioning scheme (MPS), we compared an application of MPS to a terrestrial satellite-based (MODIS) GPP to estimate NPP vs. two global process-based vegetation models (Biome-BGC and VISIT) to examine the influence of C partitioning on C budgets of woody plants. Due to the temperature dependence of maintenance respiration, NPP/GPP predicted by the process-based models increased with latitude while the ratio remained constant with MPS. Overall, global NPP estimated with MPS was 17 and 27% lower than the process-based models for temperate and boreal biomes, respectively, with smaller differences in the tropics. Global equilibrium biomass of woody plants was then calculated from the NPP estimates and tissue turnover rates from VISIT. Since turnover rates differed greatly across tissue types (i.e., metabolically active vs. structural), global equilibrium biomass estimates were sensitive to the partitioning scheme employed. The MPS estimate of global woody biomass was 7-21% lower than that of the process-based models. In summary, we found that model output for NPP and equilibrium biomass was quite sensitive to the choice of C partitioning schemes. [Figure: carbon use efficiency (CUE; NPP/GPP) by forest biome and globally; values are means for 2001-2006.]
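
    The two partitioning strategies compared above differ only in how NPP is derived from GPP; the sketch below contrasts a constant NPP/GPP ratio with a temperature-dependent maintenance-respiration formulation. The CUE value, Q10, and respiration fraction are placeholders, not the values used by MPS, Biome-BGC, or VISIT.

        import numpy as np

        def npp_constant_cue(gpp, cue=0.5):
            # meta-analysis-style scheme: NPP is a fixed fraction of GPP everywhere
            return cue * gpp

        def npp_process_style(gpp, t_air_c, q10=2.0, rm_frac_20c=0.4):
            # process-model-style scheme: maintenance respiration rises with temperature
            # (Q10 form), so NPP/GPP increases toward colder, higher-latitude biomes
            rm = rm_frac_20c * q10 ** ((t_air_c - 20.0) / 10.0) * gpp
            return np.maximum(gpp - rm, 0.0)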

  7. Assessing Tuberculosis Case Fatality Ratio: A Meta-Analysis

    PubMed Central

    Straetemans, Masja; Glaziou, Philippe; Bierrenbach, Ana L.; Sismanidis, Charalambos; van der Werf, Marieke J.

    2011-01-01

    Background Recently, the tuberculosis (TB) Task Force Impact Measurement acknowledged the need to review the assumptions underlying the TB mortality estimates published annually by the World Health Organization (WHO). TB mortality is indirectly measured by multiplying the estimated TB incidence by the estimated case fatality ratio (CFR). We conducted a meta-analysis to estimate the TB case fatality ratio in TB patients having initiated TB treatment. Methods We searched for eligible studies in the PubMed and Embase databases through March 4th 2011 and by reference listing of relevant review articles. Main analyses included the estimation of the pooled percentages of: a) TB patients dying due to TB after having initiated TB treatment and b) TB patients dying during TB treatment. Pooled percentages were estimated using random effects regression models on the combined patient population from all studies. Main Results We identified 69 relevant studies of which 22 provided data on mortality due to TB and 59 provided data on mortality during TB treatment. Among HIV-infected persons the pooled percentage of TB patients dying due to TB was 9.2% (95% Confidence Interval (CI): 3.7%–14.7%) and among HIV-uninfected persons 3.0% (95% CI: −1.2%–7.4%), based on the results of eight and three studies, respectively, providing data for this analysis. The pooled percentage of TB patients dying during TB treatment was 18.8% (95% CI: 14.8%–22.8%) among HIV-infected patients and 3.5% (95% CI: 2.0%–4.92%) among HIV-uninfected patients, based on the results of 27 and 19 studies, respectively. Conclusion The results of the literature review are useful in generating prior distributions of CFR in countries with vital registration systems and have contributed towards revised estimates of TB mortality. This literature review did not provide us with all data needed for a valid estimation of TB CFR in TB patients initiating TB treatment. PMID:21738585
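
    The pooling described above can be reproduced in outline with a standard DerSimonian-Laird random-effects estimator for proportions; the sketch below uses a simple normal approximation for the within-study variance and is not the exact regression model fitted in the review (studies with 0% or 100% events would need a continuity correction).

        import numpy as np

        def dl_pooled_proportion(events, totals):
            events, totals = np.asarray(events, float), np.asarray(totals, float)
            p = events / totals
            v = p * (1 - p) / totals                    # within-study variances
            w = 1.0 / v
            p_fixed = np.sum(w * p) / np.sum(w)
            q = np.sum(w * (p - p_fixed) ** 2)          # Cochran's Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (len(p) - 1)) / c)     # between-study variance
            w_star = 1.0 / (v + tau2)                   # random-effects weights
            pooled = np.sum(w_star * p) / np.sum(w_star)
            se = np.sqrt(1.0 / np.sum(w_star))
            return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)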

  8. The association of placenta previa and assisted reproductive techniques: a meta-analysis.

    PubMed

    Karami, Manoochehr; Jenabi, Ensiyeh; Fereidooni, Bita

    2018-07-01

    Several epidemiological studies have determined that assisted reproductive techniques (ART) can increase the risk of placenta previa. To date, only one meta-analysis has been performed for assessing the relationship between placenta previa and ART. This meta-analysis was conducted to estimate the association between placenta previa and ART in singleton and twin pregnancies. A literature search was performed in the major databases PubMed, Web of Science, and Scopus from the earliest possible year to April 2017. The heterogeneity across studies was explored by the Q-test and the I² statistic. The publication bias was assessed using Begg's and Egger's tests. The results were reported using odds ratio (OR) and relative risk (RR) estimates with their 95% confidence intervals (CI) using a random-effects model. The literature search yielded 1529 publications until September 2016 with 1,388,592 participants. The overall estimate of OR was 2.67 (95% CI: 2.01, 3.34) and RR was 3.62 (95% CI: 0.21, 7.03) based on singleton pregnancies. The overall estimate of OR was 1.50 (95% CI: 1.26, 1.74) based on twin pregnancies. Based on the odds ratio estimates from observational studies, we showed that ART procedures are a risk factor for placenta previa.

  9. Evaluation of trace analyte identification in complex matrices by low-resolution gas chromatography--Mass spectrometry through signal simulation.

    PubMed

    Bettencourt da Silva, Ricardo J N

    2016-04-01

    The identification of trace levels of compounds in complex matrices by conventional low-resolution gas chromatography hyphenated with mass spectrometry is based on the comparison of retention times and abundance ratios of characteristic mass spectrum fragments of analyte peaks from calibrators with sample peaks. Statistically sound criteria for the comparison of these parameters were developed based on the normal distribution of retention times and the simulation of possible non-normal distributions of correlated abundance ratios. The confidence level used to set the statistical maximum and minimum limits of parameters defines the true positive rates of identifications. The false positive rate of identification was estimated from worst-case signal noise models. The estimated true and false positive identification rates from one retention time and two correlated ratios of three fragment abundances were combined using simple Bayes' statistics to estimate the probability that the compound identification is correct, designated the examination uncertainty. Models of the variation of examination uncertainty with analyte quantity allowed the estimation of the Limit of Examination as the lowest quantity that produced "Extremely strong" evidence of compound presence. User-friendly MS-Excel files are made available to allow the easy application of the developed approach in routine and research laboratories. The developed approach was successfully applied to the identification of chlorpyrifos-methyl and malathion in QuEChERS method extracts of vegetables with high water content, for which the estimated Limit of Examination is 0.14 mg kg⁻¹ and 0.23 mg kg⁻¹, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
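
    The Bayesian combination step can be illustrated with a two-hypothesis update; the sketch below assumes the true- and false-positive rates have already been combined across the retention-time and abundance-ratio criteria, whereas the paper models the correlation between the ratios explicitly. The prior and rates shown are hypothetical.

        def identification_posterior(prior, tpr, fpr):
            # P(compound present | all criteria met) via Bayes' rule for a binary hypothesis
            return prior * tpr / (prior * tpr + (1.0 - prior) * fpr)

        # hypothetical numbers: 50% prior, 99% combined TPR, 1e-4 combined FPR
        post = identification_posterior(0.5, 0.99, 1e-4)   # ~0.9999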

  10. How Choice of Depth Horizon Influences the Estimated Spatial Patterns and Global Magnitude of Ocean Carbon Export Flux

    NASA Astrophysics Data System (ADS)

    Palevsky, Hilary I.; Doney, Scott C.

    2018-05-01

    Estimated rates and efficiency of ocean carbon export flux are sensitive to differences in the depth horizons used to define export, which often vary across methodological approaches. We evaluate sinking particulate organic carbon (POC) flux rates and efficiency (e-ratios) in a global earth system model, using a range of commonly used depth horizons: the seasonal mixed layer depth, the particle compensation depth, the base of the euphotic zone, a fixed depth horizon of 100 m, and the maximum annual mixed layer depth. Within this single dynamically consistent model framework, global POC flux rates vary by 30% and global e-ratios by 21% across different depth horizon choices. Zonal variability in POC flux and e-ratio also depends on the export depth horizon due to pronounced influence of deep winter mixing in subpolar regions. Efforts to reconcile conflicting estimates of export need to account for these systematic discrepancies created by differing depth horizon choices.
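
    The sensitivity of the e-ratio to the depth-horizon choice can be seen with a toy flux profile; the sketch below extrapolates a 100 m flux with the classic Martin power law rather than with the earth system model used in the study, and all numbers are illustrative only.

        import numpy as np

        def e_ratio_vs_depth(npp, flux_100m, depths, b=0.86):
            # F(z) = F(100 m) * (z / 100)^-b ; e-ratio = F(z) / NPP
            depths = np.asarray(depths, float)
            flux = flux_100m * (depths / 100.0) ** (-b)
            return flux / npp

        # hypothetical horizons: euphotic-zone base (60 m), 100 m, deep winter mixed layer (250 m)
        e = e_ratio_vs_depth(npp=50.0, flux_100m=5.0, depths=[60, 100, 250])  # ~0.16, 0.10, 0.05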

  11. Effectiveness and benefit-cost of peer-based workplace substance abuse prevention coupled with random testing.

    PubMed

    Miller, Ted R; Zaloshnja, Eduard; Spicer, Rebecca S

    2007-05-01

    Few studies have evaluated the impact of workplace substance abuse prevention programs on occupational injury, despite this being a justification for these programs. This paper estimates the effectiveness and benefit-cost ratio of a peer-based substance abuse prevention program at a U.S. transportation company, implemented in phases from 1988 to 1990. The program focuses on changing workplace attitudes toward on-the-job substance use in addition to training workers to recognize and intervene with coworkers who have a problem. The program was strengthened by federally mandated random drug and alcohol testing (implemented, respectively, in 1990 and 1994). With time-series analysis, we analyzed the association of monthly injury rates and costs with phased program implementation, controlling for industry injury trend. The combination of the peer-based program and testing was associated with an approximate one-third reduction in injury rate, avoiding an estimated $48 million in employer costs in 1999. That year, the peer-based program cost the company $35 and testing cost another $35 per employee. The program avoided an estimated $1850 in employer injury costs per employee in 1999, corresponding to a benefit-cost ratio of 26:1. The findings suggest that peer-based programs buttressed by random testing can be cost-effective in the workplace.

  12. An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars.

    PubMed

    Huang, Jiyan; Zhang, Ying; Luo, Shan

    2017-12-15

    Localization of a moving target in a dual-frequency radars system has now gained considerable attention. The noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. Compared with the LS method, a novel localization method based on a two-step weighted least squares estimator is proposed to increase positioning accuracy for a multi-station dual-frequency radars system in this paper. The effects of the signal-to-noise ratio and the number of samples on the performance of range estimation are also analyzed in the paper. Furthermore, both the theoretical variance and Cramer-Rao lower bound (CRLB) are derived. The simulation results verified the proposed method.

  13. An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars

    PubMed Central

    Zhang, Ying; Luo, Shan

    2017-01-01

    Localization of a moving target in a dual-frequency radars system has now gained considerable attention. The noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. Compared with the LS method, a novel localization method based on a two-step weighted least squares estimator is proposed to increase positioning accuracy for a multi-station dual-frequency radars system in this paper. The effects of the signal-to-noise ratio and the number of samples on the performance of range estimation are also analyzed in the paper. Furthermore, both the theoretical variance and Cramer–Rao lower bound (CRLB) are derived. The simulation results verified the proposed method. PMID:29244727

  14. Use of radium isotopes to determine the age and origin of radioactive barite at oil-field production sites

    USGS Publications Warehouse

    Zielinski, R.A.; Otton, J.K.; Budahn, J.R.

    2001-01-01

    Radium-bearing barite (radiobarite) is a common constituent of scale and sludge deposits that form in oil-field production equipment. The barite forms as a precipitate from radium-bearing, saline formation water that is pumped to the surface along with oil. Radioactivity levels in some oil-field equipment and in soils contaminated by scale and sludge can be sufficiently high to pose a potential health threat. Accurate determinations of radium isotopes (226Ra+228Ra) in soils are required to establish the level of soil contamination and the volume of soil that may exceed regulatory limits for total radium content. In this study the radium isotopic data are used to provide estimates of the age of formation of the radiobarite contaminant. Age estimates require that highly insoluble radiobarite approximates a chemically closed system from the time of its formation. Age estimates are based on the decay of short-lived 228Ra (half-life=5.76 years) compared to 226Ra (half-life=1600 years). Present activity ratios of 228Ra/226Ra in radiobarite-rich scale or highly contaminated soil are compared to initial ratios at the time of radiobarite precipitation. Initial ratios are estimated by measurements of saline water or recent barite precipitates at the site or by considering a range of probable initial ratios based on reported values in modern oil-field brines. At sites that contain two distinct radiobarite sources of different age, the soils containing mixtures of sources can be identified, and mixing proportions quantified using radium concentration and isotopic data. These uses of radium isotope data provide a fuller description of the contamination history and may help address liability issues. Copyright © 2000.
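
    Under the closed-system assumption stated above, the age follows directly from the decay of the 228Ra/226Ra activity ratio; the sketch below implements that relation, with the measured and initial ratios in the example being hypothetical.

        import numpy as np

        LAMBDA_228 = np.log(2) / 5.76      # 1/yr, 228Ra half-life = 5.76 yr
        LAMBDA_226 = np.log(2) / 1600.0    # 1/yr, 226Ra half-life = 1600 yr

        def radiobarite_age_years(ratio_now, ratio_initial):
            # R(t) = R0 * exp(-(lambda228 - lambda226) * t)
            # => t = ln(R0 / R(t)) / (lambda228 - lambda226)
            return np.log(ratio_initial / ratio_now) / (LAMBDA_228 - LAMBDA_226)

        # hypothetical example: measured 228Ra/226Ra = 0.1, assumed initial ratio = 1.0
        age = radiobarite_age_years(0.1, 1.0)   # ~19 years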

  15. Plant Uptake of Organic Pollutants from Soil: A Critical Review of Bioconcentration Estimates Based on Models and Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKone, Thomas E.; Maddalena, Randy L.

    2007-01-01

    The role of terrestrial vegetation in transferring chemicals from soil and air into specific plant tissues (stems, leaves, roots, etc.) is still not well characterized. We provide here a critical review of plant-to-soil bioconcentration ratio (BCR) estimates based on models and experimental data. This review includes the conceptual and theoretical formulations of the bioconcentration ratio, constructing and calibrating empirical and mathematical algorithms to describe this ratio and the experimental data used to quantify BCRs and calibrate the model performance. We first evaluate the theoretical basis for the BCR concept and BCR models and consider how lack of knowledge and data limits reliability and consistency of BCR estimates. We next consider alternate modeling strategies for BCR. A key focus of this evaluation is the relative contributions to overall uncertainty from model uncertainty versus variability in the experimental data used to develop and test the models. As a case study, we consider a single chemical, hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX), and focus on variability of bioconcentration measurements obtained from 81 experiments with different plant species, different plant tissues, different experimental conditions, and different methods for reporting concentrations in the soil and plant tissues. We use these observations to evaluate both the magnitude of experimental variability in plant bioconcentration and compare this to model uncertainty. Among these 81 measurements, the variation of the plant/soil BCR has a geometric standard deviation (GSD) of 3.5 and a coefficient of variability (CV; ratio of arithmetic standard deviation to mean) of 1.7. These variations are significant but low relative to model uncertainties, which have an estimated GSD of 10 with a corresponding CV of 14.
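
    The variability statistics quoted above are straightforward to reproduce from a set of BCR measurements; a minimal sketch:

        import numpy as np

        def bcr_variability(bcr):
            # geometric standard deviation and coefficient of variation of BCR data
            bcr = np.asarray(bcr, float)
            gsd = np.exp(np.std(np.log(bcr), ddof=1))
            cv = np.std(bcr, ddof=1) / np.mean(bcr)
            return gsd, cv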

  16. Bullying and mental health and suicidal behaviour among 14- to 15-year-olds in a representative sample of Australian children.

    PubMed

    Ford, Rebecca; King, Tania; Priest, Naomi; Kavanagh, Anne

    2017-09-01

    To provide the first Australian population-based estimates of the association between bullying and adverse mental health outcomes and suicidality among Australian adolescents. Analysis of data from 3537 adolescents aged 14-15 years, from Wave 6 of the K-cohort of the Longitudinal Study of Australian Children, was conducted. We used Poisson and linear regression to estimate associations between bullying type (none, relational-verbal, physical, both types) and role (no role, victim, bully, victim and bully), and mental health (measured by the Strengths and Difficulties Questionnaire, symptoms of anxiety and depression) and suicidality. Adolescents involved in bullying had significantly increased Strengths and Difficulties Questionnaire, depression and anxiety scores in all bullying roles and types. In terms of self-harm and suicidality, bully-victims had the highest risk of self-harm (prevalence rate ratio 4.7, 95% confidence interval [3.26, 6.83]), suicidal ideation (prevalence rate ratio 4.3, 95% confidence interval [2.83, 6.49]), suicidal plan (prevalence rate ratio 4.1, 95% confidence interval [2.54, 6.58]) and attempts (prevalence rate ratio 2.7, 95% confidence interval [1.39, 5.13]), followed by victims then bullies. The experience of both relational-verbal and physical bullying was associated with the highest risk of self-harm (prevalence rate ratio 4.6, 95% confidence interval [3.15, 6.60]), suicidal ideation or plans (prevalence rate ratio 4.6, 95% confidence interval [3.05, 6.95]; and 4.8, 95% confidence interval [3.01, 7.64], respectively) or suicide attempts (prevalence rate ratio 3.5, 95% confidence interval [1.90, 6.30]). This study presents the first national, population-based estimates of the associations between bullying by peers and mental health outcomes in Australian adolescents. The markedly increased risk of poor mental health outcomes, self-harm and suicidal ideation and behaviours among adolescents who experienced bullying highlights the importance of addressing bullying in school settings.

  17. Molecular population genetics of X-linked genes in Drosophila pseudoobscura.

    PubMed Central

    Kovacevic, M; Schaeffer, S W

    2000-01-01

    This article presents a nucleotide sequence analysis of 500 bp determined in each of five X-linked genes, runt, sisterlessA, period, esterase 5, and Heat-shock protein 83, in 40 Drosophila pseudoobscura strains collected from two populations. Estimates of the neutral migration parameter for the five loci show that gene flow among D. pseudoobscura populations is sufficient to homogenize inversion frequencies across the range of the species. Nucleotide diversity at each locus fails to reject a neutral model of molecular evolution. The sample of 40 chromosomes included six Sex-ratio inversions, a series of three nonoverlapping inversions that are associated with a strong meiotic drive phenotype. The selection driven by the Sex-ratio meiotic drive element has not fixed variation across the X chromosome of D. pseudoobscura because, while significant linkage disequilibrium was observed within the sisterlessA, period, and esterase 5 genes, we did not find evidence for nonrandom association among loci. The Sex-ratio chromosome was estimated to be 25,000 years old based on the decomposition of linkage disequilibrium between esterase 5 and Heat-shock protein 83 or 1 million years old based on the net divergence of esterase 5 between Standard and Sex-ratio chromosomes. Genetic diversity was depressed within esterase 5 within Sex-ratio chromosomes, while the four other genes failed to show a reduction in heterozygosity in the Sex-ratio background. The reduced heterozygosity in esterase 5 is due either to its location near one of the Sex-ratio inversion breakpoints or to close linkage with a gene or genes responsible for the Sex-ratio meiotic drive system. PMID:10978282

  18. Application and Analysis of the Isoelectronic Line Ratio Temperature Diagnostic in a Planar Ablating-Plasma Experiment at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Epstein, R.; Rosenberg, M. J.; Solodov, A. A.; Myatt, J. F.; Regan, S. P.; Seka, W.; Hohenberger, M.; Barrios, M. A.; Moody, J. D.

    2015-11-01

    The Mn/Co isoelectronic emission-line ratio from a microdot source in planar CH foil targets was measured to infer the electron temperature (Te) in the ablating plasma during two-plasmon-decay experiments at the National Ignition Facility (NIF). We examine the systematic uncertainty in the Te estimate based on the temperature and density sensitivities of the line ratio in conjunction with plausible density constraints, and its contribution to the total Te estimate uncertainty. The potential advantages of alternative microdot elements (e.g., Ti/Cr and Sc/V) are considered. The microdot mass was selected to provide ample line strength while minimizing the effect of self-absorption on the line emission, which is of particular concern, given the narrow linewidths of mid-Z emitters at subcritical electron densities. Atomic line-formation theory and detailed atomic-radiative simulations show that the straightforward interpretation of the isoelectronic ratio solely in terms of its temperature dependence remains valid with lines of moderate optical thickness (up to ~10) at line center. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  19. Tracking control of WMRs on loose soil based on mixed H2/H∞ control with longitudinal slip ratio estimation

    NASA Astrophysics Data System (ADS)

    Gao, Haibo; Chen, Chao; Ding, Liang; Li, Weihua; Yu, Haitao; Xia, Kerui; Liu, Zhen

    2017-11-01

    Wheeled mobile robots (WMRs) often suffer from longitudinal slipping when moving on the loose soil of the lunar surface during exploration. Longitudinal slip is the main cause of WMRs' delay in trajectory tracking. In this paper, a nonlinear extended state observer (NESO) is introduced to estimate the longitudinal velocity in order to estimate the slip ratio and the derivative of the velocity loss, which are used in the compensation of modelled disturbances. Owing to the uncertainty and disturbance caused by estimation errors, a multi-objective controller using the mixed H2/H∞ method is employed to ensure the robust stability and performance of the WMR system. The final inputs of the trajectory tracking consist of the feedforward compensation, the compensation for the modelled disturbances, and the designed multi-objective control inputs. Finally, the simulation results demonstrate the effectiveness of the controller, which exhibits a satisfactory tracking performance.
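
    For reference, the longitudinal slip ratio that the observer is estimating is conventionally defined from the wheel's circumferential speed and the vehicle's longitudinal velocity; the sketch below uses the common driving-wheel convention, which may differ in sign or normalization from the definition used in the paper.

        def longitudinal_slip_ratio(wheel_radius, omega, v_long, eps=1e-6):
            # s = (r*omega - v) / (r*omega) for a driving wheel; v would come from
            # the NESO-style velocity estimate described in the abstract
            v_wheel = wheel_radius * omega
            return (v_wheel - v_long) / max(abs(v_wheel), eps)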

  20. Diagnostic value of potassium level in a spot urine sample as an index of 24-hour urinary potassium excretion in unselected patients hospitalized in a hypertension unit

    PubMed Central

    Symonides, Bartosz; Wojciechowska, Ewa; Gryglas, Adam; Gaciong, Zbigniew

    2017-01-01

    Background Primary hyperaldosteronism may be associated with elevated 24-hour urinary potassium excretion. We evaluated the diagnostic value of spot urine (SU) potassium as an index of 24-hour urinary potassium excretion. Methods We measured SU and 24-hour urinary collection potassium and creatinine in 382 patients. Correlations between SU and 24-hour collections were assessed for potassium levels and potassium/creatinine ratios. We used the PAHO formula to estimate 24-hour urinary potassium excretion based on SU potassium level. The agreement between estimated and measured 24-hour urinary potassium excretion was evaluated using the Bland-Altman method. To evaluate the diagnostic performance of SU potassium, we calculated areas under the curve (AUC) for the SU potassium/creatinine ratio and 24-hour urinary potassium excretion estimated using the PAHO formula. Results The strongest correlation between SU and the 24-hour collection was found for the potassium/creatinine ratio (r = 0.69, P<0.001). The PAHO formula underestimated 24-hour urinary potassium excretion by a mean of 8.3±18 mmol/d (95% limits of agreement -28 to +44 mmol/d). The diagnostic performance of the SU potassium/creatinine ratio was borderline good only when 24-hour urinary potassium excretion was markedly elevated (AUC 0.802 for 120 mmol K+/24 h) but poor with lower values (AUC 0.696 for 100 mmol K+/24 h, 0.636 for 80 mmol K+/24 h, 0.675 for 40 mmol K+/24 h). The diagnostic performance of 24-hour urinary potassium excretion estimated by the PAHO formula was excellent with values above 120 mmol/d and good with lower values (AUC 0.941 for 120 mmol K+/24 h, 0.819 for 100 mmol K+/24 h, 0.823 for 80 mmol K+/24 h, 0.836 for 40 mmol K+/24 h). Conclusions The spot urine potassium/creatinine ratio might be a marker of increased 24-hour urinary potassium excretion and a potentially useful screening test when reliable 24-hour urine collection is not available. The PAHO formula allowed estimation of the 24-hour urinary potassium excretion based on SU measurements with reasonable clinical accuracy. PMID:28662194
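
    The agreement analysis reported above follows the standard Bland-Altman recipe, sketched below for estimated versus measured 24-hour potassium excretion; the variable names are illustrative.

        import numpy as np

        def bland_altman(estimated, measured):
            # bias and 95% limits of agreement between two measurement methods
            diff = np.asarray(estimated, float) - np.asarray(measured, float)
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)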

  1. Urinary Uric Acid/Creatinine Ratio - A Marker For Perinatal Asphyxia.

    PubMed

    Patel, Kinjal Prahaladbhai; Makadia, Mayur Goradhanbhai; Patel, Vishwal Indravardan; Nilayangode, Haridas Neelakandan; Nimbalkar, Somashekhar Marutirao

    2017-01-01

    Perinatal hypoxia is one of the leading causes of perinatal mortality in developing countries. Both the Apgar score and arterial blood pH predict neonatal mortality in asphyxia. The Apgar score alone does not predict neurologic outcome, as it is influenced by various factors. This study was conducted to evaluate the utility and sensitivity of the urinary uric acid to creatinine ratio (UA/Cr ratio) in asphyxia diagnosis, compared to invasive Arterial Blood Gas (ABG) analysis. The aim was to assess the urinary uric acid/creatinine ratio as an additional marker for perinatal asphyxia compared with ABG analysis in Apgar score monitoring. The present case-control study was conducted at a teaching hospital in Central Gujarat. Data from 40 healthy newborns and 40 asphyxiated newborns were collected. In the absence of regional estimates, a sample size of 39 was required to attain a power of 80% at 5% alpha (type I error), considering a moderate effect size of 0.65. The UA/Cr ratio was measured from a spot urine sample collected during 24-72 hours after birth. Statistical analysis was performed using the independent t-test, Pearson's correlation coefficient (r), and Receiver Operating Characteristic (ROC) plots. The mean UA/Cr ratio was significantly higher in the asphyxiated group than in the control group (2.75±0.18 vs 1.78±0.23, p<0.0001). The urinary UA/Cr ratio had a negative correlation with blood pH (r = -0.27, p=0.18), which was not significant (p>0.05). A urinary UA/Cr ratio criterion of >2.3 had 100% sensitivity and 100% specificity, with an AUC of 1 (p<0.0001), giving a better predictive value. The Apgar score is usually reduced in neonates with congenital anomalies and in premature neonates. Hence, it is preferable that the clinical diagnosis of asphyxia by Apgar scores be supported by other investigations so that an early decision can be taken about the level of care the baby needs. pH, lactate, and base deficit values change with the establishment of respiration following resuscitation, and their measurement is invasive and must be performed rapidly. The non-invasive urinary UA/Cr ratio may be an answer to these issues, as it is easy, economical, and equally efficient.

  2. Magmatic evolution of lunar highland rocks estimated from trace elements in plagioclase: A new bulk silicate Moon model with sub-chondritic Ti/Ba, Sr/Ba, and Sr/Al ratios

    NASA Astrophysics Data System (ADS)

    Togashi, Shigeko; Kita, Noriko T.; Tomiya, Akihiko; Morishita, Yuichi

    2017-08-01

    The compositions of host magmas of ferroan anorthosites (FAN-host magmas) were estimated from secondary ion mass spectrometry analyses of plagioclase in lunar highland rocks. The evolution of the magmas was investigated by considering phase relations based on the MELTS algorithm and by re-examining partition coefficients for trace elements between plagioclase and melts. Data little affected by post-magmatic processes were selected by using plagioclase with relatively primitive Sc and Co contents. The FAN-host magma contained 90-174 ppm Sr, 40-119 ppm Ba and 0.5-1.3% TiO2, and had sub-chondritic Sr/Ba and Ti/Ba ratios. It is difficult to account for the formation of FAN-host magma on the basis of magma evolution processes of previously proposed bulk silicate Moon models with chondritic ratios for refractory elements at global scale. Therefore, the source of the FAN-host magma must have had primordial sub-chondritic Sr/Ba and Ti/Ba ratios. The FAN-host magmas were consistent in refractory elements with the estimated host mafic magma for feldspathic crust based on lunar meteorites, and some very-low-Ti mare rocks from lunar meteorites. Here, we propose an alternative bulk silicate Moon model (the cBSM model), which is enriched in crustal components of proto-bodies relative to the present whole Earth-Moon system.

  3. Smoking, physical exercise, BMI and late foetal death: a study within the Danish National Birth Cohort.

    PubMed

    Morales-Suárez-Varela, Maria; Nohr, Ellen A; Bech, Bodil H; Wu, Chunsen; Olsen, Jørn

    2016-10-01

    The aim of this paper was to estimate the effect of maternal and paternal smoking on foetal death (miscarriage and stillbirth) and to estimate potential interactions with physical exercise and pre-pregnancy body mass index. We selected 87,930 pregnancies from the population-based Danish National Birth Cohort. Information about lifestyle, occupational, medical and obstetric factors was obtained from a telephone interview, and data on pregnancy outcomes came from the Danish population-based registries. Cox regression was used to estimate the hazard ratios (adjusted for potential confounders) for predominantly late foetal death (miscarriage and stillbirth). An interaction contrast ratio was used to assess potential effect measure modification of smoking by physical exercise and body mass index. The adjusted hazard ratio of foetal death was 1.22 (95% CI 1.02-1.46) for couples where both parents smoked compared to non-smoking parents (miscarriage: 1.18, 95% CI 0.96-1.44; stillbirth: 1.32, 95% CI 0.93-1.89). On the additive scale, we detected a small positive interaction for stillbirth between smoking and body mass index (overweight women). In conclusion, smoking during pregnancy was associated with a slightly higher hazard ratio for foetal death if both parents smoked. This study suggests that smoking may increase the negative effect of a high BMI on foetal death, but results were not statistically significant for the interaction between smoking and physical exercise.
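
    The additive-scale interaction measure mentioned above (the interaction contrast ratio, often called RERI when computed from relative measures) has a simple closed form; the sketch below takes the three joint-exposure hazard ratios as inputs and is a generic formulation rather than the exact model specification of the study.

        def interaction_contrast_ratio(hr_both, hr_smoking_only, hr_high_bmi_only):
            # ICR = HR11 - HR10 - HR01 + 1; values > 0 suggest super-additive interaction
            return hr_both - hr_smoking_only - hr_high_bmi_only + 1.0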

  4. Celiac Disease and Anorexia Nervosa: A Nationwide Study.

    PubMed

    Mårild, Karl; Størdal, Ketil; Bulik, Cynthia M; Rewers, Marian; Ekbom, Anders; Liu, Edwin; Ludvigsson, Jonas F

    2017-05-01

    Previous research suggests an association of celiac disease (CD) with anorexia nervosa (AN), but data are mostly limited to case reports. We aimed to determine whether CD is associated with the diagnosis of AN. Register-based cohort and case-control study including women with CD (n = 17 959) and sex- and age-matched population-based controls (n = 89 379). CD (villous atrophy) was identified through the histopathology records of Sweden's 28 pathology departments. Inpatient and hospital-based outpatient records were used to identify AN. Hazard ratios for incident AN diagnosis were estimated by using stratified Cox regression with CD diagnosis as a time-dependent exposure variable. In the secondary analyses, we used conditional logistic regression to estimate odds ratios for being diagnosed with AN before CD. Median age of CD diagnosis was 28 years. During 1 174 401 person-years of follow-up, 54 patients with CD were diagnosed with AN (27/100 000 person-years) compared with 180 matched controls (18/100 000 person-years). The hazard ratio for later AN was 1.46 (95% confidence interval [CI], 1.08-1.98) and 1.31 beyond the first year after CD diagnosis (95% CI, 0.95-1.81). A previous AN diagnosis was also associated with CD (odds ratio, 2.18; 95% CI, 1.45-3.29). Estimates remained largely unchanged when adjusted for socioeconomic characteristics and type 1 diabetes. The bidirectional association between AN diagnosis and CD warrants attention in the initial assessment and follow-up of these conditions because underdiagnosis and misdiagnosis of these disorders likely cause protracted and unnecessary morbidity. Copyright © 2017 by the American Academy of Pediatrics.

  5. Jackknife Estimation of Sampling Variance of Ratio Estimators in Complex Samples: Bias and the Coefficient of Variation. Research Report. ETS RR-06-19

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    A multitude of methods has been proposed to estimate the sampling variance of ratio estimates in complex samples (Wolter, 1985). Hansen and Tepping (1985) studied some of those variance estimators and found that a high coefficient of variation (CV) of the denominator of a ratio estimate is indicative of a biased estimate of the standard error of a…
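
    A delete-one jackknife variance estimate for a simple ratio estimator, together with the coefficient of variation of the denominator that the abstract flags as a bias indicator, can be sketched as follows; real complex-sample applications would delete whole primary sampling units rather than single observations.

        import numpy as np

        def jackknife_ratio_variance(y, x):
            # ratio estimate R = sum(y) / sum(x) and its delete-one jackknife variance
            y, x = np.asarray(y, float), np.asarray(x, float)
            n = len(y)
            r_full = y.sum() / x.sum()
            r_loo = np.array([(y.sum() - y[i]) / (x.sum() - x[i]) for i in range(n)])
            var_jack = (n - 1) / n * np.sum((r_loo - r_loo.mean()) ** 2)
            return r_full, var_jack

        def denominator_cv(x):
            # CV of the estimated denominator; large values signal a possibly biased
            # standard-error estimate for the ratio
            x = np.asarray(x, float)
            return (x.std(ddof=1) / np.sqrt(len(x))) / x.mean()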

  6. Probabilistic quantitative microbial risk assessment model of norovirus from wastewater irrigated vegetables in Ghana using genome copies and fecal indicator ratio conversion for estimating exposure dose.

    PubMed

    Owusu-Ansah, Emmanuel de-Graft Johnson; Sampson, Angelina; Amponsah, Samuel K; Abaidoo, Robert C; Dalsgaard, Anders; Hald, Tine

    2017-12-01

    The need to replace the commonly applied fecal indicator conversion ratio (an assumption of 1:10⁻⁵ virus to fecal indicator organism) in Quantitative Microbial Risk Assessment (QMRA) with models based on quantitative data on the virus of interest has gained prominence due to the different physical and environmental factors that might influence the reliability of using indicator organisms in microbial risk assessment. The challenges facing analytical studies on virus enumeration (genome copies or particles) have contributed to the already existing lack of data in QMRA modelling. This study attempts to fit a QMRA model to genome copies of norovirus data. The model estimates the risk of norovirus infection from the intake of vegetables irrigated with wastewater from different sources. The results were compared to the results of a corresponding model using the fecal indicator conversion ratio to estimate the norovirus count. In all scenarios using different water sources, the application of the fecal indicator conversion ratio underestimated the norovirus disease burden, measured by the Disability Adjusted Life Years (DALYs), when compared to results using the genome copies norovirus data. In some cases the difference was >2 orders of magnitude. All scenarios using genome copies met the 10⁻⁴ DALY per person per year threshold for consumption of vegetables irrigated with wastewater, although these results are considered to be highly conservative risk estimates. The fecal indicator conversion ratio model of stream-water and drain-water sources of wastewater achieved the 10⁻⁶ DALY per person per year threshold, which tends to indicate an underestimation of health risk when compared to using genome copies for estimating the dose. Copyright © 2017 Elsevier B.V. All rights reserved.
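
    The dose-response step of such a QMRA can be sketched with an approximate beta-Poisson model; the parameter values and doses below are placeholders used only to show how a genome-copy-based dose and an indicator-ratio-based dose lead to different risk estimates, and they are not the values from the study.

        def beta_poisson_risk(dose, alpha=0.04, beta=0.055):
            # P(infection) = 1 - (1 + dose/beta)^(-alpha); hypothetical parameters
            return 1.0 - (1.0 + dose / beta) ** (-alpha)

        # illustrative doses per serving: measured genome copies vs. a dose back-calculated
        # from a fixed 1:10^-5 virus-to-indicator conversion
        risk_genome = beta_poisson_risk(10.0)
        risk_indicator = beta_poisson_risk(0.1)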

  7. CALIPSO-Inferred Aerosol Direct Radiative Effects: Bias Estimates Using Ground-Based Raman Lidars

    NASA Technical Reports Server (NTRS)

    Thorsen, Tyler; Fu, Qiang

    2015-01-01

    Observational constraints on the change in the radiative energy budget caused by the presence of aerosols, i.e. the aerosol direct radiative effect (DRE), have recently been made using observations from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite (CALIPSO). CALIPSO observations have the potential to provide improved global estimates of aerosol DRE compared to passive sensor-derived estimates due to CALIPSO's ability to perform vertically-resolved aerosol retrievals over all surface types and over cloud. In this study we estimate the uncertainties in CALIPSO-inferred aerosol DRE using multiple years of observations from the Atmospheric Radiation Measurement (ARM) program's Raman lidars (RL) at mid-latitude and tropical sites. Examined are assumptions about the ratio of extinction-to-backscatter (i.e. the lidar ratio) made by the CALIPSO retrievals, which are needed to retrieve the aerosol extinction profile. The lidar ratio is shown to introduce minimal error in the mean aerosol DRE at the top-of-atmosphere and surface. It is also shown that CALIPSO is unable to detect all radiatively-significant aerosol, resulting in an underestimate in the magnitude of the aerosol DRE by 30-50%. Therefore, global estimates of the aerosol DRE inferred from CALIPSO observations are likely too weak.

  8. Radon Measurements of Atmospheric Mixing (RAMIX) 2006–2014 Final Campaign Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, ML; Biraud, SC

    2015-05-01

    Uncertainty in vertical mixing between the surface layer, boundary layer, and free troposphere leads to large uncertainty in “top-down” estimates of regional land-atmosphere carbon exchange (i.e., estimates based on measurements of atmospheric CO2 mixing ratios). Radon-222 (222Rn) is a valuable tracer for measuring atmospheric mixing because it is emitted from the land surface and has a short enough half-life (3.8 days) to allow characterization of mixing processes based on vertical profile measurements.

  9. Radon Measurements of Atmospheric Mixing (RAMIX) 2006–2014 Final Campaign Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, ML; Biraud, SC; Hirsch, A

    2015-05-01

    Uncertainty in vertical mixing between the surface layer, boundary layer, and free troposphere leads to large uncertainty in “top-down” estimates of regional land-atmosphere carbon exchange (i.e., estimates based on measurements of atmospheric CO2 mixing ratios). The radioisotope radon-222 (222Rn) is a valuable tracer for measuring atmospheric mixing because it is emitted from the land surface and has a short enough half-life (3.8 days) to allow characterization of mixing processes based on vertical profile measurements.

  10. Nearest neighbor density ratio estimation for large-scale applications in astronomy

    NASA Astrophysics Data System (ADS)

    Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.

    2015-09-01

    In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
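
    The core idea of a nearest neighbor density ratio estimator is that the unknown ball-volume constant cancels when two k-NN density estimates are divided; the sketch below shows that idea in plain NumPy and is a simplified illustration, not the specific estimator or model-selection procedure of the paper.

        import numpy as np

        def knn_density_ratio(x_eval, x_train, x_test, k=5):
            # p_hat(x) = k / (n * c_d * r_k(x)^d), so
            # w(x) = p_test(x) / p_train(x) = (n_train / n_test) * (r_train / r_test)^d
            d = x_train.shape[1]

            def kth_distance(points, reference, kk):
                dist = np.sqrt(((points[:, None, :] - reference[None, :, :]) ** 2).sum(-1))
                return np.sort(dist, axis=1)[:, kk - 1]

            r_train = kth_distance(x_eval, x_train, k + 1)  # k+1 skips the point itself
            r_test = kth_distance(x_eval, x_test, k)        # when x_eval is the training set
            return (len(x_train) / len(x_test)) * (r_train / r_test) ** d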

  11. Adapting Surface Ground Motion Relations to Underground conditions: A case study for the Sudbury Neutrino Observatory in Sudbury, Ontario, Canada

    NASA Astrophysics Data System (ADS)

    Babaie Mahani, A.; Eaton, D. W.

    2013-12-01

    Ground Motion Prediction Equations (GMPEs) are widely used in Probabilistic Seismic Hazard Assessment (PSHA) to estimate ground-motion amplitudes at Earth's surface as a function of magnitude and distance. Certain applications, such as hazard assessment for caprock integrity in the case of underground storage of CO2, waste disposal sites, and underground pipelines, require subsurface estimates of ground motion; at present, such estimates depend upon theoretical modeling and simulations. The objective of this study is to derive correction factors for GMPEs to enable estimation of amplitudes in the subsurface. We use a semi-analytic approach along with finite-difference simulations of ground-motion amplitudes for surface and underground motions. Spectral ratios of underground to surface motions are used to calculate the correction factors. Two predictive methods are used. The first is a semi-analytic approach based on a quarter-wavelength method that is widely used for earthquake site-response investigations; the second is a numerical approach based on elastic finite-difference simulations of wave propagation. Both methods are evaluated using recordings of regional earthquakes by broadband seismometers installed at the surface and at depths of 1400 m and 2100 m in the Sudbury Neutrino Observatory, Canada. Overall, both methods provide a reasonable fit to the peaks and troughs observed in the ratios of real data. The finite-difference method, however, has the capability to simulate ground motion ratios more accurately than the semi-analytic approach.
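
    The observed part of the comparison, the spectral ratio of underground to surface recordings, can be computed with a standard Welch estimate; a minimal sketch, assuming co-registered records with a common sampling rate:

        import numpy as np
        from scipy.signal import welch

        def underground_to_surface_ratio(underground, surface, fs, nperseg=4096):
            # amplitude spectral ratio from Welch power spectral densities
            f, p_under = welch(underground, fs=fs, nperseg=nperseg)
            _, p_surf = welch(surface, fs=fs, nperseg=nperseg)
            return f, np.sqrt(p_under / p_surf)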

  12. Modeling and E-M estimation of haplotype-specific relative risks from genotype data for a case-control study of unrelated individuals.

    PubMed

    Stram, Daniel O; Leigh Pearce, Celeste; Bretsky, Phillip; Freedman, Matthew; Hirschhorn, Joel N; Altshuler, David; Kolonel, Laurence N; Henderson, Brian E; Thomas, Duncan C

    2003-01-01

    The US National Cancer Institute has recently sponsored the formation of a Cohort Consortium (http://2002.cancer.gov/scpgenes.htm) to facilitate the pooling of data on very large numbers of people, concerning the effects of genes and environment on cancer incidence. One likely goal of these efforts will be to generate a large population-based case-control series for which a number of candidate genes will be investigated using SNP haplotype as well as genotype analysis. The goal of this paper is to outline the issues involved in choosing a method for obtaining haplotype-specific risk estimates for such data that is technically appropriate and yet attractive to epidemiologists who are already comfortable with odds ratios and logistic regression. Our interest is to develop and evaluate extensions of methods, based on haplotype imputation, that have been recently described (Schaid et al., Am J Hum Genet, 2002, and Zaykin et al., Hum Hered, 2002) as providing score tests of the null hypothesis of no effect of SNP haplotypes upon risk, which may be used for more complex tasks, such as providing confidence intervals, and tests of equivalence of haplotype-specific risks in two or more separate populations. In order to do so we (1) develop a cohort approach towards odds ratio analysis by expanding the E-M algorithm to provide maximum likelihood estimates of haplotype-specific odds ratios as well as genotype frequencies; (2) show how to correct the cohort approach, to give essentially unbiased estimates for population-based or nested case-control studies by incorporating the probability of selection as a case or control into the likelihood, based on a simplified model of case and control selection, and (3) finally, in an example data set (CYP17 and breast cancer, from the Multiethnic Cohort Study) we compare likelihood-based confidence interval estimates from the two methods with each other, and with the use of the single-imputation approach of Zaykin et al. applied under both null and alternative hypotheses. We conclude that so long as haplotypes are well predicted by SNP genotypes (we use the Rh2 criteria of Stram et al. [1]) the differences between the three methods are very small and in particular that the single imputation method may be expected to work extremely well. Copyright 2003 S. Karger AG, Basel
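
    The E-M core of the approach, estimating haplotype frequencies when phase is ambiguous, can be shown for the simplest two-SNP case, where only double heterozygotes are ambiguous; this sketch covers the frequency-estimation step only, not the extension to odds ratios or case-control sampling described above.

        from collections import Counter

        def em_haplotype_freqs(genotypes, n_iter=200):
            # genotypes: list of (g1, g2), each the count (0/1/2) of allele '1' at a SNP
            n = len(genotypes)
            n_dh = sum(1 for g in genotypes if g == (1, 1))   # double heterozygotes
            base = Counter()
            for g1, g2 in genotypes:                          # phase-unambiguous individuals
                if (g1, g2) == (1, 1):
                    continue
                a1, a2 = (0 if g1 == 0 else 1), (0 if g2 == 0 else 1)
                if g1 == 1:
                    base[(0, a2)] += 1; base[(1, a2)] += 1
                elif g2 == 1:
                    base[(a1, 0)] += 1; base[(a1, 1)] += 1
                else:
                    base[(a1, a2)] += 2
            freqs = {h: 0.25 for h in [(0, 0), (0, 1), (1, 0), (1, 1)]}
            for _ in range(n_iter):
                p_cis = freqs[(0, 0)] * freqs[(1, 1)]         # E-step: split double hets
                p_trans = freqs[(0, 1)] * freqs[(1, 0)]
                w = 0.5 if p_cis + p_trans == 0 else p_cis / (p_cis + p_trans)
                counts = Counter(base)
                counts[(0, 0)] += n_dh * w; counts[(1, 1)] += n_dh * w
                counts[(0, 1)] += n_dh * (1 - w); counts[(1, 0)] += n_dh * (1 - w)
                freqs = {h: counts[h] / (2 * n) for h in freqs}   # M-step
            return freqs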

  13. Age ratios as estimators of productivity: testing assumptions on a threatened seabird, the marbled murrelet (Brachyramphus marmoratus)

    Treesearch

    M. Zachariah Peery; Benjamin H. Becker; Steven R. Beissinger

    2007-01-01

    The ratio of hatch-year (HY) to after-hatch-year (AHY) individuals (HY:AHY ratio) can be a valuable metric for estimating avian productivity because it does not require monitoring individual breeding sites and can often be estimated across large geographic and temporal scales. However, rigorous estimation of age ratios requires that both young and adult age classes are...

  14. Tissue Viscoelasticity Imaging Using Vibration and Ultrasound Coupler Gel

    NASA Astrophysics Data System (ADS)

    Yamakawa, Makoto; Shiina, Tsuyoshi

    2012-07-01

    In tissue diagnosis, both elasticity and viscosity are important indexes. Therefore, we propose a method for evaluating tissue viscoelasticity by applying vibration that is usually performed in elastography and using an ultrasound coupler gel with known viscoelasticity. In this method, we use three viscoelasticity parameters based on the coupler strain and tissue strain: the strain ratio as an elasticity parameter, and the phase difference and the normalized hysteresis loop area as viscosity parameters. In the agar phantom experiment, using these viscoelasticity parameters, we were able to estimate the viscoelasticity distribution of the phantom. In particular, the strain ratio and the phase difference were robust to strain estimation error.

  15. Calibration and Limitations of the Mg II Line-based Black Hole Masses

    NASA Astrophysics Data System (ADS)

    Woo, Jong-Hak; Le, Huynh Anh N.; Karouzos, Marios; Park, Dawoo; Park, Daeseong; Malkan, Matthew A.; Treu, Tommaso; Bennert, Vardha N.

    2018-06-01

    We present single-epoch black hole mass (M_BH) calibrations based on the rest-frame ultraviolet (UV) and optical measurements of Mg II 2798 Å and Hβ 4861 Å lines and the active galactic nucleus (AGN) continuum, using a sample of 52 moderate-luminosity AGNs at z ∼ 0.4 and z ∼ 0.6 with high-quality Keck spectra. We combine this sample with a large number of luminous AGNs from the Sloan Digital Sky Survey to increase the dynamic range for a better comparison of UV and optical velocity and luminosity measurements. With respect to the reference M_BH based on the line dispersion of Hβ and continuum luminosity at 5100 Å, we calibrate the UV and optical mass estimators by determining the best-fit values of the coefficients in the mass equation. By investigating whether the UV estimators show a systematic trend with Eddington ratio, FWHM of Hβ, Fe II strength, or UV/optical slope, we find no significant bias except for the slope. By fitting the systematic difference of Mg II-based and Hβ-based masses with the L_3000/L_5100 ratio, we provide a correction term as a function of the spectral index as ΔC = 0.24 (1 + α_λ) + 0.17, which can be added to the Mg II-based mass estimators if the spectral slope can be well determined. The derived UV mass estimators typically show ≳0.2 dex intrinsic scatter with respect to the Hβ-based M_BH, suggesting that the UV-based mass has an additional uncertainty of ∼0.2 dex, even if high-quality rest-frame UV spectra are available.
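
    The slope correction quoted above is simple to apply once the UV/optical spectral index is measured; the sketch below just evaluates ΔC from the abstract's formula and adds it to a Mg II-based log-mass, with the example slope and mass values being hypothetical.

        def mgii_slope_correction(alpha_lambda):
            # Delta C = 0.24 * (1 + alpha_lambda) + 0.17, in dex
            return 0.24 * (1.0 + alpha_lambda) + 0.17

        # hypothetical example: log(M_BH/M_sun) = 8.0 from Mg II, measured slope -1.5
        log_mbh_corrected = 8.0 + mgii_slope_correction(-1.5)   # 8.0 + 0.05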

  16. Practical Implementation of Multiple Model Adaptive Estimation Using Neyman-Pearson Based Hypothesis Testing and Spectral Estimation Tools

    DTIC Science & Technology

    1996-09-01

    Generalized Likelihood Ratio (GLR) and voting techniques. The third class consisted of multiple hypothesis filter detectors, specifically the MMAE. The...vector version, versus a tensor if we use the matrix version of the power spectral density estimate. Using this notation, we will derive an...as MATLAB, have an intrinsic sample covariance computation available, which makes this method quite easy to implement. In practice, the mean for the

  17. Early postnatal myelin content estimate of white matter via T1w/T2w ratio

    NASA Astrophysics Data System (ADS)

    Lee, Kevin; Cherel, Marie; Budin, Francois; Gilmore, John; Zaldarriaga Consing, Kirsten; Rasmussen, Jerod; Wadhwa, Pathik D.; Entringer, Sonja; Glasser, Matthew F.; Van Essen, David C.; Buss, Claudia; Styner, Martin

    2015-03-01

    To develop and evaluate a novel processing framework for the relative quantification of myelin content in cerebral white matter (WM) regions from brain MRI data via a computed ratio of T1-weighted to T2-weighted intensity values. We employed high resolution (1 mm³ isotropic) T1- and T2-weighted MRI from 46 (28 male, 18 female) neonate subjects (typically developing controls) scanned on a Siemens Tim Trio 3T at UC Irvine. We developed a novel, yet relatively straightforward image processing framework for WM myelin content estimation based on earlier work by Glasser et al. We first co-register the structural MRI data to correct for motion. Then, background areas are masked out via a jointly computed T1w and T2w foreground mask. Raw T1w/T2w-ratio images are computed next. For the purpose of calibration across subjects, we first coarsely segment the fat-rich facial regions via an atlas co-registration. Linear intensity rescaling based on median T1w/T2w-ratio values in those facial regions yields calibrated T1w/T2w-ratio images. Mean values in lobar regions are evaluated using standard statistical analysis to investigate their interaction with age at scan. Several lobes have strongly positive significant interactions of age at scan with the computed T1w/T2w-ratio. Most regions do not show sex effects. A few regions, such as the cingulate and CC areas, show no measurable change in myelin content within the first few weeks of postnatal development, which we attribute to sample size and measurement variability. We developed and evaluated a novel way to estimate white matter myelin content for use in studies of brain white matter development.
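
    The calibration step can be sketched as a per-subject rescaling of the raw ratio image by the median value in the facial reference region; the target value below and the single-point (multiplicative) rescaling are assumptions, since the exact linear mapping used in the pipeline is not spelled out in the abstract.

        import numpy as np

        def calibrated_t1w_t2w_ratio(t1w, t2w, foreground_mask, face_mask,
                                     target_face_median=1.0, eps=1e-6):
            # raw ratio inside the joint foreground mask
            ratio = np.zeros_like(t1w, dtype=float)
            fg = foreground_mask & (t2w > eps)
            ratio[fg] = t1w[fg] / t2w[fg]
            # rescale so the fat-rich facial region maps to a common reference value
            face_median = np.median(ratio[face_mask & fg])
            return ratio * (target_face_median / face_median)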

  18. Software for the grouped optimal aggregation technique

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

    The grouped optimal aggregation technique produces minimum-variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.

  19. Estimating numbers of greater prairie-chickens using mark-resight techniques

    USGS Publications Warehouse

    Clifton, A.M.; Krementz, D.G.

    2006-01-01

    Current monitoring efforts for greater prairie-chicken (Tympanuchus cupido pinnatus) populations indicate that populations are declining across their range. Monitoring the population status of greater prairie-chickens is based on traditional lek surveys (TLS) that provide an index without considering detectability. Estimators, such as the immigration-emigration joint maximum-likelihood estimator from a hypergeometric distribution (IEJHE), can account for detectability and provide reliable population estimates based on resightings. We evaluated the use of mark-resight methods using radiotelemetry to estimate population size and density of greater prairie-chickens on 2 sites at a tallgrass prairie in the Flint Hills of Kansas, USA. We used average distances traveled from lek of capture to estimate density. Population estimates and confidence intervals at the 2 sites were 54 (CI 50-59) on 52.9 km2 and 87 (CI 82-94) on 73.6 km2. The TLS performed at the same sites resulted in population ranges of 7-34 and 36-63 and always produced a lower population index than the mark-resight population estimate with a larger range. Mark-resight simulations with varying male:female ratios of marks indicated that this ratio was important in designing a population study on prairie-chickens. Confidence intervals for estimates when no marks were placed on females at the 2 sites (CI 46-50, 76-84) did not overlap confidence intervals when 40% of marks were placed on females (CI 54-64, 91-109). Population estimates derived using this mark-resight technique were apparently more accurate than traditional methods and would be more effective in detecting changes in prairie-chicken populations. Our technique could improve prairie-chicken management by providing wildlife biologists and land managers with a tool to estimate the population size and trends of lekking bird species, such as greater prairie-chickens.

  20. Novel Oversampling Technique for Improving Signal-to-Quantization Noise Ratio on Accelerometer-Based Smart Jerk Sensors in CNC Applications.

    PubMed

    Rangel-Magdaleno, Jose J; Romero-Troncoso, Rene J; Osornio-Rios, Roque A; Cabal-Yepez, Eduardo

    2009-01-01

    Monitoring of jerk, defined as the first derivative of acceleration, has become a major issue in computerized numeric controlled (CNC) machines. Several works highlight the necessity of measuring jerk in a reliable way for improving production processes. Nowadays, the computation of jerk is done by finite differences of the acceleration signal, computed at the Nyquist rate, which leads to a low signal-to-quantization noise ratio (SQNR) during the estimation. The novelty of this work is the development of a smart sensor for jerk monitoring from a standard accelerometer, which has improved SQNR. The proposal is based on oversampling techniques that give a better estimation of jerk than that produced by a Nyquist-rate differentiator. Simulations and experimental results are presented to show the overall methodology performance.
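
    A rough sketch of the underlying idea, under the assumption that the accelerometer stream is sampled well above the Nyquist rate: averaging blocks of oversampled acceleration samples before differencing reduces quantization noise relative to differencing at the Nyquist rate. The oversampling factor, quantization step, and test signal below are illustrative, not the authors' hardware implementation.

```python
import numpy as np

def jerk_nyquist(accel, dt):
    """Jerk by finite differences of acceleration sampled at the Nyquist rate."""
    return np.diff(accel) / dt

def jerk_oversampled(accel_os, dt, osr):
    """Average each block of `osr` oversampled samples (reducing quantization
    noise roughly by sqrt(osr)), then difference at the decimated rate."""
    decimated = accel_os[: len(accel_os) // osr * osr].reshape(-1, osr).mean(axis=1)
    return np.diff(decimated) / dt

# Toy comparison: coarsely quantized sinusoidal acceleration
osr, dt = 16, 1e-3                          # oversampling ratio, decimated period
t_os = np.arange(0, 1, dt / osr)
accel_true = np.sin(2 * np.pi * 5 * t_os)
q = 0.05                                    # coarse quantization step
accel_q = np.round(accel_true / q) * q
j_nyq = jerk_nyquist(accel_q[::osr], dt)    # Nyquist-rate differentiator
j_os = jerk_oversampled(accel_q, dt, osr)   # oversampled estimate
print(np.std(j_nyq), np.std(j_os))          # oversampled estimate is smoother
```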

  1. Chapter 37: Population Trends of the Marbled Murrelet Projected From Demographic Analyses

    Treesearch

    Steven B. Beissinger

    1995-01-01

    A demographic model of the Marbled Murrelet is developed to explore likely population trends and factors influencing them. The model was structured to use field data on juvenile ratios, collected near the end of the breeding season and corrected for date of census, to estimate fecundity. Survivorship was estimated for the murrelet based on comparative analyses of...

  2. A review of satellite-based methods of estimating live fuel moisture content for fire danger assessment: moving towards operational products

    USDA-ARS?s Scientific Manuscript database

    One of the primary variables affecting ignition and spread of wildfire is fuel moisture content (FMC), which is the ratio of water mass to dry mass in living and dead plant material. Because dead FMC may be estimated from available weather data, remote sensing is needed to monitor the spatial distr...

  3. Autoregressive linear least square single scanning electron microscope image signal-to-noise ratio estimation.

    PubMed

    Sim, Kok Swee; NorHisham, Syafiq

    2016-11-01

    A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique on SNR estimation, a number of SEM images are initially corrupted with white noise. The autocorrelation function (ACF) of the original and the corrupted SEM images are formed to serve as the reference point to estimate the SNR value of the corrupted image. The LSR technique is then compared with the three previous existing techniques known as nearest neighbourhood, first-order interpolation, and the combination of both nearest neighbourhood and first-order interpolation. The actual and the estimated SNR values of all these techniques are then calculated for comparison purposes. It is shown that the LSR technique is able to attain the highest accuracy compared to the other three existing techniques as the absolute difference between the actual and the estimated SNR value is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
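
    A simplified 1-D sketch of the general autocorrelation-based SNR idea the abstract refers to: for additive white noise, the zero-lag ACF value contains both signal and noise power, and the noise-free value at lag 0 can be extrapolated with a linear least-squares fit through nearby lags. The lag range, 1-D handling, and test signal are assumptions and differ in detail from the published 2-D SEM method.

```python
import numpy as np

def estimate_snr_acf(signal_noisy, fit_lags=range(1, 6)):
    """Estimate SNR by extrapolating the autocorrelation to lag 0 (1-D sketch).

    The ACF at lag 0 equals signal power + noise power for additive white
    noise; a linear least-squares fit over small nonzero lags extrapolates
    the noise-free value at lag 0.
    """
    x = signal_noisy - signal_noisy.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    lags = np.array(list(fit_lags), dtype=float)
    slope, intercept = np.polyfit(lags, acf[list(fit_lags)], 1)
    signal_power = intercept                 # extrapolated noise-free ACF(0)
    noise_power = acf[0] - signal_power      # white-noise spike at lag 0
    return signal_power / noise_power

# Toy check: sinusoid plus white noise with known SNR
rng = np.random.default_rng(1)
n = 5000
s = np.sin(2 * np.pi * 0.01 * np.arange(n))
noise = rng.normal(scale=0.5, size=n)
print(estimate_snr_acf(s + noise))           # should be near 0.5 / 0.25 = 2
```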

  4. VizieR Online Data Catalog: TGAS MS & giants reddening and extinction (Gontcharov+, 2018)

    NASA Astrophysics Data System (ADS)

    Gontcharov, G. A.; Mosenkov, A. V.

    2018-01-01

    These are the reddening, interstellar extinction and extinction-to-reddening ratio estimates for the Gaia DR1 TGAS and Hipparcos stars within 415 pc from the Sun based on the 3D reddening map of Gontcharov (J/PAZh/43/521) and 3D extinction-to-reddening (total-to-selective extinction) ratio Rv map of Gontcharov (J/PAZh/38/15). (2 data files).

  5. Cost-benefit analysis simulation of a hospital-based violence intervention program.

    PubMed

    Purtle, Jonathan; Rich, Linda J; Bloom, Sandra L; Rich, John A; Corbin, Theodore J

    2015-02-01

    Violent injury is a major cause of disability, premature mortality, and health disparities worldwide. Hospital-based violence intervention programs (HVIPs) show promise in preventing violent injury. Little is known, however, about how the impact of HVIPs may translate into monetary figures. To conduct a cost-benefit analysis simulation to estimate the savings an HVIP might produce in healthcare, criminal justice, and lost productivity costs over 5 years in a hypothetical population of 180 violently injured patients, 90 of whom received HVIP intervention and 90 of whom did not. Primary data from 2012, analyzed in 2013, on annual HVIP costs/number of clients served and secondary data sources were used to estimate the cost, number, and type of violent reinjury incidents (fatal/nonfatal, resulting in hospitalization/not resulting in hospitalization) and violent perpetration incidents (aggravated assault/homicide) that this population might experience over 5 years. Four different models were constructed and three different estimates of HVIP effect size (20%, 25%, and 30%) were used to calculate a range of estimates for HVIP net savings and cost-benefit ratios from different payer perspectives. All benefits were discounted at 5% to adjust for their net present value. Estimates of HVIP cost savings at the base effect estimate of 25% ranged from $82,765 (narrowest model) to $4,055,873 (broadest model). HVIPs are likely to produce cost savings. This study provides a systematic framework for the economic evaluation of HVIPs and estimates of HVIP cost savings and cost-benefit ratios that may be useful in informing public policy decisions. Copyright © 2015 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  6. Estimation of Comfort/Discomfort Based on EEG in Massage by Use of Clustering According to Correlation and Incremental Learning Type NN

    NASA Astrophysics Data System (ADS)

    Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira

    The authors previously proposed an estimation method combining the k-means algorithm and a neural network (NN) for evaluating massage. However, this method suffers from a reduced discrimination ratio for new users. There are two causes of this problem: the generalization ability of the NN is poor, and the clusters produced by the k-means algorithm do not have high within-class correlation coefficients. This research therefore proposes a k-means algorithm guided by the correlation coefficient, together with incremental learning for the NN. The proposed k-means algorithm includes an evaluation function based on the correlation coefficient. In incremental learning, the NN is retrained on new data from weights initialized using the existing data. The effectiveness of the proposed methods is verified by estimation results using EEG data recorded while subjects receive massage.
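
    The abstract does not give the exact evaluation function, but the core idea of clustering by correlation rather than Euclidean distance can be sketched as follows; the centroid update, convergence test, and toy data are assumptions, not the authors' algorithm.

```python
import numpy as np

def kmeans_by_correlation(X, k, n_iter=50, seed=0):
    """Toy k-means variant that assigns each sample to the centroid with which
    it has the highest Pearson correlation (sketch of the idea only)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # correlation of every sample with every centroid
        corr = np.array([[np.corrcoef(x, c)[0, 1] for c in centroids] for x in X])
        labels = corr.argmax(axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Toy EEG-like feature vectors from two loosely separated groups
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (20, 8)),
               rng.normal(0, 1, (20, 8)) + np.linspace(0, 3, 8)])
labels, _ = kmeans_by_correlation(X, k=2)
print(np.bincount(labels))
```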

  7. Computerized Classification Testing with the Rasch Model

    ERIC Educational Resources Information Center

    Eggen, Theo J. H. M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…
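
    A minimal sketch of an SPRT-based classification decision under the Rasch model, testing an ability θ0 below the cut score against θ1 above it; the error rates, item difficulties, and simulated responses are illustrative assumptions.

```python
import numpy as np

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def sprt_classify(responses, difficulties, theta0, theta1, alpha=0.05, beta=0.05):
    """Wald's SPRT on the log-likelihood ratio of theta1 versus theta0.

    Returns 'above', 'below', or 'undecided' after the available items.
    """
    upper = np.log((1 - beta) / alpha)     # accept theta1 (above cut score)
    lower = np.log(beta / (1 - alpha))     # accept theta0 (below cut score)
    llr = 0.0
    for u, b in zip(responses, difficulties):
        p1, p0 = rasch_p(theta1, b), rasch_p(theta0, b)
        llr += u * np.log(p1 / p0) + (1 - u) * np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "above"
        if llr <= lower:
            return "below"
    return "undecided"

# Simulate a test taker with true theta = 1.0 on items of mixed difficulty
rng = np.random.default_rng(3)
b = rng.uniform(-1, 1, 60)
u = (rng.uniform(size=60) < rasch_p(1.0, b)).astype(int)
print(sprt_classify(u, b, theta0=-0.5, theta1=0.5))
```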

  8. Anthropometric measures and absolute cardiovascular risk estimates in the Australian Diabetes, Obesity and Lifestyle (AusDiab) Study.

    PubMed

    Chen, Lei; Peeters, Anna; Magliano, Dianna J; Shaw, Jonathan E; Welborn, Timothy A; Wolfe, Rory; Zimmet, Paul Z; Tonkin, Andrew M

    2007-12-01

    Framingham risk functions are widely used for prediction of future cardiovascular disease events. They do not, however, include anthropometric measures of overweight or obesity, now considered a major cardiovascular disease risk factor. We aimed to establish the most appropriate anthropometric index and its optimal cutoff point for use as an ancillary measure in clinical practice when identifying people with increased absolute cardiovascular risk estimates. Analysis of a population-based, cross-sectional survey was carried out. The 1991 Framingham prediction equations were used to compute 5- and 10-year risks of cardiovascular or coronary heart disease in 7191 participants from the Australian Diabetes, Obesity and Lifestyle Study (1999-2000). Receiver operating characteristic curve analysis was used to compare measures of body mass index (BMI), waist circumference, and waist-to-hip ratio in identifying participants estimated to be at 'high', or at 'intermediate or high' absolute risk. After adjustment for BMI and age, waist-to-hip ratio showed stronger correlation with absolute risk estimates than waist circumference. The areas under the receiver operating characteristic curve for waist-to-hip ratio (0.67-0.70 in men, 0.64-0.74 in women) were greater than those for waist circumference (0.60-0.65, 0.59-0.71) or BMI (0.52-0.59, 0.53-0.66). The optimal cutoff points of BMI, waist circumference and waist-to-hip ratio to predict people at 'high', or at 'intermediate or high' absolute risk estimates were 26 kg/m2, 95 cm and 0.90 in men, and 25-26 kg/m2, 80-85 cm and 0.80 in women, respectively. Measurement of waist-to-hip ratio is more useful than BMI or waist circumference in the identification of individuals estimated to be at increased risk for future primary cardiovascular events.

  9. Fish tissue lipid-C:N relationships for correcting δ13C values and estimating lipid content in aquatic food-web studies.

    PubMed

    Hoffman, Joel C; Sierszen, Michael E; Cotter, Anne M

    2015-11-15

    Normalizing δ13C values of animal tissue for lipid content is necessary to accurately interpret food-web relationships from stable isotope analysis. To reduce the effort and expense associated with chemical extraction of lipids, various studies have tested arithmetic mass balance to mathematically normalize δ13C values for lipid content; however, the approach assumes that lipid content is related to the tissue C:N ratio. We evaluated two commonly used models for estimating tissue lipid content based on C:N ratio (a mass balance model and a stoichiometric model) by comparing model predictions to measured lipid content of white muscle tissue. We then determined the effect of lipid model choice on δ13C values normalized using arithmetic mass balance. To do so, we used a collection of fish from Lake Superior spanning a wide range in lipid content (5% to 73% lipid). We found that the lipid content was positively related to the bulk muscle tissue C:N ratio. The two different lipid models produced similar estimates of lipid content based on tissue C:N, within 6% for tissue C:N values <7. Normalizing δ13C values using an arithmetic mass-balance equation based on either model yielded similar results, with a small bias (<1‰) compared with results based on chemical extraction. Among-species consistency in the relationship between fish muscle tissue C:N ratio and lipid content supports the application of arithmetic mass balance to normalize δ13C values for lipid content. The uncertainty associated with both lipid extraction quality and choice of model parameters constrains the achievable precision of normalized δ13C values to about ±1.0‰. Published in 2015. This article is a U.S. Government work and is in the public domain in the U.S.A.
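
    The abstract does not reproduce the model equations, but the arithmetic mass-balance idea can be sketched as follows: the lipid fraction is first predicted from the C:N ratio (here with placeholder linear coefficients) and the bulk δ13C is then corrected using an assumed lipid versus lipid-free isotopic offset. All parameter values below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Placeholder parameters -- illustrative only, not the paper's fitted values
A, B = -0.10, 0.06          # assumed linear map: lipid fraction ~ A + B * (C:N)
DELTA_LIPID = 6.5           # assumed delta13C offset (per mil) of lipid vs lipid-free tissue

def lipid_fraction_from_cn(cn_ratio):
    """Predict tissue lipid fraction from the bulk C:N ratio (toy linear model)."""
    return np.clip(A + B * np.asarray(cn_ratio), 0.0, 1.0)

def normalize_d13c(d13c_bulk, cn_ratio):
    """Arithmetic mass-balance correction of bulk delta13C for lipid content:
    bulk = f_lipid * (lipid_free - DELTA_LIPID) + (1 - f_lipid) * lipid_free,
    solved for the lipid-free value."""
    f = lipid_fraction_from_cn(cn_ratio)
    return np.asarray(d13c_bulk) + f * DELTA_LIPID

# Example: a muscle sample with C:N = 5.0 and bulk delta13C = -24.0 per mil
print(normalize_d13c(-24.0, 5.0))
```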

  10. Correlation between Apgar score and urinary uric acid to creatinine ratio in perinatal asphyxia.

    PubMed

    Basu, Pallab; Som, Sabyasachi; Choudhuri, Nabendu; Das, Harendranath

    2008-10-01

    A randomized case-control hospital-based study was conducted over 12 months on 31 asphyxiated and 31 normal newborns to see whether urinary uric acid can be used as a marker of perinatal asphyxia and can be correlated with the clinical diagnosis by Apgar score. Uric acid and creatinine were estimated in spot urine within 24 hours after birth in both cases and controls. The ratio between concentrations of uric acid and creatinine was estimated and compared between cases and controls. The ratios were significantly higher in cases than controls (3.1 ± 1.3 vs 0.96 ± 0.54; P < 0.001), and among asphyxia patients, a significant negative linear correlation was found between the uric acid to creatinine ratio and the Apgar score (r = -0.857, P < 0.001). Thus, the urinary uric acid to creatinine ratio can be used as an additional non-invasive, easy, and early biochemical marker of birth asphyxia that biochemically supports the clinical diagnosis and severity grading of asphyxia by Apgar score.

  11. A method for estimating fall adult sex ratios from production and survival data

    USGS Publications Warehouse

    Wight, H.M.; Heath, R.G.; Geis, A.D.

    1965-01-01

    This paper presents a method of utilizing data relating to the production and survival of a bird population to estimate a basic fall adult sex ratio. This basic adult sex ratio is an average value derived from average production and survival rates. It is an estimate of the average sex ratio about which the fall adult ratios will fluctuate according to annual variations in production and survival. The basic fall adult sex ratio has been calculated as an asymptotic value which is the limit of an infinite series wherein average population characteristics are used as constants. Graphs are provided that allow the determination of basic sex ratios from production and survival data of a population. Where the respective asymptote has been determined, it may be possible to estimate various production and survival rates by use of variations of the formula for estimating the asymptote.

  12. Comparison of medicine availability measurements at health facilities: evidence from Service Provision Assessment surveys in five sub-Saharan African countries.

    PubMed

    Choi, Yoonjoung; Ametepi, Paul

    2013-07-09

    With growing emphasis on health systems strengthening in global health, various health facility assessment methods have been used increasingly to measure medicine and commodity availability. However, few studies have systematically compared estimates of availability based on different definitions. The objective of this study was to compare estimates of medicine availability based on different definitions. A secondary data analysis was conducted using data from the Service Provision Assessment (SPA)--a nationally representative sample survey of health facilities--conducted in five countries: Kenya SPA 2010, Namibia SPA 2009, Rwanda SPA 2007, Tanzania SPA 2006, and Uganda SPA 2007. For 32 medicines, the percentage of facilities having the medicine was estimated using five definitions: four for current availability and one for six-month period availability. 'Observed availability of at least one valid unit' was used as a reference definition, and ratios between the reference and each of the other four estimates were calculated. Summary statistics of the ratios among the 32 medicines were calculated by country. The ratios were compared further between public and non-public facilities within each country. Across five countries, compared to current observed availability of at least one valid unit, 'reported availability without observation' was on average 6% higher (ranging from 3% in Rwanda to 8% in Namibia), 'observed availability where all units were valid' was 11% lower (ranging from 2% in Tanzania to 19% in Uganda), and 'six-month period availability' was 14% lower (ranging from 5% in Namibia to 25% in Uganda). Medicine availability estimates vary substantially across definitions, and need to be interpreted with careful consideration of the methods used.

  13. Developments in seismic monitoring for risk reduction

    USGS Publications Warehouse

    Celebi, M.

    2007-01-01

    This paper presents recent state-of-the-art developments to obtain displacements and drift ratios for seismic monitoring and damage assessment of buildings. In most cases, decisions on safety of buildings following seismic events are based on visual inspections of the structures. Real-time instrumental measurements using GPS or double integration of accelerations, however, offer a viable alternative. Relevant parameters, such as the type of connections and structural characteristics (including storey geometry), can be estimated to compute drifts corresponding to several pre-selected threshold stages of damage. Drift ratios determined from real-time monitoring can then be compared to these thresholds in order to estimate damage conditions. This approach is demonstrated in three steel frame buildings in San Francisco, California. Recently recorded data of strong shaking from these buildings indicate that the monitoring system can be a useful tool in rapid assessment of buildings and other structures following an earthquake. Such systems can also be used for risk monitoring, as a method to assess performance-based design and analysis procedures, for long-term assessment of structural characteristics of a building, and as a possible long-term damage detection tool.
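
    As a concrete illustration of the drift-ratio quantity being monitored, a minimal sketch computing inter-storey drift ratios from per-floor displacement histories and mapping the peak value onto pre-selected damage thresholds; the threshold values, storey heights, and example displacements are placeholders, not values from the paper.

```python
import numpy as np

def drift_ratios(displacements, storey_heights):
    """Inter-storey drift ratios from per-floor displacement histories.

    displacements  : array of shape (n_floors, n_samples), e.g. from GPS or
                     double-integrated accelerations
    storey_heights : array of length n_floors - 1 (metres)
    """
    rel = np.diff(displacements, axis=0)               # relative floor motion
    return rel / np.asarray(storey_heights)[:, None]   # dimensionless ratio

def damage_state(peak_drift, thresholds=(0.002, 0.005, 0.015)):
    """Map a peak drift ratio to a pre-selected threshold stage (placeholders)."""
    return int(np.searchsorted(thresholds, peak_drift))

disp = np.array([[0.0, 0.01, 0.02],        # ground floor (metres)
                 [0.0, 0.03, 0.05],        # 1st floor
                 [0.0, 0.04, 0.08]])       # 2nd floor
dr = drift_ratios(disp, storey_heights=[3.0, 3.0])
print(dr, damage_state(np.abs(dr).max()))
```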

  14. Random walk-percolation-based modeling of two-phase flow in porous media: Breakthrough time and net to gross ratio estimation

    NASA Astrophysics Data System (ADS)

    Ganjeh-Ghazvini, Mostafa; Masihi, Mohsen; Ghaedi, Mojtaba

    2014-07-01

    Fluid flow modeling in porous media has many applications in waste treatment, hydrology and petroleum engineering. In any geological model, flow behavior is controlled by multiple properties. These properties must be known in advance of common flow simulations. When uncertainties are present, deterministic modeling often produces poor results. Percolation and Random Walk (RW) methods have recently been used in flow modeling. Their stochastic basis is useful in dealing with uncertainty problems. They are also useful in finding the relationship between porous media descriptions and flow behavior. This paper employs a simple methodology based on random walk and percolation techniques. The method is applied to a well-defined model reservoir in which the breakthrough time distributions are estimated. The results of this method and the conventional simulation are then compared. The effect of the net to gross ratio on the breakthrough time distribution is studied in terms of Shannon entropy. Use of the entropy plot allows one to assign the appropriate net to gross ratio to any porous medium.
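
    A toy sketch of the random-walk-on-percolation idea described above: walkers move only through "net" (permeable) cells of a random binary grid, and the distribution of step counts needed to cross the domain serves as a proxy for breakthrough time. The grid size, net-to-gross ratio, and walk rules are illustrative assumptions, not the authors' reservoir model.

```python
import numpy as np

def breakthrough_times(n=30, ntg=0.7, walkers=100, max_steps=20000, seed=0):
    """Random-walk breakthrough times on a binary net/non-net grid (toy sketch)."""
    rng = np.random.default_rng(seed)
    net = rng.random((n, n)) < ntg              # True = permeable (net) cell
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    times = []
    for _ in range(walkers):
        starts = np.flatnonzero(net[:, 0])      # permeable cells on the inlet column
        if starts.size == 0:
            continue
        i, j = rng.choice(starts), 0
        for step in range(1, max_steps + 1):
            di, dj = moves[rng.integers(4)]
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and net[ni, nj]:
                i, j = ni, nj                   # move only into permeable cells
            if j == n - 1:                      # reached the outlet column
                times.append(step)
                break
    return np.array(times)

t = breakthrough_times()
print(len(t), t.mean() if len(t) else None)     # count and mean breakthrough time
```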

  15. Maternal mortality in Vietnam in 1994-95.

    PubMed

    Hieu, D T; Hanenberg, R; Vach, T H; Vinh, D Q; Sokal, D

    1999-12-01

    This report presents the first population-based estimates of maternal mortality in Vietnam. All the deaths of women aged 15-49 in 1994-95 in three provinces of Vietnam were identified and classified by cause. Maternal mortality was the fifth most frequent cause of death. The maternal mortality ratio was 155 deaths per 100,000 live births. This ratio compares with the World Health Organization's estimates of 430 such deaths globally and 390 for Asia. The maternal mortality ratio in the delta regions of these provinces was half that of the mountainous and semimountainous regions. Because a larger proportion of the Vietnamese population live in delta regions than elsewhere, the maternal mortality ratio for Vietnam as a whole may be lower than that of the three provinces studied. Maternal mortality is low in Vietnam primarily because a relatively high proportion of deliveries take place in clinics and hospitals, where few women die in childbirth. Also, few women die of the consequences of induced abortion in Vietnam because the procedure is legal and easily available.

  16. The Average Hazard Ratio - A Good Effect Measure for Time-to-event Endpoints when the Proportional Hazard Assumption is Violated?

    PubMed

    Rauch, Geraldine; Brannath, Werner; Brückner, Matthias; Kieser, Meinhard

    2018-05-01

    In many clinical trial applications, the endpoint of interest corresponds to a time-to-event endpoint. In this case, group differences are usually expressed by the hazard ratio. Group differences are commonly assessed by the logrank test, which is optimal under the proportional hazard assumption. However, there are many situations in which this assumption is violated. Especially in applications where a full population and several subgroups or a composite time-to-first-event endpoint and several components are considered, the proportional hazard assumption usually does not simultaneously hold true for all test problems under investigation. As an alternative effect measure, Kalbfleisch and Prentice proposed the so-called 'average hazard ratio'. The average hazard ratio is based on a flexible weighting function to modify the influence of time and has a meaningful interpretation even in the case of non-proportional hazards. Despite this favorable property, it is hardly ever used in practice, whereas the standard hazard ratio is commonly reported in clinical trials regardless of whether the proportional hazard assumption holds true or not. There exist two main approaches to construct corresponding estimators and tests for the average hazard ratio, where the first relies on weighted Cox regression and the second on a simple plug-in estimator. The aim of this work is to give a systematic comparison of these two approaches and the standard logrank test for different time-to-event settings with proportional and non-proportional hazards and to illustrate the pros and cons in application. We conduct a systematic comparative study based on Monte Carlo simulations and a real clinical trial example. Our results suggest that the properties of the average hazard ratio depend on the underlying weighting function. The two approaches to construct estimators and related tests show very similar performance for adequately chosen weights. In general, the average hazard ratio defines a more valid effect measure than the standard hazard ratio under non-proportional hazards, and the corresponding tests provide a power advantage over the common logrank test. As non-proportional hazards are often met in clinical practice and the average hazard ratio tests often outperform the common logrank test, this approach should be used more routinely in applications. Schattauer GmbH.

  17. Method of remotely estimating a rest or best lock frequency of a local station receiver using telemetry

    NASA Technical Reports Server (NTRS)

    Fielhauer, Karl B. (Inventor); Jensen, James R. (Inventor)

    2007-01-01

    A system includes a remote station and a local station having a receiver. The receiver operates in an unlocked state corresponding to its best lock frequency (BLF). The local station derives data indicative of a ratio of the BLF to a reference frequency of the receiver, and telemeters the data to the remote station. The remote station estimates the BLF based on (i) the telemetered data, and (ii) a predetermined estimate of the reference frequency.

  18. Average glandular dose in paired digital mammography and digital breast tomosynthesis acquisitions in a population based screening program: effects of measuring breast density, air kerma and beam quality

    NASA Astrophysics Data System (ADS)

    Helge Østerås, Bjørn; Skaane, Per; Gullien, Randi; Catrine Trægde Martinsen, Anne

    2018-02-01

    The main purpose was to compare average glandular dose (AGD) for same-compression digital mammography (DM) and digital breast tomosynthesis (DBT) acquisitions in a population based screening program, with and without breast density stratification, as determined by automatically calculated breast density (Quantra™). Secondarily, to compare AGD estimates based on measured breast density, air kerma and half value layer (HVL) to DICOM metadata based estimates. AGD was estimated for 3819 women participating in the screening trial. All received craniocaudal and mediolateral oblique views of each breast with paired DM and DBT acquisitions. Exposure parameters were extracted from DICOM metadata. Air kerma and HVL were measured for all beam qualities used to acquire the mammograms. Volumetric breast density was estimated using Quantra™. AGD was estimated using the Dance model. AGD reported directly from the DICOM metadata was also assessed. Mean AGD was 1.74 and 2.10 mGy for DM and DBT, respectively. Mean DBT/DM AGD ratio was 1.24. For fatty breasts: mean AGD was 1.74 and 2.27 mGy for DM and DBT, respectively. For dense breasts: mean AGD was 1.73 and 1.79 mGy, for DM and DBT, respectively. For breasts of similar thickness, dense breasts had higher AGD for DM and similar AGD for DBT. The DBT/DM dose ratio was substantially lower for dense compared to fatty breasts (1.08 versus 1.33). The average c-factor was 1.16. Using previously published polynomials to estimate glandularity from thickness underestimated the c-factor by 5.9% on average. Mean AGD error between estimates based on measurements (air kerma and HVL) versus DICOM header data was 3.8%, but for one mammography unit as high as 7.9%. Mean error of using the AGD value reported in the DICOM header was 10.7 and 13.3%, respectively. Thus, measurement of breast density, radiation dose and beam quality can substantially affect AGD estimates.

  19. Average glandular dose in paired digital mammography and digital breast tomosynthesis acquisitions in a population based screening program: effects of measuring breast density, air kerma and beam quality.

    PubMed

    Østerås, Bjørn Helge; Skaane, Per; Gullien, Randi; Martinsen, Anne Catrine Trægde

    2018-01-25

    The main purpose was to compare average glandular dose (AGD) for same-compression digital mammography (DM) and digital breast tomosynthesis (DBT) acquisitions in a population based screening program, with and without breast density stratification, as determined by automatically calculated breast density (Quantra™). Secondarily, to compare AGD estimates based on measured breast density, air kerma and half value layer (HVL) to DICOM metadata based estimates. AGD was estimated for 3819 women participating in the screening trial. All received craniocaudal and mediolateral oblique views of each breast with paired DM and DBT acquisitions. Exposure parameters were extracted from DICOM metadata. Air kerma and HVL were measured for all beam qualities used to acquire the mammograms. Volumetric breast density was estimated using Quantra™. AGD was estimated using the Dance model. AGD reported directly from the DICOM metadata was also assessed. Mean AGD was 1.74 and 2.10 mGy for DM and DBT, respectively. Mean DBT/DM AGD ratio was 1.24. For fatty breasts: mean AGD was 1.74 and 2.27 mGy for DM and DBT, respectively. For dense breasts: mean AGD was 1.73 and 1.79 mGy, for DM and DBT, respectively. For breasts of similar thickness, dense breasts had higher AGD for DM and similar AGD for DBT. The DBT/DM dose ratio was substantially lower for dense compared to fatty breasts (1.08 versus 1.33). The average c-factor was 1.16. Using previously published polynomials to estimate glandularity from thickness underestimated the c-factor by 5.9% on average. Mean AGD error between estimates based on measurements (air kerma and HVL) versus DICOM header data was 3.8%, but for one mammography unit as high as 7.9%. Mean error of using the AGD value reported in the DICOM header was 10.7 and 13.3%, respectively. Thus, measurement of breast density, radiation dose and beam quality can substantially affect AGD estimates.

  20. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  1. A general, multivariate definition of causal effects in epidemiology.

    PubMed

    Flanders, W Dana; Klein, Mitchel

    2015-07-01

    Population causal effects are often defined as contrasts of average individual-level counterfactual outcomes, comparing different exposure levels. Common examples include causal risk differences and risk ratios. These and most other examples emphasize effects on disease onset, a reflection of the usual epidemiological interest in disease occurrence. Exposure effects on other health characteristics, such as prevalence or conditional risk of a particular disability, can be important as well, but contrasts involving these other measures may often be dismissed as non-causal. For example, an observed prevalence ratio might often be viewed as an estimator of a causal incidence ratio and hence subject to bias. In this manuscript, we provide and evaluate a definition of causal effects that generalizes those previously available. A key part of the generalization is that contrasts used in the definition can involve multivariate, counterfactual outcomes, rather than only univariate outcomes. An important consequence of our generalization is that, using it, one can properly define causal effects based on a wide variety of additional measures. Examples include causal prevalence ratios and differences and causal conditional risk ratios and differences. We illustrate how these additional measures can be useful, natural, easily estimated, and of public health importance. Furthermore, we discuss conditions for valid estimation of each type of causal effect, and how improper interpretation or inferences for the wrong target population can be sources of bias.

  2. Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel

    PubMed Central

    Akbari, Mohsen; Manesh, Mohsen Riahi

    2014-01-01

    In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which degrades system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary based algorithms, namely, particle swarm optimization (PSO) and genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way that maximizes the SNR and minimizes the bit error rate (BER). The results indicate that the proposed method eliminates the need for channel estimation and can outperform the conventional diversity combining methods. PMID:25045725
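
    For reference, a minimal NumPy sketch of conventional maximal ratio combining with an assumed perfect channel estimate, which is the baseline that the evolutionary weight-search methods are compared against; the channel, modulation, and noise model below are illustrative assumptions.

```python
import numpy as np

def mrc_combine(received, channel, noise_var):
    """Maximal ratio combining across diversity branches.

    received  : (n_branches, n_symbols) complex received samples
    channel   : (n_branches,) complex channel gains (assumed perfectly known)
    noise_var : per-branch noise variance
    Weights proportional to conj(h)/noise_var maximize the output SNR.
    """
    w = np.conj(channel) / noise_var
    return (w[:, None] * received).sum(axis=0) / (np.abs(channel) ** 2 / noise_var).sum()

# Toy example: BPSK symbols over two Rayleigh branches
rng = np.random.default_rng(4)
symbols = rng.choice([-1.0, 1.0], size=1000)
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
noise = (rng.normal(size=(2, 1000)) + 1j * rng.normal(size=(2, 1000))) * 0.3
r = h[:, None] * symbols + noise
combined = mrc_combine(r, h, noise_var=2 * 0.3 ** 2)
print(np.mean(np.sign(combined.real) != symbols))   # bit error rate
```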

  3. Estimates of monthly streamflow characteristics at selected sites in the upper Missouri River basin, Montana, base period water years 1937-86

    USGS Publications Warehouse

    Parrett, Charles; Johnson, D.R.; Hull, J.A.

    1989-01-01

    Estimates of streamflow characteristics (monthly mean flow that is exceeded 90, 80, 50, and 20 percent of the time for all years of record and mean monthly flow) were made and are presented in tabular form for 312 sites in the Missouri River basin in Montana. Short-term gaged records were extended to the base period of water years 1937-86, and were used to estimate monthly streamflow characteristics at 100 sites. Data from 47 gaged sites were used in regression analysis relating the streamflow characteristics to basin characteristics and to active-channel width. The basin-characteristics equations, with standard errors of 35% to 97%, were used to estimate streamflow characteristics at 179 ungaged sites. The channel-width equations, with standard errors of 36% to 103%, were used to estimate characteristics at 138 ungaged sites. Streamflow measurements were correlated with concurrent streamflows at nearby gaged sites to estimate streamflow characteristics at 139 ungaged sites. In a test using 20 pairs of gages, the standard errors ranged from 31% to 111%. At 139 ungaged sites, the estimates from two or more of the methods were weighted and combined in accordance with the variance of individual methods. When estimates from three methods were combined, the standard errors ranged from 24% to 63%. A drainage-area-ratio adjustment method was used to estimate monthly streamflow characteristics at seven ungaged sites. The reliability of the drainage-area-ratio adjustment method was estimated to be about equal to that of the basin-characteristics method. The estimates were checked for reliability. Estimates of monthly streamflow characteristics from gaged records were considered to be most reliable, and estimates at sites with actual flow records from 1937-86 were considered to be completely reliable (zero error). Weighted-average estimates were considered to be the most reliable estimates made at ungaged sites. (USGS)
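
    The drainage-area-ratio adjustment mentioned above amounts to scaling a gaged-site streamflow characteristic by the ratio of drainage areas; a minimal sketch is given below, where the exponent of 1.0 and the example numbers are assumptions rather than values from the report.

```python
def drainage_area_ratio_estimate(q_gaged, area_gaged, area_ungaged, exponent=1.0):
    """Transfer a monthly streamflow characteristic from a gaged to an ungaged
    site using the drainage-area ratio (exponent = 1.0 is an assumption)."""
    return q_gaged * (area_ungaged / area_gaged) ** exponent

# Example: 50 cfs monthly mean at a 120 mi^2 gaged basin, ungaged basin of 80 mi^2
print(drainage_area_ratio_estimate(50.0, 120.0, 80.0))   # about 33.3 cfs
```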

  4. Beyond the fibre: resolved properties of Sloan Digital Sky Survey galaxies

    NASA Astrophysics Data System (ADS)

    Gerssen, J.; Wilman, D. J.; Christensen, L.

    2012-02-01

    We have used the Visible Multi-Object Spectrograph (VIMOS) integral field spectrograph to map the emission-line properties in a sample of 24 star-forming galaxies selected from the Sloan Digital Sky Survey (SDSS) data base. In this paper we present and describe the sample, and explore some basic properties of SDSS galaxies with resolved emission-line fields. We fit the Hα+[N II] emission lines in each spectrum to derive maps of continuum, Hα flux, velocity and velocity dispersion. The Hα, Hβ, [N II] and [O III] emission lines are also fit in summed spectra for circular annuli of increasing radius. A simple mass model is used to estimate dynamical mass within 10 kpc, which compared to estimates of stellar mass shows that between 10 and 100 per cent of total mass is in stars. We present plots showing the radial behaviour of equivalent width (EW)[Hα], u-i colour and emission-line ratios. Although EW[Hα] and u-i colour trace current or recent star formation, the radial profiles are often quite different. Whilst line ratios do vary with annular radius, radial gradients in galaxies with central line ratios typical of active galactic nucleus (AGN) or low-ionization nuclear emission-line regions are mild, with a hard component of ionization required out to large radii. We use our VIMOS maps to quantify the fraction of Hα emission contained within the SDSS fibre, taking the ratio of total Hα flux to that of a simulated SDSS fibre. A comparison of the flux ratios to colour-based SDSS extrapolations shows a 175 per cent dispersion in the ratio of estimated to actual corrections in normal star-forming galaxies, with larger errors in galaxies containing AGN. We find a strong correlation between indicators of nuclear activity: galaxies with AGN-like line ratios and/or radio emission frequently show enhanced dispersion peaks in their cores, requiring non-thermal sources of heating. Altogether, about half of the galaxies in our sample show no evidence for nuclear activity or non-thermal heating. The fully reduced data cubes and the maps with the line-fit results are available as FITS files from the authors. Based on observations made with ESO Telescopes at the Paranal Observatory under programmes 076.B-0408(A) and 078.B-0194(A).

  5. Assessment of TREM2 rs75932628 association with Alzheimer's disease in a population-based sample: the Cache County Study.

    PubMed

    Gonzalez Murcia, Josue D; Schmutz, Cameron; Munger, Caitlin; Perkes, Ammon; Gustin, Aaron; Peterson, Michael; Ebbert, Mark T W; Norton, Maria C; Tschanz, Joann T; Munger, Ronald G; Corcoran, Christopher D; Kauwe, John S K

    2013-12-01

    Recent studies have identified the rs75932628 (R47H) variant in TREM2 as an Alzheimer's disease risk factor with estimated odds ratio ranging from 2.9 to 5.1. The Cache County Memory Study is a large, population-based sample designed for the study of memory and aging. We genotyped R47H in 2974 samples (427 cases and 2540 control subjects) from the Cache County study using a custom TaqMan assay. We observed 7 heterozygous cases and 12 heterozygous control subjects with an odds ratio of 3.5 (95% confidence interval, 1.3-8.8; p = 0.0076). The minor allele frequency and population attributable fraction for R47H were 0.0029 and 0.004, respectively. This study replicates the association between R47H and Alzheimer's disease risk in a large, population-based sample, and estimates the population frequency and attributable risk of this rare variant. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Carbon: Chlorophyll Ratios and Net Primary Productivity of Subarctic Pacific Surface Waters Derived From Autonomous Shipboard Sensors

    NASA Astrophysics Data System (ADS)

    Burt, William J.; Westberry, Toby K.; Behrenfeld, Michael J.; Zeng, Chen; Izett, Robert W.; Tortell, Philippe D.

    2018-02-01

    We present optically derived estimates of phytoplankton carbon (Cphyto) and chlorophyll a concentration (Chl) across a wide range of productivity and hydrographic regimes in the Subarctic Pacific Ocean. Our high-frequency measurements capture changes in Cphyto and Chl across regional gradients in macronutrient and micronutrient limitations and submesoscale hydrographic frontal zones. Throughout the majority of our survey region, carbon to chlorophyll ratios (Cphyto:Chl) ranged between 50 and 100. Lower values (10-20) were constrained to the highly productive coastal upwelling system along Vancouver Island, whereas higher estimated values (>200) were found directly off the southern British Columbia continental shelf. Further offshore, Cphyto:Chl was less variable, ranging from 50 to 80 in high nutrient low Chl waters in June and from 80 to 120 in the Gulf of Alaska in July. Much of the variability in Cphyto:Chl throughout the study region could be explained by mixed-layer light levels (i.e., photoacclimation), with additional variability attributed to nutrient-controlled changes in phytoplankton growth rates in some regions. Elevated Cphyto:Chl ratios resulting from apparent nutrient stress were found in areas of low macronutrient concentrations. In contrast, iron-limited waters exhibited Cphyto:Chl ratios lower than predicted from the photoacclimation model. Applying the carbon-based production model, we derived Cphyto and Chl-based estimates of net primary productivity, which showed good coherence with independent 14C uptake measurements. Our results highlight the utility of ship-board optical data to examine phytoplankton physiological ecology and productivity in surface marine waters.

  7. Examination of P/S Spectral Ratios for Small Explosions at Local Distances and Interpretation of Moment Tensors Estimated from Near-Source Data

    DTIC Science & Technology

    2010-09-01

    EXAMINATION OF P/S SPECTRAL RATIOS FOR SMALL EXPLOSIONS AT LOCAL DISTANCES AND INTERPRETATION OF MOMENT TENSORS ESTIMATED FROM NEAR-SOURCE DATA...and particle motion. We then estimated smoothed spectra for the P- and S-waves and formed P/S spectral ratios. The signal quality and difficulty in...4. TITLE AND SUBTITLE Examination of P/S Spectral Ratios for Small Explosions at Local Distances and Interpretation of Moment Tensors Estimated from

  8. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has superior performance: faster convergence and an enhanced signal-to-noise ratio.
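
    A schematic sketch of the ASA selection-and-average step described above: spatially varying posterior parameter values are kept only where the ensemble spread is small (a proxy for a well-constrained estimate) and then averaged into one global value. The spread threshold, array shapes, and toy data are assumptions, not the published configuration.

```python
import numpy as np

def adaptive_spatial_average(posterior_param, ensemble_spread, spread_quantile=0.5):
    """Average spatially varying posterior parameter estimates over the 'good'
    grid points, defined here as those whose ensemble spread falls below a
    quantile threshold (schematic version of the ASA idea).

    posterior_param : (n_ens, n_grid) posterior parameter values
    ensemble_spread : (n_grid,) ensemble spread at each grid point
    """
    threshold = np.quantile(ensemble_spread, spread_quantile)
    good = ensemble_spread <= threshold
    return posterior_param[:, good].mean()        # single global posterior value

# Toy example: 20-member ensemble on 100 grid points
rng = np.random.default_rng(5)
param = 0.8 + rng.normal(scale=0.1, size=(20, 100))
spread = param.std(axis=0)
print(adaptive_spatial_average(param, spread))
```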

  9. Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models

    PubMed Central

    Hillis, Stephen L.

    2015-01-01

    A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405

  10. Cost-effectiveness analysis of thermotherapy versus pentavalent antimonials for the treatment of cutaneous leishmaniasis.

    PubMed

    Cardona-Arias, Jaiberth Antonio; López-Carvajal, Liliana; Tamayo Plata, Mery Patricia; Vélez, Iván Darío

    2017-05-01

    The treatment of cutaneous leishmaniasis is toxic, has contraindications, and has a high cost. The objective of this study was to estimate the cost-effectiveness of thermotherapy versus pentavalent antimonials for the treatment of cutaneous leishmaniasis. Effectiveness was measured as the proportion of patients healed and safety as the occurrence of adverse effects; these parameters were estimated from a controlled clinical trial and a meta-analysis. A standard costing was conducted. Average and incremental cost-effectiveness ratios were estimated. The uncertainty regarding effectiveness, safety, and costs was determined through sensitivity analyses. The total costs were $66,807 with Glucantime and $14,079 with thermotherapy. The therapeutic effectiveness rates were 64.2% for thermotherapy and 85.1% for Glucantime. The average cost-effectiveness ratios ranged between $721 and $1275 for Glucantime and between $187 and $390 for thermotherapy. Based on the meta-analysis, thermotherapy may be a dominant strategy. The excellent cost-effectiveness ratio of thermotherapy shows the relevance of its inclusion in treatment guidelines. © 2017 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

  11. Estimation of the IC to CG Ratio Using JEM-GLIMS and Ground-based Lightning Network Data

    NASA Astrophysics Data System (ADS)

    Bandholnopparat, K.; Sato, M.; Takahashi, Y.; Adachi, T.; Ushio, T.

    2017-12-01

    The ratio between intracloud (IC) discharge and cloud-to-ground (CG) discharge, denoted by Z, is an important parameter for studies of the climatological differences of thunderstorm structures and for the quantitative evaluation of lightning contributions to the global electric circuit. However, the latitudinal, regional, and seasonal dependences of the Z-value are not fully clarified. The purposes of this study are (i) to develop new methods to identify IC and CG discharges using optical data obtained by the Global Lightning and Sprite Measurements on Japanese Experiment Module (JEM-GLIMS) from space and ground-based lightning data, and (ii) to estimate the Z-value and its latitudinal, regional, and seasonal dependences. As a first step, we compared the JEM-GLIMS data to the ground-based lightning data obtained by JLDN, NLDN, WWLLN, and GEON in order to distinguish the lightning discharge type detected by JEM-GLIMS. As a next step, we calculated intensity ratios between the blue and red PH channels, that is, PH2(337 nm)/PH3(762 nm), PH5(316 nm)/PH3, PH6(392 nm)/PH3, PH2/PH4(599-900 nm), PH5/PH4, and PH6/PH4 for each lightning event. From these analyses, it is found that 447 and 454 of 8355 lightning events were identified to be CG and IC discharges, respectively. It is also found that the PH intensity ratio of IC discharges is clearly higher than that of CG discharges. In addition, the difference of the PH2/PH3, PH2/PH4, and PH6/PH4 ratios between IC and CG cases is relatively large, which means these three ratios are useful proxies to classify the discharge types of the other 7454 lightning events. Finally, the estimated Z-value varies from 0.18 to 0.84 from the equator to higher latitudes. The decrease of the Z-value from the equator to higher latitudes is confirmed in both the northern and southern hemispheres. Although this latitudinal dependence of the Z-value is similar to previous studies, e.g., Boccippio et al. (2001), the estimated absolute Z-value is smaller than that in previous studies. The reason for the smaller absolute Z-value may be that JEM-GLIMS used a high threshold for event triggering and missed many lightning events having lower optical energies. At the presentation, we will show the regional and seasonal dependences of the Z-value in detail.

  12. Inlet noise suppressor design method based upon the distribution of acoustic power with mode cutoff ratio

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1976-01-01

    A liner design for noise suppressors with outer wall treatment such as in an engine inlet is presented which potentially circumvents the problems of resolution in modal measurement. The method is based on the fact that the modal optimum impedance and the maximum possible sound power attenuation at this optimum can be expressed as functions of cutoff ratio alone. Modes with similar cutoff ratios propagate similarly in the duct and in addition propagate similarly to the far field. Thus there is no need to determine the acoustic power carried by these modes individually, and they can be grouped together as one entity. With the optimum impedance and maximum attenuation specified as functions of cutoff ratio, the off-optimum liner performance can be estimated using an approximate attenuation equation.

  13. On estimation of time-dependent attributable fraction from population-based case-control studies.

    PubMed

    Zhao, Wei; Chen, Ying Qing; Hsu, Li

    2017-09-01

    Population attributable fraction (PAF) is widely used to quantify the disease burden associated with a modifiable exposure in a population. It has been extended to a time-varying measure that provides additional information on when and how the exposure's impact varies over time for cohort studies. However, there is no estimation procedure for PAF using data that are collected from population-based case-control studies, which, because of time and cost efficiency, are commonly used for studying genetic and environmental risk factors of disease incidences. In this article, we show that time-varying PAF is identifiable from a case-control study and develop a novel estimator of PAF. Our estimator combines odds ratio estimates from logistic regression models and density estimates of the risk factor distribution conditional on failure times in cases from a kernel smoother. The proposed estimator is shown to be consistent and asymptotically normal with asymptotic variance that can be estimated empirically from the data. Simulation studies demonstrate that the proposed estimator performs well in finite sample sizes. Finally, the method is illustrated by a population-based case-control study of colorectal cancer. © 2017, The International Biometric Society.
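
    The time-varying estimator itself is beyond a short sketch, but the familiar time-fixed case-control building block (Miettinen's formula, using the exposure prevalence among cases and the odds ratio as a rare-disease approximation of the risk ratio) can be illustrated as follows; the example numbers are made up.

```python
def paf_miettinen(exposure_prev_cases, odds_ratio):
    """Time-fixed population attributable fraction from case-control data,
    using Miettinen's formula PAF = p_c * (OR - 1) / OR, where p_c is the
    exposure prevalence among cases and OR approximates the risk ratio."""
    return exposure_prev_cases * (odds_ratio - 1.0) / odds_ratio

# Example: 40% of cases exposed, odds ratio 2.5
print(paf_miettinen(0.40, 2.5))   # 0.24
```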

  14. Mapping apparent stress and energy radiation over fault zones of major earthquakes

    USGS Publications Warehouse

    McGarr, A.; Fletcher, Joe B.

    2002-01-01

    Using published slip models for five major earthquakes, 1979 Imperial Valley, 1989 Loma Prieta, 1992 Landers, 1994 Northridge, and 1995 Kobe, we produce maps of apparent stress and radiated seismic energy over their fault surfaces. The slip models, obtained by inverting seismic and geodetic data, entail the division of the fault surfaces into many subfaults for which the time histories of seismic slip are determined. To estimate the seismic energy radiated by each subfault, we measure the near-fault seismic-energy flux from the time-dependent slip there and then multiply by a function of rupture velocity to obtain the corresponding energy that propagates into the far-field. This function, the ratio of far-field to near-fault energy, is typically less than 1/3, inasmuch as most of the near-fault energy remains near the fault and is associated with permanent earthquake deformation. Adding the energy contributions from all of the subfaults yields an estimate of the total seismic energy, which can be compared with independent energy estimates based on seismic-energy flux measured in the far-field, often at teleseismic distances. Estimates of seismic energy based on slip models are robust, in that different models, for a given earthquake, yield energy estimates that are in close agreement. Moreover, the slip-model estimates of energy are generally in good accord with independent estimates by others, based on regional or teleseismic data. Apparent stress is estimated for each subfault by dividing the corresponding seismic moment into the radiated energy. Distributions of apparent stress over an earthquake fault zone show considerable heterogeneity, with peak values that are typically about double the whole-earthquake values (based on the ratio of seismic energy to seismic moment). The range of apparent stresses estimated for subfaults of the events studied here is similar to the range of apparent stresses for earthquakes in continental settings, with peak values of about 8 MPa in each case. For earthquakes in compressional tectonic settings, peak apparent stresses at a given depth are substantially greater than corresponding peak values from events in extensional settings; this suggests that crustal strength, inferred from laboratory measurements, may be a limiting factor. Lower bounds on shear stresses inferred from the apparent stress distribution of the 1995 Kobe earthquake are consistent with tectonic-stress estimates reported by Spudich et al. (1998), based partly on slip-vector rake changes.

  15. Sex ratios in fetuses and liveborn infants with autosomal aneuploidy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heuther, C.A.; Martin, R.L.M.; Stoppelman, S.M.

    1996-06-14

    Ten data sources were used substantially to increase the available data for estimating fetal and livebirth sex ratios for Patau (trisomy 13), Edwards (trisomy 18), and Down (trisomy 21) syndromes and controls. The fetal sex ratio estimate was 0.88 (N = 584) for trisomy 13, 0.90 (N = 1702) for trisomy 18, and 1.16 (N = 3154) for trisomy 21. All were significantly different from prenatal controls (1.07). The estimated ratios in prenatal controls were 1.28 (N = 1409) for CVSs and 1.06 (N = 49427) for amniocenteses, indicating a clear differential selection against males, mostly during the first half of fetal development. By contrast, there were no sex ratio differences for any of the trisomies when comparing gestational ages <16 and >16 weeks. The livebirth sex ratio estimate was 0.90 (N = 293) for trisomy 13, 0.63 (N = 497) for trisomy 18, and 1.15 (N = 6424) for trisomy 21, the latter two being statistically different than controls (1.05) (N = 3660707). These ratios for trisomies 13 and 18 were also statistically different than the ratio for trisomy 21. Only in trisomy 18 did the sex ratios in fetuses and livebirths differ, indicating a prenatal selection against males >16 weeks. No effects of maternal age or race were found on these estimates for any of the fetal or livebirth trisomies. Sex ratios for translocations and mosaics were also estimated for these aneuploids. Compared to previous estimates, these results are less extreme, most likely because of larger sample sizes and less sample bias. They support the hypothesis that these trisomy sex ratios are skewed at conception, or become so during embryonic development through differential intrauterine selection. The estimate for Down syndrome livebirths is also consistent with the hypothesis that its higher sex ratio is associated with paternal nondisjunction. 36 refs., 5 tabs.

  16. Understanding the Link between Poverty and Food Insecurity among Children: Does the Definition of Poverty Matter?

    PubMed

    Wight, Vanessa; Kaushal, Neeraj; Waldfogel, Jane; Garfinkel, Irv

    2014-01-02

    This paper examines the association between poverty and food insecurity among children, using two different definitions of poverty-the official poverty measure (OPM) and the new supplemental poverty measure (SPM) of the Census Bureau, which is based on a more inclusive definition of family resources and needs. Our analysis is based on data from the 2001-11 Current Population Survey and shows that food insecurity and very low food security among children decline as income-to-needs ratio increases. The point estimates show that the associations are stronger as measured by the new supplemental measure of income-to-needs ratio than when estimated through the official measure. Statistical tests reject the hypothesis that poor households' odds of experiencing low food security are the same whether the SPM or OPM measure is used; but the tests do not reject the hypothesis when very low food security is the outcome.

  17. Understanding the Link between Poverty and Food Insecurity among Children: Does the Definition of Poverty Matter?

    PubMed Central

    Wight, Vanessa; Kaushal, Neeraj; Waldfogel, Jane; Garfinkel, Irv

    2014-01-01

    This paper examines the association between poverty and food insecurity among children, using two different definitions of poverty—the official poverty measure (OPM) and the new supplemental poverty measure (SPM) of the Census Bureau, which is based on a more inclusive definition of family resources and needs. Our analysis is based on data from the 2001–11 Current Population Survey and shows that food insecurity and very low food security among children decline as income-to-needs ratio increases. The point estimates show that the associations are stronger as measured by the new supplemental measure of income-to-needs ratio than when estimated through the official measure. Statistical tests reject the hypothesis that poor households’ odds of experiencing low food security are the same whether the SPM or OPM measure is used; but the tests do not reject the hypothesis when very low food security is the outcome. PMID:25045244

  18. Estimation of different source contributions to sediment organic matter in an agricultural-forested watershed using end member mixing analyses based on stable isotope ratios and fluorescence spectroscopy.

    PubMed

    Derrien, Morgane; Kim, Min-Seob; Ock, Giyoung; Hong, Seongjin; Cho, Jinwoo; Shin, Kyung-Hoon; Hur, Jin

    2018-03-15

    The two popular source tracing tools of stable isotope ratios (δ13C and δ15N) and fluorescence spectroscopy were used to estimate the relative source contributions to sediment organic matter (SeOM) at five different river sites in an agricultural-forested watershed (Soyang Lake watershed), and their capabilities for the source assignment were compared. Bulk sediments were used for the stable isotopes, while alkaline extractable organic matter (AEOM) from sediments was used to obtain fluorescence indices for SeOM. Several source discrimination indices were compiled for a range of the SeOM sources distributed in the catchments of the watershed, which included soils, forest leaves, crop (C3 and C4) and riparian plants, periphyton, and organic fertilizers. The relative source contributions to the river sediment samples were estimated via end member mixing analysis (EMMA) based on several selected discrimination indices. The EMMA based on the isotopes demonstrated that all sediments were characterized by a medium to high contribution of periphyton, ranging from ~30% to 70%, except for one site heavily affected by forest and agricultural fields, which showed relatively high contributions of terrestrial materials. The EMMA based on fluorescence parameters, however, did not show similar results, yielding low contributions from forest leaves and periphyton. The characteristics of the studied watershed were more consistent with the source contributions determined by the isotope ratios. The discrepancy in the EMMA capability for source assignments between the two analytical tools can be explained by the limited analytical window of fluorescence spectroscopy, which cannot capture non-fluorescent dissolved organic matter, and by the inability of AEOM to represent the original bulk particulate organic matter (POM). Copyright © 2017 Elsevier B.V. All rights reserved.
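
    A minimal sketch of a two-tracer, three-end-member mixing analysis of the kind described above: the fractions are obtained by solving the mass-balance equations for the two isotope ratios plus the constraint that fractions sum to one. The end-member and sediment signatures below are made-up placeholders, not data from the study.

```python
import numpy as np

# Hypothetical end-member signatures (columns: periphyton, soil, forest leaves);
# rows: d13C, d15N, and the mass-balance constraint (fractions sum to 1).
A = np.array([
    [-20.0, -26.0, -29.0],   # d13C of each end member (permil)
    [  6.0,   4.0,  -1.0],   # d15N of each end member (permil)
    [  1.0,   1.0,   1.0],   # sum of fractions = 1
])
# Hypothetical mixture (sediment) signature
b = np.array([-24.0, 3.5, 1.0])

fractions = np.linalg.solve(A, b)   # relative source contributions
print(dict(zip(["periphyton", "soil", "forest_leaves"], fractions.round(3))))
```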

  19. Microphysical properties and ice particle morphology of cirrus clouds inferred from combined CALIOP-IIR measurements

    NASA Astrophysics Data System (ADS)

    Saito, M.; Iwabuchi, H.; Yang, P.; Tang, G.; King, M. D.; Sekiguchi, M.

    2016-12-01

    Cirrus clouds cover about 25% of the globe. Knowledge of the optical and microphysical properties of these clouds [particularly, cloud optical thickness (COT) and cloud effective radius (CER)] is essential to radiative forcing assessment. Previous studies of those properties using satellite remote sensing techniques based on observations by passive and active sensors gave inconsistent retrievals. In particular, COTs from the Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) using the unconstrained method are affected by variable particle morphology, especially the fraction of horizontally oriented plate particles (HPLT), because the method assumes a constant lidar ratio, whereas the lidar ratio takes different values for different ice particle shapes. More realistic ice particle morphology improves estimates of the optical and microphysical properties. In this study, we develop an optimal estimation-based algorithm to infer cirrus COT and CER in addition to morphological parameters (e.g., the fraction of HPLT) using the observations made by CALIOP and the Infrared Imaging Radiometer (IIR) on the CALIPSO platform. The assumed ice particle model is a mixture of a few habits with a variable HPLT fraction. Ice particle single-scattering properties are computed using state-of-the-art light-scattering computational capabilities. Rigorous estimation of uncertainties associated with surface properties, atmospheric gases and cloud heterogeneity is performed. The results based on the present method show that COTs are quite consistent with the MODIS and CALIOP counterparts, and CERs essentially agree with the IIR operational retrievals. The lidar ratio is calculated from the bulk optical properties based on the inferred parameters. The presentation will focus on latitudinal variations of particle morphology and the lidar ratio on a global scale.
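
    A generic sketch of the Gauss-Newton update used in optimal-estimation retrievals of this kind, assuming a toy linear forward model in place of the real CALIOP/IIR radiative-transfer model; all matrices and state variables here are placeholders, not the study's configuration.

```python
import numpy as np

# Toy linear forward model y = K_true @ x standing in for the radiative-transfer model.
K_true = np.array([[1.0, 0.5], [0.2, 1.5], [0.8, 0.3]])
x_true = np.array([2.0, 1.0])            # e.g., [COT, CER-like parameter]
y_obs = K_true @ x_true + 0.01 * np.random.randn(3)

x_a = np.array([1.0, 1.0])               # a priori state
S_a = np.diag([1.0, 1.0])                # a priori covariance
S_e = np.diag([1e-2, 1e-2, 1e-2])        # measurement-error covariance

def forward(x):
    return K_true @ x                    # forward model F(x)

def jacobian(x):
    return K_true                        # Jacobian K = dF/dx (constant for a linear model)

x = x_a.copy()
for _ in range(10):                      # Gauss-Newton iterations
    K = jacobian(x)
    Sa_inv, Se_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
    S_hat = np.linalg.inv(Sa_inv + K.T @ Se_inv @ K)      # posterior covariance
    x = x + S_hat @ (K.T @ Se_inv @ (y_obs - forward(x)) - Sa_inv @ (x - x_a))

print("retrieved state:", x, "posterior std:", np.sqrt(np.diag(S_hat)))
```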

  20. First Lattice Calculation of the QED Corrections to Leptonic Decay Rates

    NASA Astrophysics Data System (ADS)

    Giusti, D.; Lubicz, V.; Tarantino, C.; Martinelli, G.; Sachrajda, C. T.; Sanfilippo, F.; Simula, S.; Tantalo, N.

    2018-02-01

    The leading-order electromagnetic and strong isospin-breaking corrections to the ratio of Kμ2 and πμ2 decay rates are evaluated for the first time on the lattice, following a recently proposed method. The lattice results are obtained using the gauge ensembles produced by the European Twisted Mass Collaboration with Nf = 2+1+1 dynamical quarks. Systematic effects are evaluated and the impact of the quenched QED approximation is estimated. Our result for the correction to the tree-level Kμ2/πμ2 decay ratio is -1.22(16)%, to be compared to the estimate of -1.12(21)% based on chiral perturbation theory and adopted by the Particle Data Group.

  1. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
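
    A minimal sketch of the method-of-moments (Matheron) variogram estimator discussed above, applied to a synthetic, skewed "throughfall-like" sample; the plot size, sample size, and distribution are illustrative assumptions, not the study's simulated fields.

```python
import numpy as np

def matheron_variogram(coords, values, bin_edges):
    """Classical (method-of-moments) variogram estimator:
    gamma(h) = 1/(2*N(h)) * sum over pairs with distance in bin h of (z_i - z_j)^2."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # each pair counted once
    dist, sqdiff = d[iu], sq[iu]
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dist >= lo) & (dist < hi)
        gamma.append(0.5 * sqdiff[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Hypothetical throughfall sample: 150 random locations on a 50 m x 50 m plot
rng = np.random.default_rng(0)
xy = rng.uniform(0, 50, size=(150, 2))
z = rng.gamma(shape=2.0, scale=5.0, size=150)       # skewed, non-Gaussian values
print(matheron_variogram(xy, z, np.arange(0, 30, 5)))
```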

  2. The VAST Survey - IV. A wide brown dwarf companion to the A3V star ζ Delphini

    NASA Astrophysics Data System (ADS)

    De Rosa, R. J.; Patience, J.; Ward-Duong, K.; Vigan, A.; Marois, C.; Song, I.; Macintosh, B.; Graham, J. R.; Doyon, R.; Bessell, M. S.; Lai, O.; McCarthy, D. W.; Kulesa, C.

    2014-12-01

    We report the discovery of a wide comoving substellar companion to the nearby (D = 67.5 ± 1.1 pc) A3V star ζ Delphini based on imaging and follow-up spectroscopic observations obtained during the course of our Volume-limited A-Star (VAST) multiplicity survey. ζ Del was observed over a five-year baseline with adaptive optics, revealing the presence of a previously unresolved companion with a proper motion consistent with that of the A-type primary. The age of the ζ Del system was estimated as 525 ± 125 Myr based on the position of the primary on the colour-magnitude and temperature-luminosity diagrams. Using intermediate-resolution near-infrared spectroscopy, the spectrum of ζ Del B is shown to be consistent with a mid-L dwarf (L5 ± 2), at a temperature of 1650 ± 200 K. Combining the measured near-infrared magnitude of ζ Del B with the estimated temperature leads to a model-dependent mass estimate of 50 ± 15 MJup, corresponding to a mass ratio of q = 0.019 ± 0.006. At a projected separation of 910 ± 14 au, ζ Del B is among the most widely separated and extreme-mass ratio substellar companions to a main-sequence star resolved to date, providing a rare empirical constraint of the formation of low-mass ratio companions at extremely wide separations.

  3. Estimation of Theaflavins (TF) and Thearubigins (TR) Ratio in Black Tea Liquor Using Electronic Vision System

    NASA Astrophysics Data System (ADS)

    Akuli, Amitava; Pal, Abhra; Ghosh, Arunangshu; Bhattacharyya, Nabarun; Bandhopadhyya, Rajib; Tamuly, Pradip; Gogoi, Nagen

    2011-09-01

    Quality of black tea is generally assessed using organoleptic tests by professional tea tasters. They determine the quality of black tea based on its appearance (in dry condition and during liquor formation), aroma and taste. Variation in the above parameters is actually contributed by a number of chemical compounds like Theaflavins (TF), Thearubigins (TR), Caffeine, Linalool, Geraniol etc. Among these, TF and TR are the most important chemical compounds, which actually contribute to the formation of taste, colour and brightness in tea liquor. Estimation of TF and TR in black tea is generally done using a spectrophotometer. However, this analysis requires rigorous and time-consuming sample preparation, and operation of the costly spectrophotometer requires expert manpower. To overcome these problems, an Electronic Vision System based on digital image processing techniques has been developed. The system is faster, low cost, repeatable and can estimate the TF and TR ratio of black tea liquor accurately. The data analysis is done using Principal Component Analysis (PCA), Multiple Linear Regression (MLR) and Multiple Discriminant Analysis (MDA). A correlation has been established between the colour of tea liquor images and the TF and TR ratio. This paper describes the newly developed E-Vision system, experimental methods, data analysis algorithms and finally, the performance of the E-Vision System as compared to the results of a traditional spectrophotometer.
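
    An illustrative sketch of the MLR step: regressing a chemistry target on mean colour features of liquor images with scikit-learn. The feature set (mean R, G, B) and all numbers are hypothetical placeholders, not the system's actual features or calibration data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: mean R, G, B values of tea-liquor images
# paired with spectrophotometric TF/TR ratios (placeholders, not study data).
X = np.array([[120, 60, 30], [140, 70, 35], [100, 55, 28],
              [160, 80, 40], [110, 58, 30], [150, 75, 38]], dtype=float)
y = np.array([0.08, 0.10, 0.07, 0.12, 0.075, 0.11])   # TF/TR ratio

mlr = LinearRegression().fit(X, y)                     # multiple linear regression
print("R^2 on training data:", mlr.score(X, y))
print("predicted TF/TR for a new image:", mlr.predict([[130, 65, 33]])[0])
```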

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slater, Paul B.

    Paralleling our recent computationally intensive (quasi-Monte Carlo) work for the case N=4 (e-print quant-ph/0308037), we undertake the task for N=6 of computing to high numerical accuracy, the formulas of Sommers and Zyczkowski (e-print quant-ph/0304041) for the (N^2-1)-dimensional volume and (N^2-2)-dimensional hyperarea of the (separable and nonseparable) NxN density matrices, based on the Bures (minimal monotone) metric--and also their analogous formulas (e-print quant-ph/0302197) for the (nonmonotone) flat Hilbert-Schmidt metric. With the same seven 10^9 well-distributed ('low-discrepancy') sample points, we estimate the unknown volumes and hyperareas based on five additional (monotone) metrics of interest, including the Kubo-Mori and Wigner-Yanase. Further, we estimate all of these seven volume and seven hyperarea (unknown) quantities when restricted to the separable density matrices. The ratios of separable volumes (hyperareas) to separable plus nonseparable volumes (hyperareas) yield estimates of the separability probabilities of generically rank-6 (rank-5) density matrices. The (rank-6) separability probabilities obtained based on the 35-dimensional volumes appear to be--independently of the metric (each of the seven inducing Haar measure) employed--twice as large as those (rank-5 ones) based on the 34-dimensional hyperareas. (An additional estimate--33.9982--of the ratio of the rank-6 Hilbert-Schmidt separability probability to the rank-4 one is quite clearly close to integral too.) The doubling relationship also appears to hold for the N=4 case for the Hilbert-Schmidt metric, but not the others. We fit simple exact formulas to our estimates of the Hilbert-Schmidt separable volumes and hyperareas in both the N=4 and N=6 cases.

  5. Optimizing fish sampling for fish - mercury bioaccumulation factors

    USGS Publications Warehouse

    Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste A.; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.

    2015-01-01

    Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.
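
    A minimal sketch of the length-standardization and BAF computation described above, assuming a simple linear Hg-versus-length fit evaluated at a hypothetical standard length; all concentrations and lengths are placeholders, not data from the study.

```python
import numpy as np

# Hypothetical site data (placeholders, not study values)
length_mm = np.array([250., 280., 310., 330., 360., 400., 420., 450.])
hg_fish = np.array([0.12, 0.15, 0.18, 0.20, 0.24, 0.30, 0.33, 0.38])  # mg/kg
hg_water = 1.5e-6                                                     # mg/L

# Length-standardize Hg to a common length using a simple linear fit
slope, intercept = np.polyfit(length_mm, hg_fish, 1)
standard_length = 350.0
hg_standardized = intercept + slope * standard_length

baf = hg_standardized / hg_water          # bioaccumulation factor
cv = hg_fish.std(ddof=1) / hg_fish.mean() # coefficient of variation of raw Hg
print(f"standardized Hg = {hg_standardized:.3f} mg/kg, BAF = {baf:.3g}, CV = {cv:.2f}")
```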

  6. Particle swarm optimization algorithm based parameters estimation and control of epileptiform spikes in a neural mass model

    NASA Astrophysics Data System (ADS)

    Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan

    2016-07-01

    This paper proposes an epilepsy detection and closed-loop control strategy based on the Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress epileptic spikes in neural mass models, where the epileptiform spikes are recognized as biomarkers of transitions from the normal (interictal) activity to the seizure (ictal) activity. In addition, the PSO algorithm accurately estimates the time evolution of key model parameters and detects all the epileptic spikes in practice. The estimation of unmeasurable parameters is improved significantly compared with the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by adopting a proportional-integral controller. Numerical simulations are carried out to illustrate the effectiveness of the proposed method as well as its potential value for model-based early seizure detection and closed-loop control treatment design.
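
    A bare-bones PSO sketch of the kind of optimizer described above, minimizing a placeholder loss that stands in for the mismatch between simulated and observed model output; the bounds, hyperparameters and "true" parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pso_minimize(loss, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Bare-bones particle swarm optimization over box-constrained parameters."""
    rng = np.random.default_rng(1)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([loss(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Placeholder loss: distance of two model parameters from assumed "true" values
true_params = np.array([3.25, 22.0])
loss = lambda p: float(((p - true_params) ** 2).sum())
best, best_val = pso_minimize(loss, bounds=np.array([[0.0, 10.0], [0.0, 50.0]]))
print("estimated parameters:", best.round(3), "loss:", round(best_val, 6))
```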

  7. Estimating the confidence bounds for projected ozone design values under different emissions control options

    EPA Science Inventory

    In current regulatory applications, a regional air quality model is applied for a base year and a future year with reduced emissions using the same meteorological conditions. The base year design value is multiplied by the ratio of the average of the top 10 ozone concentrations fo...

  8. The insight into the dark side - I. The pitfalls of the dark halo parameters estimation

    NASA Astrophysics Data System (ADS)

    Saburova, Anna S.; Kasparova, Anastasia V.; Katkov, Ivan Yu.

    2016-12-01

    We examined the reliability of estimates of pseudo-isothermal, Burkert and NFW dark halo parameters for methods based on mass-modelling of rotation curves. To do so, we constructed χ² maps over a grid of dark matter halo parameters for a sample of 14 disc galaxies with high-quality rotation curves from THINGS. We considered two variants of models in which: (a) the mass-to-light ratios of disc and bulge were taken as free parameters, (b) the mass-to-light ratios were fixed in a narrow range according to models of stellar populations. To reproduce possible observational features of real galaxies, we ran tests showing that the parameters of the three halo types change drastically when kinematic data are lacking in the central or peripheral areas and for different spatial resolutions. We showed that, due to the degeneracy between the central densities and the radial scales of the dark haloes, there are considerable uncertainties in their concentration estimates. For this reason, it is also impossible to draw any firm conclusion about the universality of the dark halo column density based on mass-modelling of even a high-quality rotation curve. The problem is not solved by fixing the density of baryonic matter. In contrast, the estimates of dark halo mass within the optical radius are much more reliable. We demonstrated that one can successfully evaluate the halo mass using the pure best-fitting method without any restrictions on the mass-to-light ratios.
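
    A simplified sketch of a χ² map over pseudo-isothermal halo parameters against a synthetic rotation curve, using the standard pseudo-isothermal rotation law; the baryonic (disc and bulge) contributions are omitted for brevity and all data values are placeholders, not THINGS measurements.

```python
import numpy as np

G = 4.30e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def v_halo_piso(r_kpc, rho0, rc):
    """Pseudo-isothermal halo rotation velocity:
    V^2(r) = 4*pi*G*rho0*rc^2 * [1 - (rc/r)*arctan(r/rc)]."""
    return np.sqrt(4 * np.pi * G * rho0 * rc**2 * (1 - (rc / r_kpc) * np.arctan(r_kpc / rc)))

# Synthetic "observed" rotation curve (placeholders, not real data)
r = np.linspace(1, 20, 20)
v_obs = v_halo_piso(r, rho0=5e7, rc=2.0) + np.random.normal(0, 3, r.size)
v_err = np.full_like(r, 3.0)

# Chi-square map over a grid of halo parameters
rho0_grid = np.linspace(1e7, 1e8, 60)
rc_grid = np.linspace(0.5, 6.0, 60)
chi2 = np.array([[np.sum(((v_obs - v_halo_piso(r, p, c)) / v_err) ** 2)
                  for c in rc_grid] for p in rho0_grid])

i, j = np.unravel_index(chi2.argmin(), chi2.shape)
print(f"best fit: rho0 = {rho0_grid[i]:.2e} Msun/kpc^3, Rc = {rc_grid[j]:.2f} kpc")
```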

  9. National, regional, and global sex ratios of infant, child, and under-5 mortality and identification of countries with outlying ratios: a systematic assessment.

    PubMed

    Alkema, Leontine; Chao, Fengqing; You, Danzhen; Pedersen, Jon; Sawyer, Cheryl C

    2014-09-01

    Under natural circumstances, the sex ratio of male to female mortality up to the age of 5 years is greater than one, but sex discrimination can change sex ratios. The estimation of mortality by sex and the identification of countries with outlying levels are challenging because of issues with data availability and quality, and because sex ratios might vary naturally based on differences in mortality levels and associated cause of death distributions. For this systematic analysis, we estimated country-specific mortality sex ratios for infants, children aged 1-4 years, and children under the age of 5 years (under 5s) for all countries from 1990 (or the earliest year of data collection) to 2012 using a Bayesian hierarchical time series model, accounting for various data quality issues and assessing the uncertainty in sex ratios. We simultaneously estimated the global relation between sex ratios and mortality levels and constructed estimates of expected and excess female mortality rates to identify countries with outlying sex ratios. Global sex ratios in 2012 were 1·13 (90% uncertainty interval 1·12-1·15) for infants, 0·95 (0·93-0·97) for children aged 1-4 years, and 1·08 (1·07-1·09) for under 5s, an increase since 1990 of 0·01 (-0·01 to 0·02) for infants, 0·04 (0·02 to 0·06) for children aged 1-4 years, and 0·02 (0·01 to 0·04) for under 5s. Levels and trends varied across regions and countries. Sex ratios were lowest in southern Asia for 1990 and 2012 for all age groups. The highest sex ratios were seen in developed regions and the Caucasus and central Asia region. Decreasing mortality was associated with increasing sex ratios, except at very low infant mortality, where sex ratios decreased with total mortality. For 2012, we identified 15 countries with outlying under-5 sex ratios, of which ten countries had female mortality higher than expected (Afghanistan, Bahrain, Bangladesh, China, Egypt, India, Iran, Jordan, Nepal, and Pakistan). Although excess female mortality has decreased since 1990 for the vast majority of countries with outlying sex ratios, the ratios of estimated to expected female mortality did not change substantially for most countries, and worsened for India. Important differences exist between boys and girls with respect to survival up to the age of 5 years. Survival chances tend to improve more rapidly for girls compared with boys as total mortality decreases, with a reversal of this trend at very low infant mortality. For many countries, sex ratios follow this pattern but important exceptions exist. An explanation needs to be sought for selected countries with outlying sex ratios and action should be undertaken if sex discrimination is present. The National University of Singapore and the United Nations Children's Fund (UNICEF). Copyright © 2014 Alkema et al. Open Access article distributed under the terms of CC BY-NC-SA.

  10. A review of global terrestrial evapotranspiration: Observation, modeling, climatology, and climatic variability

    NASA Astrophysics Data System (ADS)

    Wang, Kaicun; Dickinson, Robert E.

    2012-06-01

    This review surveys the basic theories, observational methods, satellite algorithms, and land surface models for terrestrial evapotranspiration, E (or λE, i.e., latent heat flux), including a long-term variability and trends perspective. The basic theories used to estimate E are the Monin-Obukhov similarity theory (MOST), the Bowen ratio method, and the Penman-Monteith equation. The latter two theoretical expressions combine MOST with surface energy balance. Estimates of E can differ substantially between these three approaches because of their use of different input data. Surface and satellite-based measurement systems can provide accurate estimates of diurnal, daily, and annual variability of E. But their estimation of longer time variability is largely not established. A reasonable estimate of E as a global mean can be obtained from a surface water budget method, but its regional distribution is still rather uncertain. Current land surface models provide widely different ratios of the transpiration by vegetation to total E. This source of uncertainty therefore limits the capability of models to provide the sensitivities of E to precipitation deficits and land cover change.
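
    For reference, the two combination expressions mentioned above can be written in their standard textbook forms (not reproduced from the review), where Rn is net radiation, G the ground heat flux, H the sensible heat flux, Δ the slope of the saturation vapour pressure curve, γ the psychrometric constant, ρa and cp the density and specific heat of air, es − ea the vapour pressure deficit, and ra and rs the aerodynamic and surface resistances:

```latex
% Bowen-ratio energy-balance estimate of latent heat flux
\lambda E = \frac{R_n - G}{1 + \beta}, \qquad \beta = \frac{H}{\lambda E}

% Penman-Monteith equation
\lambda E = \frac{\Delta (R_n - G) + \rho_a c_p \,(e_s - e_a)/r_a}
                 {\Delta + \gamma \left(1 + r_s/r_a\right)}
```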

  11. Estimating the signal-to-noise ratio of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Curran, Paul J.; Dungan, Jennifer L.

    1988-01-01

    To make the best use of narrowband airborne visible/infrared imaging spectrometer (AVIRIS) data, an investigator needs to know the ratio of signal to random variability or noise (signal-to-noise ratio or SNR). The signal is land cover dependent and varies with both wavelength and atmospheric absorption; random noise comprises sensor noise and intrapixel variability (i.e., variability within a pixel). The three existing methods for estimating the SNR are inadequate, since typical laboratory methods inflate the SNR while dark current and image methods deflate it. A new procedure, called the geostatistical method, is proposed. It is based on the removal of periodic noise by notch filtering in the frequency domain and the isolation of sensor noise and intrapixel variability using the semi-variogram. This procedure was applied easily and successfully to five sets of AVIRIS data from the 1987 flying season and could be applied to remotely sensed data from broadband sensors.
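
    A simplified sketch of the underlying idea: estimate the nugget of a transect semivariogram (the variance remaining as the lag goes to zero) as the noise variance, and take SNR as mean signal over its square root. This omits the paper's notch-filtering step and uses a crude linear extrapolation; the transect is synthetic.

```python
import numpy as np

def nugget_snr(pixels, max_lag=10):
    """Estimate SNR as mean signal / sqrt(nugget), where the nugget is the
    semivariogram extrapolated to lag 0 (simple linear extrapolation
    from the first two lags of a 1-D transect)."""
    gamma = np.array([0.5 * np.mean((pixels[h:] - pixels[:-h]) ** 2)
                      for h in range(1, max_lag + 1)])
    nugget = max(gamma[0] - (gamma[1] - gamma[0]), 0.0)   # extrapolate to h = 0
    return pixels.mean() / np.sqrt(nugget) if nugget > 0 else np.inf

# Hypothetical transect: smooth land-cover signal plus sensor noise
x = np.linspace(0, 10, 500)
transect = 100 + 5 * np.sin(x) + np.random.normal(0, 1.0, x.size)
print("estimated SNR:", round(nugget_snr(transect), 1))
```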

  12. An approach to estimate body dimensions through constant body ratio benchmarks.

    PubMed

    Chao, Wei-Cheng; Wang, Eric Min-Yang

    2010-12-01

    Building a new anthropometric database is a difficult and costly job that requires considerable manpower and time. However, most designers and engineers do not know how to convert old anthropometric data into applicable new data with minimal errors and costs (Wang et al., 1999). To simplify the process of converting old anthropometric data into useful new data, this study analyzed the available data in paired body dimensions in an attempt to determine constant body ratio (CBR) benchmarks that are independent of gender and age. In total, 483 CBR benchmarks were identified and verified from 35,245 ratios analyzed. Additionally, 197 estimation formulae, taking as inputs 19 easily measured body dimensions, were built using 483 CBR benchmarks. Based on the results for 30 recruited participants, this study determined that the described approach is more accurate and cost-effective than alternative techniques. Copyright © 2010 Elsevier Ltd. All rights reserved.
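
    A minimal sketch of the screening idea behind a constant body ratio benchmark: check whether the ratio of a paired dimension is stable (low coefficient of variation) and, if so, use it as a multiplier on an easily measured dimension. The dimensions, numbers and stability threshold below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical paired measurements for two body dimensions (mm)
stature = np.array([1620., 1685., 1710., 1750., 1800., 1655., 1730.])
arm_span = np.array([1615., 1690., 1705., 1760., 1810., 1650., 1735.])

ratio = arm_span / stature
cv = ratio.std(ddof=1) / ratio.mean()       # coefficient of variation of the ratio

# Treat the pair as a constant-body-ratio (CBR) benchmark if the ratio is stable
if cv < 0.02:                               # illustrative stability threshold
    k = ratio.mean()
    estimate = k * 1700.0                   # estimate arm span from an easily measured stature
    print(f"CBR = {k:.4f} (CV = {cv:.3%}); estimated arm span = {estimate:.0f} mm")
```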

  13. Tire-road friction estimation and traction control strategy for motorized electric vehicle.

    PubMed

    Jin, Li-Qiang; Ling, Mingze; Yue, Weiqiang

    2017-01-01

    In this paper, an optimal longitudinal slip ratio system for real-time identification in an electric vehicle (EV) with motored wheels is proposed based on the adhesion between tire and road surface. First, the optimal longitudinal slip ratio for torque control is identified in real time by calculating the derivative of the adhesion coefficient with respect to the slip ratio. Second, a vehicle speed estimation method is presented. Third, an ideal vehicle simulation model is used to verify the algorithm, and the simulations show that the adhesion limit can be detected from the slip ratio in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the wheel state and calculate the optimal slip ratio without a wheel speed sensor, while improving the acceleration stability of an electric vehicle with TCS.
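
    A minimal sketch of the slip-ratio bookkeeping implied above: compute the longitudinal slip from wheel and vehicle speeds and flag the adhesion limit where the slope of the adhesion coefficient against slip drops to zero. The sample mu(lambda) points and speeds are placeholders, not the paper's data or control law.

```python
import numpy as np

def slip_ratio(wheel_speed_rad_s, wheel_radius_m, vehicle_speed_m_s):
    """Longitudinal slip ratio lambda = (w*r - v) / max(w*r, v) (driving convention)."""
    wr = wheel_speed_rad_s * wheel_radius_m
    return (wr - vehicle_speed_m_s) / max(wr, vehicle_speed_m_s, 1e-3)

# Hypothetical mu(lambda) samples logged during acceleration (placeholders)
lam = np.array([0.02, 0.05, 0.08, 0.11, 0.14, 0.17, 0.20])
mu  = np.array([0.20, 0.45, 0.65, 0.78, 0.84, 0.85, 0.83])

dmu_dlam = np.gradient(mu, lam)
optimal_index = np.argmax(dmu_dlam <= 0)      # first point where the slope reaches zero
print("optimal slip ratio ~", lam[optimal_index])
print("current slip:", round(slip_ratio(52.0, 0.3, 15.0), 3))
```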

  14. Tire-road friction estimation and traction control strategy for motorized electric vehicle

    PubMed Central

    Jin, Li-Qiang; Yue, Weiqiang

    2017-01-01

    In this paper, an optimal longitudinal slip ratio system for real-time identification in an electric vehicle (EV) with motored wheels is proposed based on the adhesion between tire and road surface. First, the optimal longitudinal slip ratio for torque control is identified in real time by calculating the derivative of the adhesion coefficient with respect to the slip ratio. Second, a vehicle speed estimation method is presented. Third, an ideal vehicle simulation model is used to verify the algorithm, and the simulations show that the adhesion limit can be detected from the slip ratio in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the wheel state and calculate the optimal slip ratio without a wheel speed sensor, while improving the acceleration stability of an electric vehicle with TCS. PMID:28662053

  15. Are Methods for Estimating Primary Production and the Growth Rates of Phytoplankton Approaching Agreement?

    NASA Astrophysics Data System (ADS)

    Cullen, J. J.

    2016-02-01

    During the 1980s, estimates of primary productivity and the growth rates of phytoplankton in oligotrophic waters were controversial, in part because rates based on seasonal accumulations of oxygen in the shallow oxygen maximum were reported to be much higher than could be accounted for with measurements of photosynthesis based on incubations with C-14. Since then, much has changed: tested and standardized methods have been employed to collect comprehensive time-series observations of primary production and related oceanographic properties in oligotrophic waters of the North Pacific subtropical gyre and the Sargasso Sea; technical and theoretical advances have led to new tracer-based estimates of photosynthesis (e.g., oxygen/argon and triple isotopes of dissolved oxygen); and biogeochemical sensor systems on ocean gliders and profiling floats can describe with unprecedented resolution the dynamics of phytoplankton, oxygen and nitrate as driven by growth, loss processes including grazing, and vertical migration for nutrient acquisition. Meanwhile, the estimation of primary productivity, phytoplankton biomass and phytoplankton growth rates from remote sensing of ocean color has matured, complementing biogeochemical models that describe and predict these key properties of plankton dynamics. In a selective review focused on well-studied oligotrophic waters, I compare methods for estimating the primary productivity and growth rates of phytoplankton to see if they are converging on agreement, not only in the estimated rates, but also in the underlying assumptions, such as the ratio of gross- to net primary production — and how this relates to the measurement — and the ratio of chlorophyll to carbon in phytoplankton. Examples of agreement are encouraging, but some stark contrasts illustrate the need for improved mechanistic understanding of exactly what each method is measuring.

  16. Parameter estimation using weighted total least squares in the two-compartment exchange model.

    PubMed

    Garpebring, Anders; Löfstedt, Tommy

    2018-01-01

    The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, to an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy compared to the LLS method to levels comparable to the NLLS method. This improvement was at the expense of increased computational time; however, the WTLS was still faster than the NLLS method. At high signal-to-noise ratio all methods provided similar precisions while inconclusive results were observed at low signal-to-noise ratio. The proposed method provides improvements in accuracy compared to the LLS method, however, at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  17. Urinary Uric Acid/Creatinine Ratio - A Marker For Perinatal Asphyxia

    PubMed Central

    Patel, Kinjal Prahaladbhai; Makadia, Mayur Goradhanbhai; Patel, Vishwal Indravardan; Nilayangode, Haridas Neelakandan

    2017-01-01

    Background Perinatal hypoxia is one of the leading causes of perinatal mortality in developing countries. Both the Apgar score and arterial blood pH predict neonatal mortality in asphyxia. The Apgar score alone does not predict neurologic outcome, as it is influenced by various factors. This study was conducted to evaluate the utility and sensitivity of the urinary uric acid to creatinine ratio (UA/Cr ratio) in the diagnosis of asphyxia, compared to invasive Arterial Blood Gas (ABG) analysis. Aim To assess the urinary uric acid/creatinine ratio as an additional marker for perinatal asphyxia, compared with ABG analysis, alongside Apgar score monitoring. Materials and Methods The present case control study was conducted at a teaching hospital in Central Gujarat. Data from 40 healthy newborns and 40 asphyxiated newborns were collected. In the absence of regional estimates, a sample size of 39 was required to attain a power of 80% at 5% alpha (type I error), assuming a moderate effect size of 0.65. The UA/Cr ratio was measured from a spot urine sample collected 24-72 hours after birth. Statistical analysis was performed using the Independent t-test, Pearson's correlation coefficient (r) and Receiver Operating Characteristic (ROC) plots. Results The mean UA/Cr ratio was significantly higher in the asphyxiated group than in the control group (2.75±0.18 vs 1.78±0.23; p<0.0001). The urinary UA/Cr ratio had a negative but non-significant correlation with blood pH (r = -0.27, p = 0.18). A urinary UA/Cr ratio criterion of >2.3 had 100% sensitivity and 100% specificity, with an AUC of 1 (p<0.0001), indicating good predictive value. Conclusions The Apgar score is usually reduced in neonates with congenital anomalies and in premature neonates. Hence, it is preferable that the clinical diagnosis of asphyxia by Apgar scores be supported by other investigations so that an early decision can be made about the level of care the baby needs. pH, lactate and base deficit change with the establishment of respiration following resuscitation; moreover, their estimation is invasive and must be performed rapidly. The non-invasive urinary UA/Cr ratio may be an answer to these issues as it is easy, economical and equally efficient. PMID:28274014

  18. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    NASA Astrophysics Data System (ADS)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from empirical studies using animals as the subject of experiment or is derived from mathematical equations. However, the determination of the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratios (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
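
    A generic sketch of how an optical density ratio feeds a calibration curve in pulse oximetry: the "ratio of ratios" R is computed from pulsatile (AC) and baseline (DC) signals at two wavelengths and mapped to SaO2. The linear coefficients below are commonly cited illustrative placeholders, not the curve derived by the study's Monte Carlo simulation.

```python
import numpy as np

def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Optical density ratio (ratio of ratios) used in pulse-oximetry calibration:
    R = (AC_red / DC_red) / (AC_ir / DC_ir)."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_from_r(r, a=110.0, b=25.0):
    """Illustrative linear calibration SpO2 = a - b*R; a and b are placeholder
    coefficients that would normally come from an empirical or simulated curve."""
    return np.clip(a - b * r, 0.0, 100.0)

r = ratio_of_ratios(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0)
print(f"R = {r:.2f}, SpO2 ~ {spo2_from_r(r):.0f}%")
```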

  19. An Interactive Software for Conceptual Wing Flutter Analysis and Parametric Study

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1996-01-01

    An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well-defined structural and aerodynamic data are not available, and then study the effect of change in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed for Macintosh or IBM compatible personal computers, on MathCad application software with integrated documentation, graphics, database and symbolic mathematics. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The parametric plots were compiled in a Vought Corporation report from a vast database of past experiments and wind-tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended-Wing-Body concept, proposed by McDonnell Douglas Corp. Using a set of assumed data, preliminary flutter boundary and flutter dynamic pressure variation with altitude, Mach number and torsional stiffness were determined.

  20. Applying the compound Poisson process model to the reporting of injury-related mortality rates.

    PubMed

    Kegler, Scott R

    2007-02-16

    Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
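
    A minimal sketch of the variance adjustment implied by the compound Poisson model: with incidents arriving as a Poisson process and X cases per incident, Var(total cases) = lambda * E[X^2], which can be estimated from the sum of squared cases per incident rather than the total count. The incident data and population below are placeholders, and the specific estimators in the paper may differ from this simplification.

```python
import numpy as np

# Hypothetical incident-level data: number of fatalities in each incident
cases_per_incident = np.array([1, 1, 2, 1, 3, 1, 1, 4, 1, 1])
population = 500_000
z = 1.96

total_cases = cases_per_incident.sum()
rate = total_cases / population * 100_000          # rate per 100,000

# Simple Poisson model: Var(total cases) is estimated by the total count
se_poisson = np.sqrt(total_cases) / population * 100_000

# Compound Poisson: Var(total cases) = lambda * E[X^2], estimated by the sum of squares
se_compound = np.sqrt((cases_per_incident ** 2).sum()) / population * 100_000

print(f"rate = {rate:.2f} per 100k")
print(f"95% CI half-width (simple Poisson):   {z * se_poisson:.2f}")
print(f"95% CI half-width (compound Poisson): {z * se_compound:.2f}")
```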

  1. Methane Emissions from Bangladesh: Bridging the Gap Between Ground-based and Space-borne Estimates

    NASA Astrophysics Data System (ADS)

    Peters, C.; Bennartz, R.; Hornberger, G. M.

    2015-12-01

    Gaining an understanding of methane (CH4) emission sources and atmospheric dispersion is an essential part of climate change research. Large-scale and global studies often rely on satellite observations of column CH4 mixing ratio whereas high-spatial resolution estimates rely on ground-based measurements. Extrapolation of ground-based measurements on, for example, rice paddies to broad region scales is highly uncertain because of spatio-temporal variability. We explore the use of ground-based river stage measurements and independent satellite observations of flooded area along with satellite measurements of CH4 mixing ratio to estimate the extent of methane emissions. Bangladesh, which comprises most of the Ganges Brahmaputra Meghna (GBM) delta, is a region of particular interest for studying spatio-temporal variation of methane emissions due to (1) broadscale rice cultivation and (2) seasonal flooding and atmospheric convection during the monsoon. Bangladesh and its deltaic landscape exhibit a broad range of environmental, economic, and social circumstances that are relevant to many nations in South and Southeast Asia. We explore the seasonal enhancement of CH4 in Bangladesh using passive remote sensing spectrometer CH4 products from the SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY) and the Atmospheric Infrared Sounder (AIRS). The seasonal variation of CH4 is compared to independent estimates of seasonal flooding from water gauge stations and space-based passive microwave water-to-land fractions from the Tropical Rainfall Measuring Mission Microwave Imager (TRMM-TMI). Annual cycles in inundation (natural and anthropogenic) and atmospheric CH4 concentrations show highly correlated seasonal signals. NOAA's HYSPLIT model is used to determine atmospheric residence time of ground CH4 fluxes. Using the satellite observations, we can narrow the large uncertainty in extrapolation of ground-based CH4 emission estimates from rice paddies, allowing for country-wide upscaling of high spatial resolution data. This approach allows for better informed carbon cycling modeling for the GBM delta and is applicable to other regions.

  2. Value based care and bundled payments: Anesthesia care costs for outpatient oncology surgery using time-driven activity-based costing.

    PubMed

    French, Katy E; Guzman, Alexis B; Rubio, Augustin C; Frenzel, John C; Feeley, Thomas W

    2016-09-01

    With the movement towards bundled payments, stakeholders should know the true cost of the care they deliver. Time-driven activity-based costing (TDABC) can be used to estimate costs for each episode of care. In this analysis, TDABC is used both to estimate the costs of anesthesia care and to identify the primary drivers of those costs for 11 common oncologic outpatient surgical procedures. Personnel costs were calculated by determining the hourly cost of each provider and the associated process time for the 11 surgical procedures. Using the anesthesia record, drug, supply and equipment costs were identified and calculated. The current staffing model was used to determine baseline personnel costs for each procedure. Using the costs identified through TDABC analysis, the effect of different staffing ratios on anesthesia costs could be predicted. Costs for each of the procedures were determined. Process time and costs are linearly related. Personnel represented 79% of overall cost while drugs, supplies and equipment represented the remaining 21%. Changing staffing ratios shows potential savings of between 13% and 28% across the 11 procedures. TDABC can be used to estimate the costs of anesthesia care. This costing information is critical to assessing the anesthesiology component in a bundled payment. It can also be used to identify areas of cost savings and to model costs of anesthesia care. CRNA to anesthesiologist staffing ratios profoundly influence the cost of care. This methodology could be applied to other medical specialties to help determine costs in the setting of bundled payments. Copyright © 2015 Elsevier Inc. All rights reserved.
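
    A toy TDABC-style calculation showing how process time and the staffing ratio drive per-case personnel cost; the hourly rates, process time and supply cost are placeholders, not figures from the study.

```python
# Hypothetical hourly rates (USD) and process times (hours); placeholders, not study data.
rates = {"anesthesiologist": 250.0, "crna": 100.0}
procedure_hours = 1.5
drugs_supplies_equipment = 120.0

def anesthesia_cost(staffing_ratio):
    """Per-case cost under a given CRNA:anesthesiologist staffing ratio, where the
    anesthesiologist's time is split across `staffing_ratio` concurrent rooms."""
    personnel = (rates["crna"] * procedure_hours
                 + rates["anesthesiologist"] * procedure_hours / staffing_ratio)
    return personnel + drugs_supplies_equipment

for ratio in (1, 2, 4):
    print(f"{ratio}:1 CRNA coverage -> ${anesthesia_cost(ratio):.2f} per case")
```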

  3. Value Based Care and Bundled Payments: Anesthesia Care Costs for Outpatient Oncology Surgery Using Time-Driven Activity-Based Costing

    PubMed Central

    French, Katy E.; Guzman, Alexis B.; Rubio, Augustin C.; Frenzel, John C.; Feeley, Thomas W

    2015-01-01

    Background With the movement towards bundled payments, stakeholders should know the true cost of the care they deliver. Time-driven activity-based costing (TDABC) can be used to estimate costs for each episode of care. In this analysis, TDABC is used both to estimate the costs of anesthesia care and to identify the primary drivers of those costs for 11 common oncologic outpatient surgical procedures. Methods Personnel costs were calculated by determining the hourly cost of each provider and the associated process time for the 11 surgical procedures. Using the anesthesia record, drug, supply and equipment costs were identified and calculated. The current staffing model was used to determine baseline personnel costs for each procedure. Using the costs identified through TDABC analysis, the effect of different staffing ratios on anesthesia costs could be predicted. Results Costs for each of the procedures were determined. Process time and costs are linearly related. Personnel represented 79% of overall cost while drugs, supplies and equipment represented the remaining 21%. Changing staffing ratios shows potential savings of 13-28% across the 11 procedures. Conclusions TDABC can be used to estimate the costs of anesthesia care. This costing information is critical to assessing the anesthesiology component in a bundled payment. It can also be used to identify areas of cost savings and to model costs of anesthesia care. CRNA to anesthesiologist staffing ratios profoundly influence the cost of care. This methodology could be applied to other medical specialties to help determine costs in the setting of bundled payments. PMID:27637823

  4. Reconstruction and analysis of cesium-137 fallout deposition patterns in the Marshall Islands

    NASA Astrophysics Data System (ADS)

    Whitcomb, Robert Cleckley, Jr.

    Estimates of 137Cs deposition due to fallout originating from nuclear weapons testing in the Marshall Islands have been made for several locations in the Marshall Islands. These retrospective estimates were based primarily on historical exposure rate and gummed film measurements. The methods used to reconstruct these deposition estimates are specific for six of the Pacific tests. These methods are also similar to those used in the National Cancer Institute study for reconstructing 131I deposition from the Nevada Test Site. Reconstructed cumulative deposition estimates are validated against contemporary measurements of 137Cs concentration in soil. This validation work also includes an accounting for estimated global fallout contributions. These validations show that the overall geometric bias in predicted-to-observed (P/O) ratios is 1.0 (indicating excellent agreement). The 5th and 95th percentile range of this distribution is 0.35--2.95. The P/O ratios for estimates using historical gummed film measurements tend to slightly over-predict more than estimates using exposure rate measurements. The methods produce reasonable estimates of deposition confirming that radioactive fallout occurred at atolls further south of the four northern atolls recognized by the Department of Energy as being affected by fallout. The deposition estimate methods, supported by the very good agreement between estimates and measurements, suggest that these methods can be used for other weapons testing fallout radionuclides with confidence.

  5. Fuzzy classifier based support vector regression framework for Poisson ratio determination

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2013-09-01

    The Poisson ratio is considered one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time, cost, and labor intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining the Poisson ratio that produces continuous data over the whole reservoir interval is desirable. For this purpose, the support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate the Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. The structural risk minimization (SRM) principle, which is embedded in the SVR structure in addition to the empirical risk minimization (ERM) principle, provides a robust model for finding a quantitative formulation between conventional well log data and the Poisson ratio. Although satisfying results were obtained from an individual SVR model, it tended to overestimate low Poisson ratios and underestimate high ones. These errors were eliminated through implementation of a fuzzy classifier based SVR (FCBSVR), which significantly improved the accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian oil field. Results indicated that the SVR-predicted Poisson ratio values are in good agreement with measured values.
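
    An illustrative sketch of the core SVR step using scikit-learn on synthetic "well log" features; the fuzzy-classifier stage is omitted, and the feature names, data and hyperparameters are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
# Synthetic "well log" features, e.g. sonic travel time, density, neutron porosity
X = rng.normal(size=(200, 3))
poisson_ratio = 0.25 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + 0.01 * rng.normal(size=200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
model.fit(X[:150], poisson_ratio[:150])
pred = model.predict(X[150:])
print("RMSE on held-out samples:",
      round(float(np.sqrt(np.mean((pred - poisson_ratio[150:]) ** 2))), 4))
```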

  6. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
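
    A quick simulation in the spirit of the analysis described above, assuming a synthetic population with fixed prevalence, sensitivity and specificity, and showing how the spread of the positive likelihood ratio from repeated subsamples narrows as the sample size grows; all numbers are illustrative, not the cohort data.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 8000
outcome = rng.random(N) < 0.30                       # "true" condition, prevalence 30%
# Synthetic test with 80% sensitivity and 70% specificity
test_pos = np.where(outcome, rng.random(N) < 0.80, rng.random(N) < 0.30)

def lr_positive(idx):
    o, t = outcome[idx], test_pos[idx]
    sens = (t & o).sum() / max(o.sum(), 1)
    spec = (~t & ~o).sum() / max((~o).sum(), 1)
    return sens / max(1 - spec, 1e-9)                # positive likelihood ratio

for n in (100, 200, 400, 800):
    lrs = [lr_positive(rng.choice(N, n, replace=False)) for _ in range(100)]
    print(f"n = {n:4d}: LR+ 5th-95th percentile = "
          f"{np.percentile(lrs, 5):.2f}-{np.percentile(lrs, 95):.2f}")
```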

  7. Parametric Model Based On Imputations Techniques for Partly Interval Censored Data

    NASA Astrophysics Data System (ADS)

    Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah

    2017-12-01

    The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis. In this case, the outcome variable of interest is the time until an event occurs, where the time to failure of a specific experimental unit may be censored: right, left, interval, or partly interval censored (PIC). In this paper, the analysis was conducted based on a parametric Cox model for PIC data. Moreover, several imputation techniques were used: midpoint, left & right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which demonstrated the validity of the proposed model. The results indicated that the parametric Cox model was superior in terms of estimation of survival functions, likelihood ratio tests, and their p-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median imputations showed better results with respect to estimation of the survival function.
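
    A minimal sketch of the interval-imputation step that precedes fitting the parametric model, shown for the two unambiguous variants (midpoint and random within the interval); the intervals are placeholders, and the other variants mentioned above follow analogously.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical interval-censored observations: the event occurred somewhere in (left, right]
left  = np.array([2.0, 5.0, 1.0, 7.0, 3.0])
right = np.array([4.0, 9.0, 3.0, 12.0, 6.0])

midpoint_imputed = (left + right) / 2           # midpoint imputation
random_imputed   = rng.uniform(left, right)     # random imputation within each interval

print("midpoint:", midpoint_imputed)
print("random:  ", np.round(random_imputed, 2))
```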

  8. Insight into nitrous oxide production processes in the western North Pacific based on a marine ecosystem isotopomer model

    NASA Astrophysics Data System (ADS)

    Yoshikawa, C.; Sasai, Y.; Wakita, M.; Honda, M. C.; Fujiki, T.; Harada, N.; Makabe, A.; Matsushima, S.; Toyoda, S.; Yoshida, N.; Ogawa, N. O.; Suga, H.; Ohkouchi, N.

    2016-02-01

    Based on the observed inverse relationship between dissolved oxygen and N2O concentrations in the ocean, previous models have indirectly predicted marine N2O emissions from the apparent oxygen utilization (AOU). In this study, a marine ecosystem model that incorporates nitrous oxide (N2O) production processes (i.e., ammonium oxidation during nitrification and nitrite reduction during nitrifier denitrification) was newly developed to estimate the sea-air N2O flux and to quantify N2O production processes. The site preference of 15N (SP) in N2O isotopomers (14N15N16O and 15N14N16O) and the average nitrogen isotope ratio (δ15N) were added to the model because they are useful tracers for distinguishing between ammonium oxidation and nitrite reduction. The model was applied to two contrasting time series sites, a subarctic station (K2) and a subtropical station (S1) in the western North Pacific. The model was validated with observed nitrogen concentration and nitrogen isotopomer datasets, and successfully simulated the higher N2O concentrations, higher δ15N values, and higher site preference values for N2O at K2 compared with S1. The annual mean N2O emissions were estimated to be 34 mg N m-2 yr-1 at K2 and 2 mg N m-2 yr-1 at S1. Using this model, we conducted three case studies: 1) estimating the ratio of in situ biological N2O production to nitrate (NO3-) production during nitrification, 2) estimating the ratio of N2O production by ammonium oxidation to that by nitrite reduction, and 3) estimating the ratio of AOA (archaeal) to AOB (bacterial) ammonium oxidation. The case studies estimated the ratio of in situ biological N2O production to nitrate production during nitrification to be 0.22% at K2 and 0.06% at S1. They also suggested that N2O was mainly produced via ammonium oxidation at K2 but via both ammonium oxidation and nitrite reduction at S1, and that 80% of the ammonium oxidation at K2 was carried out by archaea in the subsurface water. The results of isotope tracer incubation experiments using an archaeal activity inhibitor supported this hypothesis.
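
    For reference, the isotopomer site preference used as a tracer above is conventionally defined as follows (standard definition, not reproduced from the study):

```latex
\mathrm{SP} = \delta^{15}\mathrm{N}^{\alpha} - \delta^{15}\mathrm{N}^{\beta},
\qquad \text{where } \alpha \text{ denotes the central and } \beta \text{ the terminal N atom of N}_2\mathrm{O}.
```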

  9. Detecting spatial structures in throughfall data: the effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-04-01

    In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.

  10. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    PubMed

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.

  11. Chondrule magnetic properties

    NASA Technical Reports Server (NTRS)

    Wasilewski, P. J.; Obryan, M. V.

    1994-01-01

    The topics discussed include the following: chondrule magnetic properties; chondrules from the same meteorite; and REM values (the ratio of the remanence initially measured to the saturation remanence acquired in a 1 Tesla field). Preliminary estimates of the fields in chondrule magnetizing environments range from minimal to at least several mT. These estimates are based on REM values and on the thermal demagnetization behavior of the initially measured (natural) remanence compared with the demagnetization of the saturation remanence acquired in a 1 Tesla field.

  12. Size matters: relationships between body size and body mass of common coastal, aquatic invertebrates in the Baltic Sea.

    PubMed

    Eklöf, Johan; Austin, Åsa; Bergström, Ulf; Donadi, Serena; Eriksson, Britas D H K; Hansen, Joakim; Sundblad, Göran

    2017-01-01

    Organism biomass is one of the most important variables in ecological studies, making biomass estimation one of the most common laboratory tasks. Biomass of small macroinvertebrates is usually estimated as dry mass or ash-free dry mass (hereafter 'DM' vs. 'AFDM') per sample; a laborious and time-consuming process that can often be sped up using easily measured and reliable proxy variables like body size or wet (fresh) mass. Another common way of estimating AFDM (one of the most accurate but also time-consuming estimates of biologically active tissue mass) is the use of AFDM/DM ratios as conversion factors. So far, however, these ratios typically ignore the possibility that the relative mass of biologically active vs. non-active support tissue (e.g., protective exoskeleton or shell), and therefore also AFDM/DM ratios, may change with body size, as previously shown for taxa like spiders, vertebrates and trees. We collected aquatic, epibenthic macroinvertebrates (>1 mm) in 32 shallow bays along a 360 km stretch of the Swedish coast of the Baltic Sea, one of the largest brackish water bodies on Earth. We then estimated statistical relationships between body size (length or height in mm), body dry mass and ash-free dry mass for 14 of the most common taxa: five gastropods, three bivalves, three crustaceans and three insect larvae. Finally, we statistically estimated the potential influence of body size on the AFDM/DM ratio per taxon. For most taxa, non-linear regression models describing the power relationship between body size and (i) DM and (ii) AFDM fit the data well (as indicated by low SE and high R2). Moreover, for more than half of the taxa studied (including the vast majority of the shelled molluscs), body size had a negative influence on organism AFDM/DM ratios. The good fit of the modelled power relationships suggests that the constants reported here can be used to quickly estimate organism dry and ash-free dry mass based on body size, thereby freeing up considerable work resources. However, the considerable differences in constants between taxa emphasize the need for taxon-specific relationships, and the potential dangers associated with ignoring body size. The negative influence of body size on the AFDM/DM ratio found in a majority of the molluscs could be caused by increasingly thicker shells with organism age, and/or spawning-induced loss of biologically active tissue in adults. Consequently, future studies utilizing AFDM/DM (and presumably also AFDM/wet mass) ratios should carefully assess the potential influence of body size to ensure more reliable estimates of organism body mass.
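
    The reported relationships have the power-law form mass = a × size^b, so taxon-specific constants can be fitted (or reused) with a standard nonlinear least-squares routine. The sketch below uses made-up lengths and dry masses purely to illustrate the fitting step, not data from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def power_law(length_mm, a, b):
            return a * length_mm ** b

        # hypothetical shell lengths (mm) and dry masses (mg) for one taxon
        length = np.array([2.1, 3.4, 4.8, 6.0, 7.5, 9.2, 11.0])
        dry_mass = np.array([0.4, 1.3, 3.1, 5.6, 10.2, 17.5, 28.0])

        (a, b), _ = curve_fit(power_law, length, dry_mass, p0=(0.1, 2.5))
        dm_5mm = power_law(5.0, a, b)    # estimated dry mass of a 5 mm individual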

  13. Can we improve top-down GHG inverse methods through informed prior and better representations of atmospheric transport? Insights from the Atmospheric Carbon and Transport (ACT) - America Aircraft Mission

    NASA Astrophysics Data System (ADS)

    Feng, S.; Lauvaux, T.; Keller, K.; Davis, K. J.

    2016-12-01

    Current estimates of biogenic carbon fluxes over North America based on top-down atmospheric inversions are subject to considerable uncertainty. This uncertainty stems in large part from uncertain prior flux estimates and their associated error covariances, and from approximations in the atmospheric transport models that link observed carbon dioxide mixing ratios with surface fluxes. Specifically, approximations in the representation of vertical mixing associated with atmospheric turbulence or convective transport, together with largely under-determined prior fluxes and their error structures, significantly hamper our capacity to reliably estimate regional carbon fluxes. The Atmospheric Carbon and Transport - America (ACT-America) mission aims at reducing the uncertainties in inverse fluxes at the regional scale by deploying airborne and ground-based platforms to characterize atmospheric GHG mixing ratios and the concurrent atmospheric dynamics. Two aircraft measure the 3-dimensional distribution of greenhouse gases at synoptic scales, focusing on the atmospheric boundary layer and the free troposphere during both fair and stormy weather conditions. Here we analyze two main questions: (i) What level of information can we expect from the currently planned observations? (ii) How might ACT-America reduce the hindcast and predictive uncertainty of carbon estimates over North America?

  14. Biomass production on the Olympic and Kitsap Peninsulas, Washington: updated logging residue ratios, slash pile volume-to-weight ratios, and supply curves for selected locations

    Treesearch

    Jason C. Cross; Eric C. Turnblom; Gregory J. Ettl

    2013-01-01

    Biomass residue produced by timber harvest operations is estimated for the Olympic and Kitsap Peninsulas, Washington. Scattered residues were sampled in 53 harvest units and piled residues were completely enumerated in 55 harvest units. Production is based on 2008 and 2009 data and is stratified by forest location, ownership type, harvest intensity, and harvest method...

  15. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    USGS Publications Warehouse

    Shrestha, M.S.; Artan, G.A.; Bajracharya, S.R.; Gautam, D.K.; Tokar, S.A.

    2011-01-01

    In Nepal, because the spatial distribution of rain gauges is not sufficient to provide a detailed perspective on the highly varied spatial nature of rainfall, satellite-based rainfall estimates provide an opportunity for timely estimation. This paper presents flood prediction for the Narayani Basin at the Devghat hydrometric station (32 000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. GeoSFM was driven with gridded gauge-observed rainfall interpolated by kriging; data from 2003 were used for calibration and 2004 for validation, and simulated streamflow in both periods had a Nash-Sutcliffe efficiency above 0.7. With the National Oceanic and Atmospheric Administration Climate Prediction Center's rainfall estimates (CPC-RFE2.0) and the same calibrated parameters, model performance for 2003 deteriorated but improved after recalibration with CPC-RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. Adjusting the CPC-RFE2.0 by seasonal, monthly and 7-day moving-average ratios further improved model performance. Furthermore, a new gauge-satellite merged rainfall estimate obtained by ingesting local rain gauge data resulted in a significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates for flood prediction with appropriate bias correction. © 2011 The Authors. Journal of Flood Risk Management © 2011 The Chartered Institution of Water and Environmental Management.
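
    A multiplicative bias adjustment of the kind described, in which satellite rainfall is scaled by a gauge/satellite ratio accumulated over a moving window, can be sketched as follows. The variable names, the trailing 7-day window and the sample series are illustrative assumptions rather than the actual GeoSFM preprocessing.

        import numpy as np

        def ratio_bias_adjust(satellite, gauge, window=7):
            """Scale satellite rainfall by a trailing moving-window gauge/satellite ratio."""
            satellite = np.asarray(satellite, dtype=float)
            gauge = np.asarray(gauge, dtype=float)
            adjusted = np.empty_like(satellite)
            for t in range(len(satellite)):
                lo = max(0, t - window + 1)
                sat_sum = satellite[lo:t + 1].sum()
                ratio = gauge[lo:t + 1].sum() / sat_sum if sat_sum > 0 else 1.0
                adjusted[t] = satellite[t] * ratio
            return adjusted

        # hypothetical daily basin-average rainfall (mm/day)
        sat = [0.0, 5.0, 12.0, 3.0, 0.0, 20.0, 8.0]
        gau = [0.0, 7.0, 15.0, 4.0, 0.0, 24.0, 10.0]
        adj = ratio_bias_adjust(sat, gau)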

  16. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    USGS Publications Warehouse

    Artan, Guleid A.; Tokar, S.A.; Gautam, D.K.; Bajracharya, S.R.; Shrestha, M.S.

    2011-01-01

    In Nepal, because the spatial distribution of rain gauges is not sufficient to provide a detailed perspective on the highly varied spatial nature of rainfall, satellite-based rainfall estimates provide an opportunity for timely estimation. This paper presents flood prediction for the Narayani Basin at the Devghat hydrometric station (32 000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. GeoSFM was driven with gridded gauge-observed rainfall interpolated by kriging; data from 2003 were used for calibration and 2004 for validation, and simulated streamflow in both periods had a Nash-Sutcliffe efficiency above 0.7. With the National Oceanic and Atmospheric Administration Climate Prediction Center's rainfall estimates (CPC_RFE2.0) and the same calibrated parameters, model performance for 2003 deteriorated but improved after recalibration with CPC_RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. Adjusting the CPC_RFE2.0 by seasonal, monthly and 7-day moving-average ratios further improved model performance. Furthermore, a new gauge-satellite merged rainfall estimate obtained by ingesting local rain gauge data resulted in a significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates for flood prediction with appropriate bias correction.

  17. Survival analysis using inverse probability of treatment weighted methods based on the generalized propensity score.

    PubMed

    Sugihara, Masahiro

    2010-01-01

    In survival analysis, treatment effects are commonly evaluated based on survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one of the approaches utilized to adjust for confounding factors between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log-rank test based on the generalized propensity score for survival data. This makes it possible to compare the group differences of IPTW Kaplan-Meier estimators of survival curves using an IPTW log-rank test for multi-valued treatments. As causal treatment effects, the hazard ratio can be estimated using the IPTW approach. If the treatments correspond to ordered levels of a treatment, the proposed method can be easily extended to the analysis of treatment effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results of the proposed method suggested that pravastatin treatment reduces the risk of CHD and that compliance to pravastatin treatment is important for the prevention of CHD. (c) 2009 John Wiley & Sons, Ltd.
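
    The weighting step of such an analysis, inverse probability of the treatment actually received based on a generalized propensity score for a multi-valued treatment, might look like the sketch below. The multinomial logistic model, covariates and stabilization are illustrative choices and not necessarily the exact formulation used in the paper.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def iptw_weights(X, treatment, stabilize=True):
            """Inverse probability of treatment weights for a multi-valued treatment."""
            model = LogisticRegression(max_iter=1000).fit(X, treatment)
            probs = model.predict_proba(X)
            levels = list(model.classes_)
            # probability of the treatment each subject actually received
            p_received = probs[np.arange(len(treatment)),
                               [levels.index(t) for t in treatment]]
            w = 1.0 / p_received
            if stabilize:   # multiply by marginal treatment probabilities
                uniq, counts = np.unique(treatment, return_counts=True)
                marginal = dict(zip(uniq, counts / len(treatment)))
                w *= np.array([marginal[t] for t in treatment])
            return w

        # hypothetical data: 3 ordered treatment levels, 2 confounders
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 2))
        trt = rng.integers(0, 3, size=300)
        weights = iptw_weights(X, trt)   # then use as case weights in weighted KM / log-rank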

  18. The association between maternal smoking and placenta abruption: a meta-analysis.

    PubMed

    Shobeiri, Fatemeh; Masoumi, Seyedeh Zahra; Jenabi, Ensiyeh

    2017-08-01

    Several epidemiological studies have determined that maternal smoking can increase the risk of placenta abruption. To date, only one meta-analysis has been performed to assess the relationship between smoking and placenta abruption. The present meta-analysis was conducted to estimate the association between smoking and the risk of placenta abruption. A literature search was conducted in major databases such as PubMed, Web of Science, and Scopus from the earliest possible year to April 2016. Heterogeneity across studies was explored with the Q-test and the I2 statistic, and publication bias was assessed using Begg's and Egger's tests. The results were reported as odds ratio (OR) estimates with 95% confidence intervals (CI) from a random-effects model. The literature search yielded 1167 publications up to April 2016, covering 4 309 610 participants. Based on OR estimates obtained from case-control and cohort studies, there was a significant association between smoking and placenta abruption (OR: 1.80; 95% CI: 1.75, 1.85). Based on the results of the cohort studies alone, smoking and placenta abruption also had a significant association (risk ratio: 1.65; 95% CI: 1.51, 1.80). Based on these epidemiological reports, we showed that smoking is a risk factor for placenta abruption.
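
    The random-effects pooling referred to above is commonly implemented with the DerSimonian-Laird estimator; a minimal sketch operating on hypothetical per-study log odds ratios and standard errors (not the studies pooled here) is:

        import numpy as np

        def dersimonian_laird(log_or, se):
            """Pool log odds ratios with a DerSimonian-Laird random-effects model."""
            log_or, se = np.asarray(log_or, float), np.asarray(se, float)
            w = 1.0 / se ** 2                                  # fixed-effect weights
            fixed = np.sum(w * log_or) / np.sum(w)
            q = np.sum(w * (log_or - fixed) ** 2)              # Cochran's Q
            df = len(log_or) - 1
            tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
            w_re = 1.0 / (se ** 2 + tau2)                      # random-effects weights
            pooled = np.sum(w_re * log_or) / np.sum(w_re)
            se_pooled = np.sqrt(1.0 / np.sum(w_re))
            ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
            return np.exp(pooled), ci, tau2

        # hypothetical per-study odds ratios and standard errors of log(OR)
        pooled_or, ci95, tau2 = dersimonian_laird(np.log([1.7, 1.9, 1.6, 2.1]),
                                                  [0.10, 0.15, 0.20, 0.12])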

  19. Assessing the likely value of gravity and drawdown measurements to constrain estimates of hydraulic conductivity and specific yield during unconfined aquifer testing

    USGS Publications Warehouse

    Blainey, Joan B.; Ferré, Ty P.A.; Cordova, Jeffrey T.

    2007-01-01

    Pumping of an unconfined aquifer can cause local desaturation detectable with high‐resolution gravimetry. A previous study showed that signal‐to‐noise ratios could be predicted for gravity measurements based on a hydrologic model. We show that although changes should be detectable with gravimeters, estimations of hydraulic conductivity and specific yield based on gravity data alone are likely to be unacceptably inaccurate and imprecise. In contrast, a transect of low‐quality drawdown data alone resulted in accurate estimates of hydraulic conductivity and inaccurate and imprecise estimates of specific yield. Combined use of drawdown and gravity data, or use of high‐quality drawdown data alone, resulted in unbiased and precise estimates of both parameters. This study is an example of the value of a staged assessment regarding the likely significance of a new measurement method or monitoring scenario before collecting field data.

  20. Evapotranspiration and microclimate at a low-level radioactive-waste disposal site in northwestern Illinois

    USGS Publications Warehouse

    Healy, R.W.; DeVries, M.P.; Sturrock, A.M.

    1987-01-01

    From July 1982 through June 1984, a study was made of the microclimate and evapotranspiration at a low-level radioactive-waste disposal site near Sheffield, Bureau County, Illinois. Vegetation at the site consists of mixed pasture grasses, primarily brome (Bromus inermis) and red clover (Trifoleum pratense). Three methods were used to estimate evapotranspiration: (1) an energy-budget with the Bowen ratio, (2) an aerodynamic-profile, and (3) a soil-based water-budget. For the aerodynamic-profile method, sensible-heat flux was estimated by a profile equation and evapotranspiration was then calculated as the residual in the energy-balance equation. Estimates by the energy-budget and aerodynamic-profile methods were computed from hourly data, then summed by days and months. Yearly estimates for March through November, by these methods, were quite close--648 and 626 millimeters, respectively. Daily estimates range up to a maximum of about 6 millimeters. The water-budget method produced only monthly estimates based on weekly or biweekly soil-moisture content measurements. The yearly evapotranspiration estimated by this method (which actually included only the months of April through October) was 655 millimeters. The March-through-November average for the three methods of 657 millimeters was equivalent to 70 percent of precipitation. Continuous measurements were made of incoming and reflected shortwave radiation, incoming and emitted longwave radiation, net radiation, soil-heat flux, soil temperature, horizontal windspeed, and wet- and dry-bulb air temperature. Windspeed and air temperature were measured at heights of 0.5 and 2.0 meters (and also at 1.0 meter after September 1983). Soil-moisture content of the soil zone was measured with a gamma-attenuation gage. Annual precipitation (938 millimeters) and average temperature (10.8 degrees Celsius) were virtually identical to long-term averages from nearby National Weather Service stations. Solar radiation averaged 65 percent of that normally expected under clear skies. Net radiation averaged 70.1 watts per square meter and was highest in July and negative during some winter months. Wind direction varied but was predominately out of the south-southeast. Wind speed at the 2-meter height averaged 3.5 meters per second and was slightly higher in winter months than the rest of the year. The amount of water stored within the soil zone was greatest in early spring and least in late summer. Seasonal and diurnal trends in evapotranspiration rates mirrored those in net radiation; July was usually the month with the highest rate. The ratio of sensible- to latent-heat fluxes (commonly called the Bowen ratio) for the 2-year period was 0.38, as averaged from the three methods. Monthly Bowen ratios fluctuated somewhat but averaged about 0.35 for late spring through summer. In fall, the ratio declined to zero or to slightly negative values. When the ratio was negative, the latent-heat flux was slightly greater than the net radiation because of additional energy supplied by the cooling soil and air. Evapotranspiration calculated by the three methods averaged 75 percent of potential evapotranspiration, as estimated by the Penman equation. There was no apparent seasonal trend in the relation between actual and potential evapotranspiration rates.
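
    The Bowen-ratio energy-budget step described above partitions the available energy (net radiation minus soil-heat flux) between sensible and latent heat. A minimal sketch of the hourly calculation follows; the psychrometric constant, gradients and fluxes are illustrative numbers, not values from the Sheffield site.

        GAMMA = 0.066       # psychrometric constant, kPa/K (near sea level; illustrative)
        LAMBDA = 2.45e6     # latent heat of vaporization, J/kg

        def bowen_ratio_et(net_radiation, soil_heat_flux, d_temp, d_vapor_pressure):
            """Latent-heat flux (W/m2) and evapotranspiration (mm/h) from the
            Bowen-ratio energy budget: LE = (Rn - G) / (1 + beta)."""
            beta = GAMMA * d_temp / d_vapor_pressure      # Bowen ratio from two-level gradients
            latent = (net_radiation - soil_heat_flux) / (1.0 + beta)
            et_mm_per_hour = latent / LAMBDA * 3600.0     # kg m-2 h-1, i.e. mm/h
            return latent, et_mm_per_hour

        # e.g. Rn = 400 W/m2, G = 40 W/m2, dT = 0.6 K and de = 0.12 kPa between 0.5 m and 2.0 m
        le, et = bowen_ratio_et(400.0, 40.0, 0.6, 0.12)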

  1. Multiple Flux Footprints, Flux Divergences and Boundary Layer Mixing Ratios: Studies of Ecosystem-Atmosphere CO2 Exchange Using the WLEF Tall Tower.

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Bakwin, P. S.; Yi, C.; Cook, B. D.; Wang, W.; Denning, A. S.; Teclaw, R.; Isebrands, J. G.

    2001-05-01

    Long-term, tower-based measurements using the eddy-covariance method have revealed a wealth of detail about the temporal dynamics of net ecosystem-atmosphere exchange (NEE) of CO2. The data also provide a measure of the annual net CO2 exchange. The area represented by these flux measurements, however, is limited, and doubts remain about possible systematic errors that may bias the annual net exchange measurements. Flux and mixing ratio measurements conducted at the WLEF tall tower as part of the Chequamegon Ecosystem-Atmosphere Study (ChEAS) allow for unique assessment of the uncertainties in NEE of CO2. The synergy between flux and mixing ratio observations shows the potential for comparing inverse and eddy-covariance methods of estimating NEE of CO2. Such comparisons may strengthen confidence in both results and begin to bridge the huge gap in spatial scales (at least 3 orders of magnitude) between continental or hemispheric scale inverse studies and kilometer-scale eddy covariance flux measurements. Data from WLEF and Willow Creek, another ChEAS tower, are used to estimate random and systematic errors in NEE of CO2. Random uncertainty in seasonal exchange rates and the annual integrated NEE, including both turbulent sampling errors and variability in environmental conditions, is small. Systematic errors are identified by examining changes in flux as a function of atmospheric stability and wind direction, and by comparing the multiple-level flux measurements on the WLEF tower. Nighttime drainage is modest but evident. Systematic horizontal advection occurs during the morning turbulence transition. The potential total systematic error appears to be larger than the random uncertainty, but still modest. The total systematic error, however, is difficult to assess. It appears that the WLEF region ecosystems were a small net sink of CO2 in 1997. It is clear that the summer uptake rate at WLEF is much smaller than that at most deciduous forest sites, including the nearby Willow Creek site. The WLEF tower also allows us to study the potential for monitoring continental CO2 mixing ratios from tower sites. Despite concerns about the proximity to ecosystem sources and sinks, it is clear that boundary layer CO2 mixing ratios can be monitored using typical surface layer towers. Seasonal and annual land-ocean mixing ratio gradients are readily detectable, providing the motivation for a flux-tower-based mixing ratio observation network that could greatly improve the accuracy of inversion-based estimates of NEE of CO2, and enable inversions to be applied on smaller temporal and spatial scales. Results from the WLEF tower illustrate the degree to which local flux measurements represent interannual, seasonal and synoptic CO2 mixing ratio trends. This coherence between fluxes and mixing ratios serves to "regionalize" the eddy-covariance-based local NEE observations.

  2. Variations in male-female infant ratios among births to Canadian- and Indian-born mothers, 1990-2011: a population-based register study.

    PubMed

    Urquia, Marcelo L; Ray, Joel G; Wanigaratne, Susitha; Moineddin, Rahim; O'Campo, Patricia J

    2016-01-01

    We assessed variations in the male-female infant ratios among births to Canadian-born and Indian-born mothers according to year of birth, province and country of birth of each parent. In this population-based register study, we analyzed birth certificates of 5 853 970 singleton live births to Canadian-born and 177 990 singleton live births to Indian-born mothers giving birth in Canada from 1990 to 2011. Male-female ratios were stratified by live birth order and plotted by year of birth. Logistic regression was used to assess whether ratios varied between Canadian provinces and according to the birthplace of each parent. The deficit in the number of girls was estimated using bootstrap methods. Among Canadian-born mothers, male-female ratios were about 1.05, with negligible fluctuations by birth order, year and province. Among Indian-born mothers, the overall male-female ratio at the third birth was 1.38 (95% confidence interval [CI] 1.34-1.41) and was 1.66 (95% CI 1.56-1.76) at the fourth or higher-order births. There was little variability in the ratios between provinces. Couples involving at least 1 Indian-born parent had higher than expected male-female ratios at the second and higher-order births, particularly when the father was Indian-born. The deficit in the expected number of girls among Indian immigrants to Canada in the study period was estimated to be 4472 (95% CI 3211-5921). Fewer than expected girls at the third and higher-order births have been born to Indian immigrants across Canada since 1990. This trend was also seen among couples of mixed nativity, including those involving a Canadian-born mother and an Indian-born father. Fathers should be considered when investigating sex ratios at birth.

  3. Variations in male-female infant ratios among births to Canadian- and Indian-born mothers, 1990-2011: a population-based register study

    PubMed Central

    Urquia, Marcelo L.; Ray, Joel G.; Wanigaratne, Susitha; Moineddin, Rahim; O'Campo, Patricia J.

    2016-01-01

    Background: We assessed variations in the male-female infant ratios among births to Canadian-born and Indian-born mothers according to year of birth, province and country of birth of each parent. Methods: In this population-based register study, we analyzed birth certificates of 5 853 970 singleton live births to Canadian-born and 177 990 singleton live births to Indian-born mothers giving birth in Canada from 1990 to 2011. Male-female ratios were stratified by live birth order and plotted by year of birth. Logistic regression was used to assess whether ratios varied between Canadian provinces and according to the birthplace of each parent. The deficit in the number of girls was estimated using bootstrap methods. Results: Among Canadian-born mothers, male-female ratios were about 1.05, with negligible fluctuations by birth order, year and province. Among Indian-born mothers, the overall male-female ratio at the third birth was 1.38 (95% confidence interval [CI] 1.34-1.41) and was 1.66 (95% CI 1.56-1.76) at the fourth or higher-order births. There was little variability in the ratios between provinces. Couples involving at least 1 Indian-born parent had higher than expected male-female ratios at the second and higher-order births, particularly when the father was Indian-born. The deficit in the expected number of girls among Indian immigrants to Canada in the study period was estimated to be 4472 (95% CI 3211-5921). Interpretation: Fewer than expected girls at the third and higher-order births have been born to Indian immigrants across Canada since 1990. This trend was also seen among couples of mixed nativity, including those involving a Canadian-born mother and an Indian-born father. Fathers should be considered when investigating sex ratios at birth. PMID:27398354

  4. On averaging aspect ratios and distortion parameters over ice crystal population ensembles for estimating effective scattering asymmetry parameters

    PubMed Central

    van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Fridlind, Ann M.; Cairns, Brian

    2017-01-01

    The use of ensemble-average values of aspect ratio and distortion parameter of hexagonal ice prisms for the estimation of ensemble-average scattering asymmetry parameters is evaluated. Using crystal aspect ratios greater than unity generally leads to ensemble-average values of aspect ratio that are inconsistent with the ensemble-average asymmetry parameters. When a definition of aspect ratio is used that limits the aspect ratio to below unity (α≤1) for both hexagonal plates and columns, the effective asymmetry parameters calculated using ensemble-average aspect ratios are generally consistent with ensemble-average asymmetry parameters, especially if aspect ratios are geometrically averaged. Ensemble-average distortion parameters generally also yield effective asymmetry parameters that are largely consistent with ensemble-average asymmetry parameters. In the case of mixtures of plates and columns, it is recommended to geometrically average the α≤1 aspect ratios and to subsequently calculate the effective asymmetry parameter using a column or plate geometry when the contribution by columns to a given mixture’s total projected area is greater or lower than 50%, respectively. In addition, we show that ensemble-average aspect ratios, distortion parameters and asymmetry parameters can generally be retrieved accurately from simulated multi-directional polarization measurements based on mixtures of varying columns and plates. However, such retrievals tend to be somewhat biased toward yielding column-like aspect ratios. Furthermore, generally large retrieval errors can occur for mixtures with approximately equal contributions of columns and plates and for ensembles with strong contributions of thin plates. PMID:28983127
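
    The recommended geometric averaging of α≤1 aspect ratios amounts to averaging in log space after mapping column-style ratios (α>1) to their reciprocals. A minimal sketch with made-up crystal aspect ratios:

        import numpy as np

        def geometric_mean_aspect_ratio(aspect_ratios):
            """Geometric mean of aspect ratios after mapping all values to alpha <= 1."""
            alpha = np.asarray(aspect_ratios, dtype=float)
            alpha = np.where(alpha > 1.0, 1.0 / alpha, alpha)   # columns -> reciprocal
            return np.exp(np.mean(np.log(alpha)))

        print(geometric_mean_aspect_ratio([0.2, 0.5, 2.0, 4.0]))   # hypothetical ensemble, ~0.33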

  5. A new method for calculation of water saturation in shale gas reservoirs using VP-to-VS ratio and porosity

    NASA Astrophysics Data System (ADS)

    Liu, Kun; Sun, Jianmeng; Zhang, Hongpan; Liu, Haitao; Chen, Xiangyang

    2018-02-01

    Total water saturation is an important parameter for calculating the free gas content of shale gas reservoirs. Owing to the limitations of the Archie formula and its extended solutions in zones rich in organic or conductive minerals, a new method was proposed to estimate total water saturation according to the relationship between total water saturation, VP-to-VS ratio and total porosity. Firstly, the ranges of the relevant parameters in the viscoelastic BISQ model in shale gas reservoirs were estimated. Then, the effects of relevant parameters on the VP-to-VS ratio were simulated based on the partially saturated viscoelastic BISQ model. These parameters were total water saturation, total porosity, permeability, characteristic squirt-flow length, fluid viscosity and sonic frequency. The simulation results showed that the main factors influencing the VP-to-VS ratio were total porosity and total water saturation. When the permeability and the characteristic squirt-flow length changed slightly for a particular shale gas reservoir, their influences could be neglected. Then an empirical equation for total water saturation with respect to total porosity and VP-to-VS ratio was obtained according to the experimental data. Finally, the new method was successfully applied to estimate total water saturation in a sequence formation of shale gas reservoirs. Practical applications have shown good agreement with the results calculated by the Archie model.

  6. A predictive model to estimate the pretest probability of metastasis in patients with osteosarcoma.

    PubMed

    Wang, Sisheng; Zheng, Shaoluan; Hu, Kongzu; Sun, Heyan; Zhang, Jinling; Rong, Genxiang; Gao, Jie; Ding, Nan; Gui, Binjie

    2017-01-01

    Osteosarcomas (OSs) represent a major challenge for improving overall survival, especially in metastatic patients. Increasing evidence indicates that not only tumor-associated elements but also host-associated elements, especially the systemic inflammatory response, have a remarkable effect on the prognosis of cancer patients. By analyzing a series of prognostic factors, including age, gender, primary tumor size, tumor location, tumor grade, histological classification, monocyte ratio, and NLR ratio, a clinical predictive model involving circulating leukocytes was established using stepwise logistic regression to compute the estimated probability of metastases for OS patients. The model is described by the following equation: probability of developing metastases = e^x/(1 + e^x), where x = -2.150 + (1.680 × monocyte ratio) + (1.533 × NLR ratio), e is the base of the natural logarithm, and each of the two variables is assigned 1 if the ratio is >1 (otherwise 0). The calculated area under the receiver-operating characteristic curve (AUC) of 0.793 indicated good accuracy of this model (95% CI, 0.740-0.845). The predicted probabilities generated with the cross-validation procedure had a similar AUC (0.743; 95% CI, 0.684-0.803). The present model, which accounts for the influence of circulating leukocytes, could be used to estimate the pretest probability of developing metastases in patients with OS and thereby help improve the management of metastases.
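
    Reading the published coefficients directly into code, the pretest probability can be evaluated as below; the indicator coding follows the abstract (each ratio scored 1 if it exceeds 1, otherwise 0).

        import math

        def metastasis_probability(monocyte_ratio, nlr_ratio):
            """Pretest probability of metastasis from the logistic model above."""
            m = 1 if monocyte_ratio > 1 else 0     # indicator coding from the abstract
            n = 1 if nlr_ratio > 1 else 0
            x = -2.150 + 1.680 * m + 1.533 * n
            return math.exp(x) / (1.0 + math.exp(x))

        print(metastasis_probability(1.2, 0.8))    # one elevated ratio -> about 0.38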

  7. Estimating Surface NO2 and SO2 Mixing Ratios from Fast-Response Total Column Observations and Potential Application to Geostationary Missions

    EPA Science Inventory

    Total-column nitrogen dioxide (NO2) data collected by a ground-based sun-tracking spectrometer system (Pandora) and a photolytic-converter-based in-situ instrument collocated at NASA's Langley Research Center in Hampton, Virginia were analyzed to study the relationship bet...

  8. WATGIS: A GIS-Based Lumped Parameter Water Quality Model

    Treesearch

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2002-01-01

    A Geographic Information System (GIS)-based, lumped parameter water quality model was developed to estimate the spatial and temporal nitrogen-loading patterns for lower coastal plain watersheds in eastern North Carolina. The model uses a spatially distributed delivery ratio (DR) parameter to account for nitrogen retention or loss along a drainage network. Delivery...

  9. Sensitivity of drainage morphometry based hydrological response (GIUH) of a river basin to the spatial resolution of DEM data

    NASA Astrophysics Data System (ADS)

    Sahoo, Ramendra; Jain, Vikrant

    2018-02-01

    Drainage network pattern and its associated morphometric ratios are among the important planform attributes of a drainage basin. These attributes are usually extracted by spatial analysis of the basin's elevation data and are then used as input for studying numerous process-response interactions within the basin. One important use of the morphometric ratios is in deriving the hydrologic response of a basin with the GIUH concept. Hence, the accuracy of the basin's hydrological response to any storm event depends on the accuracy with which the morphometric ratios can be estimated, which in turn is affected by the spatial resolution of the source data, i.e., the digital elevation model (DEM). We estimated the sensitivity of the morphometric ratios and the GIUH-derived hydrograph parameters to the resolution of the source data using a 30 m and a 90 m DEM. The analysis was carried out for 50 drainage basins in a mountainous catchment, and a simple, comprehensive algorithm was developed to estimate the morphometric indices from a stream network. All morphometric and hydrograph parameters were calculated for each basin extracted from the two DEMs of different spatial resolution, and paired t-tests and sign tests were used for the comparison. Our results did not show any statistically significant difference in any of the parameters calculated from the two source datasets. Along with the comparative study, a first-hand empirical analysis of the frequency distribution of the morphometric and hydrologic response parameters is also presented. Further, a comparison with other hydrological models suggests that the planform-morphometry-based GIUH model is more robust to resolution variability than a topography-based hydrological model.

  10. Estimation of time-variable fast flow path chemical concentrations for application in tracer-based hydrograph separation analyses

    USGS Publications Warehouse

    Kronholm, Scott C.; Capel, Paul D.

    2016-01-01

    Mixing models are a commonly used method for hydrograph separation, but can be hindered by the subjective choice of the end-member tracer concentrations. This work tests a new variant of mixing model that uses high-frequency measures of two tracers and streamflow to separate total streamflow into water from slowflow and fastflow sources. The ratio between the concentrations of the two tracers is used to create a time-variable estimate of the concentration of each tracer in the fastflow end-member. Multiple synthetic data sets, and data from two hydrologically diverse streams, are used to test the performance and limitations of the new model (two-tracer ratio-based mixing model: TRaMM). When applied to the synthetic streams under many different scenarios, the TRaMM produces results that were reasonable approximations of the actual values of fastflow discharge (±0.1% of maximum fastflow) and fastflow tracer concentrations (±9.5% and ±16% of maximum fastflow nitrate concentration and specific conductance, respectively). With real stream data, the TRaMM produces high-frequency estimates of slowflow and fastflow discharge that align with expectations for each stream based on their respective hydrologic settings. The use of two tracers with the TRaMM provides an innovative and objective approach for estimating high-frequency fastflow concentrations and contributions of fastflow water to the stream. This provides useful information for tracking chemical movement to streams and allows for better selection and implementation of water quality management strategies.
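
    The TRaMM builds on the classic two-component tracer mass balance; the sketch below shows only that underlying separation with fixed end-member concentrations (the paper's contribution, a time-variable fastflow end-member derived from the ratio of two tracers, is not reproduced here). The discharges and tracer values are hypothetical.

        import numpy as np

        def two_component_separation(q_total, c_stream, c_slow, c_fast):
            """Mass-balance separation: q_fast = q_total * (c_stream - c_slow) / (c_fast - c_slow)."""
            frac_fast = (np.asarray(c_stream, float) - c_slow) / (c_fast - c_slow)
            frac_fast = np.clip(frac_fast, 0.0, 1.0)
            q_fast = np.asarray(q_total, float) * frac_fast
            return q_fast, np.asarray(q_total, float) - q_fast

        # hypothetical high-frequency streamflow (m3/s) and specific conductance (uS/cm)
        q = [1.0, 1.4, 3.0, 2.2, 1.5]
        sc = [300.0, 280.0, 180.0, 220.0, 260.0]
        q_fast, q_slow = two_component_separation(q, sc, c_slow=320.0, c_fast=60.0)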

  11. An evaluation of inferential procedures for adaptive clinical trial designs with pre-specified rules for modifying the sample size.

    PubMed

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2014-09-01

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.

  12. A bootstrap estimation scheme for chemical compositional data with nondetects

    USGS Publications Warehouse

    Palarea-Albaladejo, J; Martín-Fernández, J.A; Olea, Ricardo A.

    2014-01-01

    The bootstrap method is commonly used to estimate the distribution of estimators and their associated uncertainty when explicit analytic expressions are not available or are difficult to obtain. It has been widely applied in environmental and geochemical studies, where the data generated often represent parts of a whole, typically chemical concentrations. This kind of constrained data is generically called compositional data, and it requires specialised statistical methods to properly account for its particular covariance structure. On the other hand, it is not unusual in practice that those data contain labels denoting nondetects, that is, concentrations falling below detection limits. Nondetects impede the implementation of the bootstrap and represent an additional source of uncertainty that must be taken into account. In this work, a bootstrap scheme is devised that handles nondetects by adding an imputation step within the resampling process and conveniently propagates their associated uncertainty. In doing so, it considers the constrained relationships between chemical concentrations that originate from their compositional nature. Bootstrap estimates using a range of imputation methods, including new stochastic proposals, are compared across scenarios of increasing difficulty. They are formulated to meet compositional principles following the log-ratio approach, and an adjustment is introduced in the multivariate case to deal with nonclosed samples. Results suggest that a nondetect bootstrap based on model-based imputation is generally preferable. A robust approach based on isometric log-ratio transformations appears to be particularly suited in this context. Computer routines in the R statistical programming language are provided.
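
    To show where the imputation step sits inside the resampling loop, here is a deliberately simplified sketch in Python (the paper provides R routines and uses model-based, log-ratio-consistent imputation; the DL × 0.65 substitution below is only a placeholder for that step, and the data are hypothetical).

        import numpy as np

        rng = np.random.default_rng(42)

        def bootstrap_composition_with_nondetects(data, detection_limits, n_boot=1000):
            """Bootstrap a closed geometric-mean composition; NaN marks nondetects,
            which are imputed inside every replicate (simple DL * 0.65 substitution
            stands in for the paper's model-based imputation)."""
            data = np.asarray(data, dtype=float)            # rows = samples, cols = parts
            detection_limits = np.asarray(detection_limits, dtype=float)
            estimates = []
            for _ in range(n_boot):
                boot = data[rng.integers(0, len(data), len(data))].copy()   # resample rows
                nd = np.isnan(boot)
                boot[nd] = (0.65 * detection_limits)[np.nonzero(nd)[1]]     # imputation step
                gm = np.exp(np.log(boot).mean(axis=0))                      # geometric means
                estimates.append(gm / gm.sum())                             # closure
            return np.percentile(estimates, [2.5, 50.0, 97.5], axis=0)

        # hypothetical 3-part compositions (mg/kg); NaN = below detection limit
        x = [[12.0, 3.0, np.nan],
             [10.5, np.nan, 0.8],
             [14.2, 2.4, 1.1],
             [9.8, 2.9, 0.7]]
        ci = bootstrap_composition_with_nondetects(x, detection_limits=[0.5, 1.0, 0.5])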

  13. Quantification of damage due to low-dose radiation exposure in mice: construction and application of a biodosimetric model using mRNA indicators in circulating white blood cells

    PubMed Central

    Ishihara, Hiroshi; Tanaka, Izumi; Yakumaru, Haruko; Tanaka, Mika; Yokochi, Kazuko; Fukutsu, Kumiko; Tajima, Katsushi; Nishimura, Mayumi; Shimada, Yoshiya; Akashi, Makoto

    2016-01-01

    Biodosimetry, the measurement of radiation damage in a biologic sample, is a reliable tool for increasing the accuracy of dose estimation. Although established chromosome analyses are suitable for estimating the absorbed dose after high-dose irradiation, biodosimetric methodology to measure damage following low-dose exposure is underdeveloped. RNA analysis of circulating blood containing radiation-sensitive cells is a candidate biodosimetry method. Here we quantified RNA from a small amount of blood isolated from mice following low-dose body irradiation (<0.5 Gy) aimed at developing biodosimetric tools for situations that are difficult to study in humans. By focusing on radiation-sensitive undifferentiated cells in the blood based on Myc RNA expression, we quantified the relative levels of RNA for DNA damage-induced (DDI) genes, such as Bax, Bbc3 and Cdkn1a. The RNA ratios of DDI genes/Myc in the blood increased in a dose-dependent manner 4 h after whole-body irradiation at doses ranging from 0.1 to 0.5 Gy (air-kerma) of X-rays, regardless of whether the mice were in an active or resting state. The RNA ratios were significantly increased after 0.014 Gy (air-kerma) of single X-ray irradiation. The RNA ratios were directly proportional to the absorbed doses in water ranging from 0.1 to 0.5 Gy, based on gamma-irradiation from 137Cs. Four hours after continuous irradiation with gamma-rays or by internal contamination with a beta-emitter, the increased RNA ratios resembled those following single irradiation. These findings indicate that the RNA status can be utilized as a biodosimetric tool to estimate low-dose radiation when focusing on undifferentiated cells in blood. PMID:26589759

  14. Maternal mortality ratio in Lebanon in 2008: a hospital-based reproductive age mortality study (RAMOS).

    PubMed

    Hobeika, Elie; Abi Chaker, Samer; Harb, Hilda; Rahbany Saad, Rita; Ammar, Walid; Adib, Salim

    2014-01-01

    International agencies have recently assigned Lebanon to group H of countries with "no national data on maternal mortality" and estimated a corresponding maternal mortality ratio (MMR) of 150 per 100,000 live births. The Ministry of Public Health addressed the perceived discrepancy between the maternal mortality experience in Lebanon and the international report by facilitating a hospital-based reproductive age mortality study, sponsored by the World Health Organization Representative Office in Lebanon, aimed at providing an accurate estimate of the maternal mortality ratio for 2008. The survey allowed a detailed analysis of maternal causes of death. Reproductive age (RA) deaths (15-49 years) were initially identified through hospital records, and a trained physician traveled to each hospital to ascertain whether the recorded deaths were in fact maternal deaths. ICD-10 codes were provided by the medical controller for each confirmed maternal death. There were 384 RA death cases, of which 13 were confirmed maternal deaths (3.4%) (numerator). In 2008, there were 84,823 live births in Lebanon (denominator). The MMR in Lebanon in 2008 was thus officially estimated at 23 per 100,000 live births, with an "uncertainty range" from 15.3 to 30.6. Hemorrhage was the leading cause of death, with double the frequency of all other causes (pregnancy-induced hypertension, eclampsia, infection, and embolism). This specific enquiry responded to a one-time need to correct a clearly inadequate report, and it should be followed by an ongoing, valid surveillance system. The results indicate that special attention has to be devoted to the management of peripartum hemorrhage cases. Keywords: Arab; postpartum hemorrhage; development; pregnancy management; verbal autopsy.

  15. Differences in Movement Pattern and Detectability between Males and Females Influence How Common Sampling Methods Estimate Sex Ratio.

    PubMed

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco

    2016-01-01

    Sampling the biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficient are the sampling methods commonly used in biodiversity surveys in estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex-ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex ratio related patterns.

  16. Differences in Movement Pattern and Detectability between Males and Females Influence How Common Sampling Methods Estimate Sex Ratio

    PubMed Central

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco

    2016-01-01

    Sampling the biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficient are the sampling methods commonly used in biodiversity surveys in estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population’s sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex-ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex ratio related patterns. PMID:27441554

  17. Preterm Birth and Prenatal Maternal Occupation: The Role of Hispanic Ethnicity and Nativity in a Population-Based Sample in Los Angeles, California

    PubMed Central

    Wilhelm, Michelle; Wang, Anthony; Ritz, Beate

    2014-01-01

    Objectives. We investigated preterm birth (PTB) in relation to maternal occupational exposure and whether effect measures were modified by Hispanic ethnicity and nativity in a population-based sample with high proportion of Hispanics. Methods. We used a case-control study (n = 2543) nested within a cohort of 58 316 births in Los Angeles County, California, in 2003. We categorized prenatal occupations using the US Census Occupation Codes and Classification System and developed a job exposure matrix. Odds ratios for PTB were estimated using logistic regression. Results. Odds ratios for PTB were increased for all women in health care practitioner and technical occupations, but the 95% confidence intervals included the null value; effects were more pronounced among Hispanics. We estimated elevated odds ratios for foreign-born Hispanic women in building and grounds cleaning and maintenance occupations. Shift work and physically demanding work affected births among US-born but not foreign-born Hispanics. Conclusions. Hispanic women are at particular risk for PTB related to adverse prenatal occupational exposure. Nativity may moderate these effects on PTB. Maternal occupational exposures likely contribute to ethnic disparities in PTB. PMID:24354840

  18. Mapping Quantitative Traits in Unselected Families: Algorithms and Examples

    PubMed Central

    Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David

    2009-01-01

    Linkage analysis has been widely used to identify from family data genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departure from normality assumptions. Regression-based approaches are more robust but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic which in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016

  19. Stable estimate of primary OC/EC ratios in the EC tracer method

    NASA Astrophysics Data System (ADS)

    Chu, Shao-Hang

    In fine particulate matter studies, the primary OC/EC ratio plays an important role in estimating the secondary organic aerosol contribution to PM2.5 concentrations using the EC tracer method. In this study, numerical experiments are carried out to test and compare various statistical techniques for the estimation of primary OC/EC ratios. The influence of random measurement errors in both primary OC and EC measurements on the estimation of the expected primary OC/EC ratios is examined. It is found that random measurement errors in EC generally create an underestimation of the slope and an overestimation of the intercept of the ordinary least-squares regression line. Deming regression performs much better than ordinary regression, but it tends to overcorrect the problem by slightly overestimating the slope and underestimating the intercept. Averaging the ratios directly is usually undesirable because the average is strongly influenced by unrealistically high OC/EC ratios resulting from random measurement errors at low EC concentrations. The errors generally result in a skewed distribution of the OC/EC ratios even if the parent distributions of OC and EC are close to normal. When measured OC contains a significant amount of non-combustion OC, Deming regression is a much better tool and should be used to estimate both the primary OC/EC ratio and the non-combustion OC. However, if the non-combustion OC is negligibly small, the best and most robust estimator of the OC/EC ratio turns out to be the simple ratio of the OC and EC averages. It not only reduces random errors by averaging the individual variables separately but also acts as a weighted average of ratios, minimizing the influence of unrealistically high OC/EC ratios created by measurement errors at low EC concentrations. The median of the OC/EC ratios ranks a close second, and the geometric mean of the ratios ranks third, because these estimators are insensitive to questionable extreme values. A real-world example is given using ambient data collected at an Atlanta STN site during the winter of 2001-2002.
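
    For reference, the EC tracer calculation that the primary OC/EC ratio feeds into is OC_sec = OC_total - (OC/EC)_pri × EC - OC_non-combustion. The sketch below uses the ratio-of-averages estimator recommended above for the case of negligible non-combustion OC; in practice the primary ratio should be estimated from periods dominated by primary aerosol, and the concentrations shown are hypothetical.

        import numpy as np

        def secondary_oc(oc, ec, oc_noncombustion=0.0):
            """EC tracer method: OC_sec = OC - (OC/EC)_pri * EC - OC_noncombustion,
            with (OC/EC)_pri taken as the ratio of the OC and EC averages."""
            oc, ec = np.asarray(oc, float), np.asarray(ec, float)
            primary_ratio = (oc.mean() - oc_noncombustion) / ec.mean()
            oc_sec = oc - primary_ratio * ec - oc_noncombustion
            return primary_ratio, np.clip(oc_sec, 0.0, None)

        # hypothetical 24-h OC and EC concentrations (ug/m3)
        ratio, oc_sec = secondary_oc(oc=[4.2, 5.1, 3.8, 6.0, 4.9],
                                     ec=[1.1, 1.4, 1.0, 1.6, 1.3])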

  20. Risk of solid cancer in low dose-rate radiation epidemiological studies and the dose-rate effectiveness factor.

    PubMed

    Shore, Roy; Walsh, Linda; Azizova, Tamara; Rühm, Werner

    2017-10-01

    Estimated radiation risks used for radiation protection purposes have been based primarily on the Life Span Study (LSS) of atomic bomb survivors who received brief exposures at high dose rates, many with high doses. Information is needed regarding radiation risks from low dose-rate (LDR) exposures to low linear-energy-transfer (low-LET) radiation. We conducted a meta-analysis of LDR epidemiologic studies that provide dose-response estimates of total solid cancer risk in adulthood in comparison to corresponding LSS risks, in order to estimate a dose rate effectiveness factor (DREF). We identified 22 LDR studies with dose-response risk estimates for solid cancer after minimizing information overlap. For each study, a parallel risk estimate was derived from the LSS risk model using matching values for sex, mean ages at first exposure and attained age, targeted cancer types, and accounting for type of dosimetric assessment. For each LDR study, a ratio of the excess relative risk per Gy (ERR Gy -1 ) to the matching LSS ERR risk estimate (LDR/LSS) was calculated, and a meta-analysis of the risk ratios was conducted. The reciprocal of the resultant risk ratio provided an estimate of the DREF. The meta-analysis showed a LDR/LSS risk ratio of 0.36 (95% confidence interval [CI] 0.14, 0.57) for the 19 studies of solid cancer mortality and 0.33 (95% CI 0.13, 0.54) when three cohorts with only incidence data also were added, implying a DREF with values around 3, but statistically compatible with 2. However, the analyses were highly dominated by the Mayak worker study. When the Mayak study was excluded the LDR/LSS risk ratios increased: 1.12 (95% CI 0.40, 1.84) for mortality and 0.54 (95% CI 0.09, 0.99) for mortality + incidence, implying a lower DREF in the range of 1-2. Meta-analyses that included only cohorts in which the mean dose was <100 mGy yielded a risk ratio of 1.06 (95% CI 0.30, 1.83) for solid cancer mortality and 0.58 (95% CI 0.10, 1.06) for mortality + incidence data. The interpretation of a best estimate for a value of the DREF depends on the appropriateness of including the Mayak study. This study indicates a range of uncertainty in the value of DREF between 1 and about 2 after protracted radiation exposure. The LDR data provide direct evidence regarding risk from exposures at low dose rates as an important complement to the LSS risk estimates used for radiation protection purposes.

  1. Emission measures derived from far ultraviolet spectra of T Tauri stars

    NASA Astrophysics Data System (ADS)

    Cram, L. E.; Giampapa, M. S.; Imhoff, C. L.

    1980-06-01

    Spectroscopic diagnostics based on UV emission line observations have been developed to study the solar chromosphere, transition region, and corona. The atmospheric properties that can be inferred from observations of total line intensities include the temperature, by identifying the ionic species present; the temperature distribution of the emission measure, from the absolute intensities; and the electron density of the source, from line intensity ratios sensitive to the electron density. In the present paper, the temperature distribution of the emission measure is estimated from observations of far UV emission line fluxes of the T Tauri stars, RW Aurigae and RU Lupi, made on the IUE. A crude estimate of the electron density of one star is obtained, using density-sensitive line ratios.

  2. Cost-effectiveness of community-based practitioner programmes in Ethiopia, Indonesia and Kenya.

    PubMed

    McPake, Barbara; Edoka, Ijeoma; Witter, Sophie; Kielmann, Karina; Taegtmeyer, Miriam; Dieleman, Marjolein; Vaughan, Kelsey; Gama, Elvis; Kok, Maryse; Datiko, Daniel; Otiso, Lillian; Ahmed, Rukhsana; Squires, Neil; Suraratdecha, Chutima; Cometto, Giorgio

    2015-09-01

    To assess the cost-effectiveness of community-based practitioner programmes in Ethiopia, Indonesia and Kenya. Incremental cost-effectiveness ratios for the three programmes were estimated from a government perspective. Cost data were collected for 2012. Life years gained were estimated based on coverage of reproductive, maternal, neonatal and child health services. For Ethiopia and Kenya, estimates of coverage before and after the implementation of the programme were obtained from empirical studies. For Indonesia, coverage of health service interventions was estimated from routine data. We used the Lives Saved Tool to estimate the number of lives saved from changes in reproductive, maternal, neonatal and child health-service coverage. Gross domestic product per capita was used as the reference willingness-to-pay threshold value. The estimated incremental cost per life year gained was 82 international dollars ($) in Kenya, $999 in Ethiopia and $3396 in Indonesia. The results were most sensitive to uncertainty in the estimates of life-years gained. Based on the results of probabilistic sensitivity analysis, there was greater than 80% certainty that each programme was cost-effective. Community-based approaches are likely to be cost-effective for delivery of some essential health interventions where community-based practitioners operate within an integrated team supported by the health system. Community-based practitioners may be most appropriate in rural poor communities that have limited access to more qualified health professionals. Further research is required to understand which programmatic design features are critical to effectiveness.
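
    The cost-effectiveness decision rule described above reduces to a simple ratio check. The sketch below uses made-up cost and life-year figures and a hypothetical GDP-per-capita threshold purely to illustrate the arithmetic; none of the numbers are from the study.

    ```python
    # Hypothetical inputs (not the study's data): incremental programme cost over
    # the comparator and incremental life years gained from the Lives Saved Tool.
    incremental_cost_usd = 1_200_000
    life_years_gained    = 14_600

    icer = incremental_cost_usd / life_years_gained   # cost per life year gained

    gdp_per_capita = 1_500                            # reference willingness-to-pay threshold
    verdict = "cost-effective" if icer <= gdp_per_capita else "not cost-effective"
    print(f"ICER = ${icer:.0f} per life year gained -> {verdict} at 1x GDP per capita")
    ```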

  3. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
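
    A composite estimate of this kind can be illustrated with a short sketch that blends a forward TFN forecast with a backward ARIMA backcast across a record gap. The linear distance weighting and the flow values below are assumptions chosen for illustration, not the report's actual weighting scheme or data.

    ```python
    import numpy as np

    def composite_estimate(forecast, backcast):
        """Blend a lead-l forecast (from the start of the gap) with a backcast
        (from the end of the gap) using linear distance weights; an illustrative
        weighting choice, not necessarily the one used in the report."""
        forecast = np.asarray(forecast, dtype=float)
        backcast = np.asarray(backcast, dtype=float)
        n = len(forecast)
        w = np.arange(1, n + 1) / (n + 1)     # weight grows toward the end of the gap
        return (1.0 - w) * forecast + w * backcast

    # Hypothetical log-flow estimates for a 5-day gap in the record.
    tfn_forecast   = [2.10, 2.05, 2.00, 1.96, 1.93]   # forward, from the TFN equation
    arima_backcast = [2.20, 2.12, 2.06, 2.01, 1.98]   # backward, from the ARIMA equation
    print(composite_estimate(tfn_forecast, arima_backcast))
    ```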

  4. Rapid Estimation of Astaxanthin and the Carotenoid-to-Chlorophyll Ratio in the Green Microalga Chromochloris zofingiensis Using Flow Cytometry.

    PubMed

    Chen, Junhui; Wei, Dong; Pohnert, Georg

    2017-07-19

    The green microalga Chromochloris zofingiensis can accumulate significant amounts of valuable carotenoids, mainly natural astaxanthin, a product with applications in functional food, cosmetics, nutraceuticals, and with potential therapeutic value in cardiovascular and neurological diseases. To optimize the production of astaxanthin, it is essential to monitor the content of astaxanthin in algal cells during cultivation. The widely used HPLC (high-performance liquid chromatography) method for quantitative astaxanthin determination is time-consuming and laborious. In the present work, we present a method using flow cytometry (FCM) for in vivo determination of the astaxanthin content and the carotenoid-to-chlorophyll ratio (Car/Chl) in mixotrophic C. zofingiensis. The method is based on the assessment of fluorescent characteristics of cellular pigments. The mean fluorescence intensity (MFI) of living cells was determined by FCM to monitor pigment formation based on the correlation between MFI detected in particular channels (FL1: 533 ± 15 nm; FL2: 585 ± 20 nm; FL3: >670 nm) and pigment content in algal cells. Through correlation and regression analysis, a linear relationship was observed between MFI in FL2 (band-pass filter, emission at 585 nm in FCM) and astaxanthin content (in HPLC) and applied for predicting astaxanthin content. With similar procedures, the relationships between MFI in different channels and Car/Chl ratio in mixotrophic C. zofingiensis were also determined. Car/Chl ratios could be estimated by the ratios of MFI (FL1/FL3, FL2/FL3). FCM is thus a highly efficient and feasible method for rapid estimation of astaxanthin content in the green microalga C. zofingiensis. The rapid FCM method is complementary to the current HPLC method, especially for rapid evaluation and prediction of astaxanthin formation as it is required during the high-throughput culture in the laboratory and mass cultivation in industry.
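
    The calibration idea, relating FL2 fluorescence intensity to HPLC-measured astaxanthin by a straight line, can be sketched as follows. The MFI and pigment values are invented for illustration and are not the study's data.

    ```python
    import numpy as np

    # Hypothetical calibration data: mean FL2 fluorescence intensity from FCM and
    # astaxanthin content from HPLC (mg/g dry weight); values are illustrative.
    mfi_fl2     = np.array([120., 180., 260., 340., 430., 520.])
    astaxanthin = np.array([0.4,  0.9,  1.6,  2.2,  3.0,  3.7])

    # Ordinary least-squares line relating MFI (FL2) to astaxanthin content.
    slope, intercept = np.polyfit(mfi_fl2, astaxanthin, deg=1)
    r = np.corrcoef(mfi_fl2, astaxanthin)[0, 1]
    print(f"astaxanthin ~= {slope:.4f} * MFI_FL2 + {intercept:.3f}  (r = {r:.3f})")

    # Predict astaxanthin content for a new culture sample measured only by FCM.
    new_mfi = 300.0
    print(f"predicted astaxanthin: {slope * new_mfi + intercept:.2f} mg/g")
    ```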

  5. Rapid Estimation of Astaxanthin and the Carotenoid-to-Chlorophyll Ratio in the Green Microalga Chromochloris zofingiensis Using Flow Cytometry

    PubMed Central

    Chen, Junhui; Pohnert, Georg

    2017-01-01

    The green microalga Chromochloris zofingiensis can accumulate significant amounts of valuable carotenoids, mainly natural astaxanthin, a product with applications in functional food, cosmetics, nutraceuticals, and with potential therapeutic value in cardiovascular and neurological diseases. To optimize the production of astaxanthin, it is essential to monitor the content of astaxanthin in algal cells during cultivation. The widely used HPLC (high-performance liquid chromatography) method for quantitative astaxanthin determination is time-consuming and laborious. In the present work, we present a method using flow cytometry (FCM) for in vivo determination of the astaxanthin content and the carotenoid-to-chlorophyll ratio (Car/Chl) in mixotrophic C. zofingiensis. The method is based on the assessment of fluorescent characteristics of cellular pigments. The mean fluorescence intensity (MFI) of living cells was determined by FCM to monitor pigment formation based on the correlation between MFI detected in particular channels (FL1: 533 ± 15 nm; FL2: 585 ± 20 nm; FL3: >670 nm) and pigment content in algal cells. Through correlation and regression analysis, a linear relationship was observed between MFI in FL2 (band-pass filter, emission at 585 nm in FCM) and astaxanthin content (in HPLC) and applied for predicting astaxanthin content. With similar procedures, the relationships between MFI in different channels and Car/Chl ratio in mixotrophic C. zofingiensis were also determined. Car/Chl ratios could be estimated by the ratios of MFI (FL1/FL3, FL2/FL3). FCM is thus a highly efficient and feasible method for rapid estimation of astaxanthin content in the green microalga C. zofingiensis. The rapid FCM method is complementary to the current HPLC method, especially for rapid evaluation and prediction of astaxanthin formation as it is required during the high-throughput culture in the laboratory and mass cultivation in industry. PMID:28753934

  6. The Independent Associations between Walk Score® and Neighborhood Socioeconomic Status, Waist Circumference, Waist-To-Hip Ratio and Body Mass Index Among Urban Adults.

    PubMed

    McCormack, Gavin R; Blackstaffe, Anita; Nettel-Aguirre, Alberto; Csizmadi, Ilona; Sandalack, Beverly; Uribe, Francisco Alaniz; Rayes, Afrah; Friedenreich, Christine; Potestio, Melissa L

    2018-06-11

    Background: Environmental and policy factors can influence weight status via facilitating or discouraging physical activity and healthy diet. Despite mixed evidence, some findings suggest that the neighborhood built environment, including “walkability”, is associated with overweight and obesity. Most of these findings have measured body mass index (BMI), yet other weight status measures including waist circumference (WC) and waist-to-hip (W-H) ratio are also predictive of health outcomes, independent of BMI. Our study aim was to estimate the associations between walkability, measured using Walk Score®, and each of WC, W-H ratio, and BMI among urban Canadian adults. Methods: In 2014, n = 851 adults recruited from 12 structurally and socioeconomically diverse neighborhoods (Calgary, Alberta, Canada) provided complete data on a physical activity, health and demographic questionnaire and self-reported anthropometric measures (i.e., height and weight, WC and hip circumference). Anthropometric data were used to estimate WC, W-H ratio, and BMI, which were categorized into low and high risk in relation to their potential adverse effect on health. WC and BMI were also combined to provide a proxy measure of both overall and abdominal adiposity. Multivariable logistic regression models estimated odds ratios (OR) and 95% confidence intervals (CI) for associations between each weight status outcome and Walk Score®. Results: A one-unit increase in Walk Score® was associated with lower odds of being high-risk based on WC (OR = 0.99; 95% CI 0.97–0.99). Notably, those residing in socioeconomically disadvantaged neighborhoods had significantly higher odds of being high risk based on WC, BMI, and WC-BMI combined compared with advantaged neighborhoods. Conclusions: Interventions that promote healthy weight through the design of neighborhoods that support and enhance the effect of physical activity and diet-related interventions could have a significant population health impact.

  7. Variable disparity estimation based intermediate view reconstruction in dynamic flow allocation over EPON-based access networks

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-Hoon; Lee, Jungjoon; Kim, Eun-Soo

    2008-06-01

    In this paper, a variable disparity estimation (VDE)-based intermediate view reconstruction (IVR) in dynamic flow allocation (DFA) over an Ethernet passive optical network (EPON)-based access network is proposed. In the proposed system, the stereoscopic images are estimated by a variable block-matching algorithm (VBMA), and they are transmitted to the receiver through DFA over EPON. This scheme improves a priority-based access network by converting it to a flow-based access network with a new access mechanism and scheduling algorithm, and then 16-view images are synthesized by the IVR using VDE. Some experimental results indicate that the proposed system improves the peak signal-to-noise ratio (PSNR) by as much as 4.86 dB and reduces the processing time to 3.52 s. Additionally, the network service provider can provide upper limits on transmission delays per flow. The modeling and simulation results, including mathematical analyses, from this scheme are also provided.

  8. UWB pulse detection and TOA estimation using GLRT

    NASA Astrophysics Data System (ADS)

    Xie, Yan; Janssen, Gerard J. M.; Shakeri, Siavash; Tiberius, Christiaan C. J. M.

    2017-12-01

    In this paper, a novel statistical approach is presented for time-of-arrival (TOA) estimation based on first path (FP) pulse detection using a sub-Nyquist sampling ultra-wide band (UWB) receiver. The TOA measurement accuracy, which cannot be improved by averaging of the received signal, can be enhanced by the statistical processing of a number of TOA measurements. The TOA statistics are modeled and analyzed for a UWB receiver using threshold crossing detection of a pulse signal with noise. The detection and estimation scheme based on the Generalized Likelihood Ratio Test (GLRT) detector, which captures the full statistical information of the measurement data, is shown to achieve accurate TOA estimation and allows for a trade-off between the threshold level, the noise level, the amplitude and the arrival time of the first path pulse, and the accuracy of the obtained final TOA.

  9. Stereovision-based pose and inertia estimation of unknown and uncooperative space objects

    NASA Astrophysics Data System (ADS)

    Pesce, Vincenzo; Lavagna, Michèle; Bevilacqua, Riccardo

    2017-01-01

    Autonomous close proximity operations are an arduous and attractive problem in space mission design. In particular, the estimation of pose, motion and inertia properties of an uncooperative object is a challenging task because of the lack of available a priori information. This paper develops a novel method to estimate the relative position, velocity, angular velocity, attitude and the ratios of the components of the inertia matrix of an uncooperative space object using only stereo-vision measurements. The classical Extended Kalman Filter (EKF) and an Iterated Extended Kalman Filter (IEKF) are used and compared for the estimation procedure. In addition, in order to compute the inertia properties, the ratios of the inertia components are added to the state and a pseudo-measurement equation is considered in the observation model. The relative simplicity of the proposed algorithm could make it suitable for online implementation in real applications. The developed algorithm is validated by numerical simulations in MATLAB using different initial conditions and uncertainty levels. The goal of the simulations is to verify the accuracy and robustness of the proposed estimation algorithm. The obtained results show satisfactory convergence of estimation errors for all the considered quantities, and in several simulations they show improvements with respect to similar works in the literature that address the same problem. In addition, a video processing procedure is presented to reconstruct the geometrical properties of a body using cameras. This inertia reconstruction algorithm has been experimentally validated at the ADAMUS (ADvanced Autonomous MUltiple Spacecraft) Lab at the University of Florida. In the future, this method could be integrated with the inertia-ratio estimator to provide a complete tool for mass-property recognition.

  10. The method for homography estimation between two planes based on lines and points

    NASA Astrophysics Data System (ADS)

    Shemiakina, Julia; Zhukovsky, Alexander; Nikolaev, Dmitry

    2018-04-01

    The paper considers the problem of estimating a transform connecting two images of one planar object. A RANSAC-based method is proposed for calculating the parameters of the projective transform that uses point and line correspondences simultaneously. A series of experiments was performed on synthesized data. The results show that the algorithm's convergence rate is significantly higher when actual lines are used instead of points of line intersection. When both lines and feature points are used, the convergence rate does not depend on the ratio of lines to feature points in the input dataset.

  11. Deletion Diagnostics for Alternating Logistic Regressions

    PubMed Central

    Preisser, John S.; By, Kunthel; Perin, Jamie; Qaqish, Bahjat F.

    2013-01-01

    Deletion diagnostics are introduced for the regression analysis of clustered binary outcomes estimated with alternating logistic regressions, an implementation of generalized estimating equations (GEE) that estimates regression coefficients in a marginal mean model and in a model for the intracluster association given by the log odds ratio. The diagnostics are developed within an estimating equations framework that recasts the estimating functions for association parameters based upon conditional residuals into equivalent functions based upon marginal residuals. Extensions of earlier work on GEE diagnostics follow directly, including computational formulae for one-step deletion diagnostics that measure the influence of a cluster of observations on the estimated regression parameters and on the overall marginal mean or association model fit. The diagnostic formulae are evaluated with simulation studies and with an application concerning an assessment of factors associated with health maintenance visits in primary care medical practices. The application and the simulations demonstrate that the proposed cluster-deletion diagnostics for alternating logistic regressions are good approximations of their exact fully iterated counterparts. PMID:22777960

  12. Development of a modeling approach to estimate indoor-to-outdoor sulfur ratios and predict indoor PM2.5 and black carbon concentrations for Eastern Massachusetts households

    PubMed Central

    Tang, Chia Hsi; Garshick, Eric; Grady, Stephanie; Coull, Brent; Schwartz, Joel; Koutrakis, Petros

    2018-01-01

    The effects of indoor air pollution on human health have drawn increasing attention among the scientific community as individuals spend most of their time indoors. However, indoor air sampling is labor-intensive and costly, which limits the ability to study the adverse health effects related to indoor air pollutants. To overcome this challenge, many researchers have attempted to predict indoor exposures based on outdoor pollutant concentrations, home characteristics, and weather parameters. Typically, these models require knowledge of the infiltration factor, which indicates the fraction of ambient particles that penetrates indoors. For estimating indoor fine particulate matter (PM2.5) exposure, a common approach is to use the indoor-to-outdoor sulfur ratio (Sindoor/Soutdoor) as a proxy of the infiltration factor. The objective of this study was to develop a robust model that estimates Sindoor/Soutdoor for individual households that can be incorporated into models to predict indoor PM2.5 and black carbon (BC) concentrations. Overall, our model adequately estimated Sindoor/Soutdoor with an out-of-sample by home-season R2 of 0.89. Estimated Sindoor/Soutdoor reflected behaviors that influence particle infiltration, including window opening, use of forced air heating, and air purifier. Sulfur ratio-adjusted models predicted indoor PM2.5 and BC with high precision, with out-of-sample R2 values of 0.79 and 0.76, respectively. PMID:29064481

  13. Likelihood ratios for glaucoma diagnosis using spectral-domain optical coherence tomography.

    PubMed

    Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M; Weinreb, Robert N; Medeiros, Felipe A

    2013-11-01

    To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral-domain optical coherence tomography (spectral-domain OCT). Observational cohort study. A total of 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the receiver operating characteristic (ROC) curve. Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive likelihood ratios (ie, likelihood ratios greater than 1), whereas RNFL thickness values higher than 86 μm were associated with negative likelihood ratios (ie, likelihood ratios smaller than 1). A modified Fagan nomogram was provided to assist calculation of posttest probability of disease from the calculated likelihood ratios and pretest probability of disease. The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision making. Copyright © 2013. Published by Elsevier Inc.
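
    The clinical use of such likelihood ratios is Bayes' rule on the odds scale, which a Fagan nomogram performs graphically. A minimal sketch, with an assumed pretest probability and likelihood ratio rather than the study's values:

    ```python
    def posttest_probability(pretest_prob, likelihood_ratio):
        """Bayes' rule on the odds scale, the calculation a Fagan nomogram performs:
        posttest odds = pretest odds x likelihood ratio."""
        pretest_odds  = pretest_prob / (1.0 - pretest_prob)
        posttest_odds = pretest_odds * likelihood_ratio
        return posttest_odds / (1.0 + posttest_odds)

    # Illustrative numbers (not from the study): a patient with a 30% pretest
    # probability of glaucoma and an RNFL thickness whose likelihood ratio is 5.
    print(f"{posttest_probability(0.30, 5.0):.2f}")   # -> 0.68
    ```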

  14. Hyperspectral narrowband and multispectral broadband indices for remote sensing of crop evapotranspiration and its components (transpiration and soil evaporation)

    USGS Publications Warehouse

    Marshall, Michael T.; Thenkabail, Prasad S.; Biggs, Trent; Post, Kirk

    2016-01-01

    Evapotranspiration (ET) is an important component of micro- and macro-scale climatic processes. In agriculture, estimates of ET are frequently used to monitor droughts, schedule irrigation, and assess crop water productivity over large areas. Currently, in situ measurements of ET are difficult to scale up for regional applications, so remote sensing technology has been increasingly used to estimate crop ET. Ratio-based vegetation indices retrieved from optical remote sensing, like the Normalized Difference Vegetation Index (NDVI), Soil Adjusted Vegetation Index, and Enhanced Vegetation Index, are critical components of these models, particularly for the partitioning of ET into transpiration and soil evaporation. These indices have their limitations, however, and can induce large model bias and error. In this study, micrometeorological and spectroradiometric data collected over two growing seasons in cotton, maize, and rice fields in the Central Valley of California were used to identify spectral wavelengths from 428 to 2295 nm that produced the highest correlation to and lowest error with ET, transpiration, and soil evaporation. The analysis was performed with hyperspectral narrowbands (HNBs) at 10 nm intervals and multispectral broadbands (MSBBs) commonly retrieved by Earth observation platforms. The study revealed that (1) HNB indices consistently explained more variability in ET (ΔR2 = 0.12), transpiration (ΔR2 = 0.17), and soil evaporation (ΔR2 = 0.14) than MSBB indices; (2) the relationship between transpiration and the ratio-based index most commonly used for ET modeling, NDVI, was strong (R2 = 0.51), but the hyperspectral equivalent was superior (R2 = 0.68); and (3) soil evaporation was not estimated well using ratio-based indices from the literature (highest R2 = 0.37), but could be, after further evaluation, using ratio-based indices centered on 743 and 953 nm (R2 = 0.72) or 428 and 1518 nm (R2 = 0.69).
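
    The ratio-based indices discussed here share the normalized-difference form familiar from NDVI. The sketch below computes such an index for two hypothetical narrowband reflectances; the band centers and values are illustrative, and the study's best-performing index form may differ.

    ```python
    import numpy as np

    def normalized_difference(band_a, band_b):
        """Generic two-band ratio index (R_a - R_b) / (R_a + R_b), the same form as
        NDVI; the band pairing below is only an illustration of the narrowband idea."""
        band_a = np.asarray(band_a, dtype=float)
        band_b = np.asarray(band_b, dtype=float)
        return (band_a - band_b) / (band_a + band_b)

    # Hypothetical canopy reflectances at two narrowband centers (e.g., 953 and 743 nm).
    r_953 = np.array([0.42, 0.38, 0.45])
    r_743 = np.array([0.30, 0.33, 0.28])
    print(normalized_difference(r_953, r_743))
    ```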

  15. Pressure Ratio to Thermal Environments

    NASA Technical Reports Server (NTRS)

    Lopez, Pedro; Wang, Winston

    2012-01-01

    A pressure ratio to thermal environments (PRatTlE.pl) program is a Perl language code that estimates heating at requested body point locations by scaling the heating at a reference location by a pressure ratio factor. The pressure ratio factor is the ratio of the local pressures at the reference point and the requested point, taken from CFD (computational fluid dynamics) solutions. This innovation provides pressure ratio-based thermal environments in an automated and traceable manner. Previously, the pressure ratio methodology was implemented via a Microsoft Excel spreadsheet and macro scripts. PRatTlE is able to calculate heating environments for 150 body points in less than two minutes. PRatTlE is coded in the Perl programming language, is command-line-driven, and has been successfully executed on both the HP and Linux platforms. It supports multiple concurrent runs. PRatTlE contains error trapping and input file format verification, which allows clear visibility into the input data structure and intermediate calculations.
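
    A minimal sketch of the pressure-ratio scaling idea (not the PRatTlE code itself) is shown below. The orientation of the ratio, the body-point names, and all numbers are assumptions for illustration only.

    ```python
    def scaled_heating(q_ref, p_body_point, p_ref):
        """Estimate heating at a requested body point by scaling the reference-point
        heating with a local pressure ratio from CFD solutions; the ratio's
        orientation (body point over reference) is an assumption for illustration."""
        return q_ref * (p_body_point / p_ref)

    # Hypothetical inputs: reference heating rate and CFD surface pressures (Pa).
    q_reference = 25.0          # W/cm^2 at the reference body point
    p_reference = 3.8e4
    pressures = {"BP101": 4.2e4, "BP102": 3.1e4, "BP103": 5.0e4}

    for name, p in pressures.items():
        print(f"{name}: {scaled_heating(q_reference, p, p_reference):.1f} W/cm^2")
    ```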

  16. 40 CFR 63.1215 - What are the health-based compliance alternatives for total chlorine?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (iii) Ratios for new sources. (A) You must use engineering information to estimate the Cl2/HCl... be obtained through the EPA's Air Toxics Web site at http://www.epa.gov/ttn/fera/risk_atra_main.html...

  17. 40 CFR 63.1215 - What are the health-based compliance alternatives for total chlorine?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (iii) Ratios for new sources. (A) You must use engineering information to estimate the Cl2/HCl... be obtained through the EPA's Air Toxics Web site at http://www.epa.gov/ttn/fera/risk_atra_main.html...

  18. Optical and laser spectroscopic diagnostics for energy applications

    NASA Astrophysics Data System (ADS)

    Tripathi, Markandey Mani

    The continuing need for greater energy security and energy independence has motivated researchers to develop new energy technologies for better energy resource management and efficient energy usage. The focus of this dissertation is the development of optical (spectroscopic) sensing methodologies for various fuels and energy applications. A fiber-optic NIR sensing methodology was developed for predicting water content in bio-oil. The feasibility of using the designed near infrared (NIR) system for estimating water content in bio-oil was tested by applying multivariate analysis to NIR spectral data. The calibration results demonstrated that the spectral information can successfully predict the bio-oil water content (from 16% to 36%). The effect of ultraviolet (UV) light on the chemical stability of bio-oil was studied by employing laser-induced fluorescence (LIF) spectroscopy. To simulate the UV light exposure, a laser in the UV region (325 nm) was employed for bio-oil excitation. The LIF, as a signature of chemical change, was recorded from bio-oil. From this study, it was concluded that phenols present in the bio-oil show chemical instability when exposed to UV light. A laser-induced breakdown spectroscopy (LIBS)-based optical sensor was designed, developed, and tested for detection of four important trace impurities in rocket fuel (hydrogen). The sensor can simultaneously measure the concentrations of nitrogen, argon, oxygen, and helium in hydrogen from storage tanks and supply lines. The sensor had estimated lower detection limits of 80 ppm for nitrogen, 97 ppm for argon, 10 ppm for oxygen, and 25 ppm for helium. Chemiluminescence-based spectroscopic diagnostics were performed to measure equivalence ratios in methane-air premixed flames. A partial least-squares regression (PLS-R)-based multivariate sensing methodology was investigated. It was found that the equivalence ratios predicted with the PLS-R-based multivariate calibration model matched the experimentally measured equivalence ratios within 7%. A comparative study was performed for equivalence ratio measurement in atmospheric premixed methane-air flames with ungated LIBS and chemiluminescence spectroscopy. It was reported that LIBS-based calibration, which carries spectroscopic information from a "point-like-volume," provides better predictions of equivalence ratios compared to chemiluminescence-based calibration, which is essentially a "line-of-sight" measurement.

  19. Crude and intrinsic birth rates for Asian countries.

    PubMed

    Rele, J R

    1978-01-01

    An attempt to estimate birth rates for Asian countries. The main source of information in developing countries has been the census age-sex distribution, although inaccuracies in the basic data have made it difficult to reach a high degree of accuracy. Different methods bring widely varying results. The methodology presented here is based on the use of the conventional child-woman ratio from the census age-sex distribution, together with a rough estimate of the expectation of life at birth. From the established relationship between the child-woman ratio and the intrinsic birth rate, of the form y = a + bx + cx², at each level of life expectation, the intrinsic birth rate is first computed using coefficients already derived. The crude birth rate is then obtained using an adjustment based on the census age-sex distribution. An advantage of this methodology is that the intrinsic birth rate, normally an involved computation, can be obtained relatively easily as a byproduct of the crude birth rate estimation. Crude and intrinsic birth rates, and the bases for the calculations, are given for each of 33 Asian countries, in some cases over several time periods.
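
    The central step, evaluating the fitted quadratic relation y = a + bx + cx² between the child-woman ratio and the intrinsic birth rate, is sketched below. The coefficients and the input ratio are placeholders for illustration, not Rele's published values.

    ```python
    def intrinsic_birth_rate(child_woman_ratio, a, b, c):
        """Evaluate y = a + b*x + c*x**2, the fitted relation between the
        child-woman ratio x and the intrinsic birth rate y at a given level of
        life expectancy; coefficients here are placeholders, not Rele's."""
        x = child_woman_ratio
        return a + b * x + c * x**2

    # Hypothetical coefficients for one life-expectancy level and an observed
    # child-woman ratio (children 0-4 per woman 15-49) from a census.
    print(f"{intrinsic_birth_rate(0.55, a=5.0, b=55.0, c=-8.0):.1f} per 1000")
    ```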

  20. Selection bias due to differential participation in a case-control study of mobile phone use and brain tumors.

    PubMed

    Lahkola, Anna; Salminen, Tiina; Auvinen, Anssi

    2005-05-01

    To evaluate the possible selection bias related to the differential participation of mobile phone users and non-users in a Finnish case-control study on mobile phone use and brain tumors. Mobile phone use was investigated among 777 controls and 726 cases participating in the full personal interview (full participants), and 321 controls and 103 cases giving only a brief phone interview (incomplete participants). To assess selection bias, the Mantel-Haenszel estimate of odds ratio was calculated for three different groups: full study participants, incomplete participants, and a combined group consisting of both full and incomplete participants. Among controls, 83% of the full participants and 73% of the incomplete participants had regularly used a mobile phone. Among cases, the figures were 76% and 64%, respectively. The odds ratio for brain tumor based on the combined group of full and incomplete participants was slightly closer to unity than that based only on the full participants. Selection bias tends to distort the effect estimates below unity, while analyses based on more comprehensive material gave results close to unity.
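
    The Mantel-Haenszel summary odds ratio used here pools 2x2 tables across strata. A minimal sketch with invented counts (not the study's data) for two participation strata:

    ```python
    def mantel_haenszel_or(strata):
        """Mantel-Haenszel summary odds ratio over 2x2 strata given as
        (a, b, c, d) = (exposed cases, unexposed cases, exposed controls,
        unexposed controls)."""
        num = 0.0
        den = 0.0
        for a, b, c, d in strata:
            n = a + b + c + d
            num += a * d / n
            den += b * c / n
        return num / den

    # Hypothetical strata (full and incomplete participants) with regular mobile
    # phone use as the exposure; counts are illustrative only.
    strata = [(550, 176, 645, 132),    # full participants
              (66,  37,  234, 87)]     # incomplete participants
    print(f"MH odds ratio = {mantel_haenszel_or(strata):.2f}")
    ```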

  1. The bioavailability of iron, zinc, protein and vitamin A is highly variable in French individual diets: Impact on nutrient inadequacy assessment and relation with the animal-to-plant ratio of diets.

    PubMed

    Perignon, Marlène; Barré, Tangui; Gazan, Rozenn; Amiot, Marie-Josèphe; Darmon, Nicole

    2018-01-01

    Nutritional adequacy depends on nutrient intakes and bioavailability which strongly varies with the plant- or animal-origin of foods. The aim was to estimate iron, zinc, protein and vitamin A bioavailability from individual diets, and investigate its relation with the animal-to-plant ratio (A/P) of diets. Bioavailability was estimated in 1899 French diets using diet-based algorithms or food-group specific conversion factors. Nutrient inadequacy was estimated based on i) bioavailability calculated in each individual diet and ii) average bioavailability assumed for Western-diets. Mean iron absorption, zinc absorption, protein quality and β-carotene conversion factor were 13%, 30%, 92%, and 17:1, respectively. Bioavailability displayed a high variability between individual diets, poorly explained by their A/P. Using individual bioavailability led to different inadequacy prevalence than with average factors assumed for Western-diets. In this population, the A/P does not seem sufficient to predict nutrient bioavailability and the corresponding recommended intakes. Nutritional adequacy should be assessed using bioavailability accounting for individual diets composition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
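
    The two catch-rate estimators compared above differ only in where the averaging happens. A minimal sketch with hypothetical interview data; the trip-length exclusion follows the ≤0.5 h cutoff mentioned in the abstract, and the effort total is invented.

    ```python
    import numpy as np

    # Hypothetical interview data from one survey day: catch (fish) and completed
    # trip effort (hours) for each interviewed angler party.
    catch  = np.array([0, 2, 1, 0, 3, 1, 0, 2])
    effort = np.array([1.5, 4.0, 3.0, 0.5, 6.0, 2.5, 1.0, 5.0])

    rom = catch.sum() / effort.sum()                     # ratio of means
    mor = np.mean(catch / effort)                        # mean of ratios
    mor_excl = np.mean((catch / effort)[effort > 0.5])   # MOR excluding <=0.5 h trips

    total_effort = 1200.0                                # estimated angler-hours
    print(f"ROM catch rate {rom:.3f} fish/h -> total catch {rom * total_effort:.0f}")
    print(f"MOR catch rate {mor:.3f} fish/h -> total catch {mor * total_effort:.0f}")
    print(f"MOR (>0.5 h trips) {mor_excl:.3f} fish/h")
    ```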

  3. A hybrid double-observer sightability model for aerial surveys

    USGS Publications Warehouse

    Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine

    2013-01-01

    Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.

  4. Contrast-to-noise ratio optimization for a prototype phase-contrast computed tomography scanner.

    PubMed

    Müller, Mark; Yaroshenko, Andre; Velroyen, Astrid; Bech, Martin; Tapfer, Arne; Pauwels, Bart; Bruyndonckx, Peter; Sasov, Alexander; Pfeiffer, Franz

    2015-12-01

    In the field of biomedical X-ray imaging, novel techniques, such as phase-contrast and dark-field imaging, have the potential to enhance the contrast and provide complementary structural information about a specimen. In this paper, a first prototype of a preclinical X-ray phase-contrast CT scanner based on a Talbot-Lau interferometer is characterized. We present a study of the contrast-to-noise ratios for attenuation and phase-contrast images acquired with the prototype scanner. The shown results are based on a series of projection images and tomographic data sets of a plastic phantom in phase and attenuation-contrast recorded with varying acquisition settings. Subsequently, the signal and noise distribution of different regions in the phantom were determined. We present a novel method for estimation of contrast-to-noise ratios for projection images based on the cylindrical geometry of the phantom. Analytical functions, representing the expected signal in phase and attenuation-contrast for a circular object, are fitted to individual line profiles of the projection data. The free parameter of the fit function is used to estimate the contrast and the goodness of the fit is determined to assess the noise in the respective signal. The results depict the dependence of the contrast-to-noise ratios on the applied source voltages, the number of steps of the phase stepping routine, and the exposure times for an individual step. Moreover, the influence of the number of projection angles on the image quality of CT slices is investigated. Finally, the implications for future imaging purposes with the scanner are discussed.

  5. A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors

    PubMed Central

    Zhu, Jianliang; Wu, Panlong; Bo, Yuming

    2016-01-01

    Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbance, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile's rotational velocity is constant within one spin cycle and that the attitude error is essentially the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the magnetic sensor data preprocessed with the least-squares method and the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than does the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point in one rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high-spin environment, which is a clearly superior calculation approach and can be applied under actual projectile environment disturbances. PMID:27213389

  6. Slip Ratio Estimation and Regenerative Brake Control for Decelerating Electric Vehicles without Detection of Vehicle Velocity and Acceleration

    NASA Astrophysics Data System (ADS)

    Suzuki, Toru; Fujimoto, Hiroshi

    In slip ratio control systems, it is necessary to detect the vehicle velocity in order to obtain the slip ratio. However, it is very difficult to measure this velocity directly. We have previously proposed slip ratio estimation and control methods that do not require the vehicle velocity but instead use the acceleration. In this paper, slip ratio estimation and control methods are proposed that require neither the vehicle velocity nor the acceleration during deceleration. We carried out simulations and experiments using an electric vehicle to verify the effectiveness of the proposed methods.
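
    For orientation, one common definition of the braking slip ratio is sketched below. Sign and normalization conventions vary across the literature, and the point of the paper's estimator is precisely to avoid needing the vehicle velocity that this direct formula requires; the speeds are invented.

    ```python
    def slip_ratio_braking(wheel_speed, vehicle_speed):
        """Slip ratio during braking, lambda = (V - r*omega) / V, using one common
        convention (an assumption, not necessarily the paper's definition)."""
        return (vehicle_speed - wheel_speed) / max(vehicle_speed, 1e-6)

    # Hypothetical speeds in m/s: wheel circumferential speed r*omega and vehicle speed V.
    print(f"{slip_ratio_braking(wheel_speed=16.0, vehicle_speed=20.0):.2f}")  # -> 0.20
    ```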

  7. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska.

    Treesearch

    Willem W.S. van Hees

    2002-01-01

    Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
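
    A bootstrap approximation of the ROM standard error, of the kind compared above, can be sketched as follows. The plot-level volumes and areas are invented, and the traditional (linearized) variance estimator is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical mapped-plot data: measured volume and sampled area per plot
    # (illustrative units), from which the ROM estimates volume per unit area.
    volume = np.array([12.0, 30.5, 0.0, 8.2, 22.1, 15.4, 40.3, 5.6, 18.9, 27.0])
    area   = np.array([1.0,  1.0,  0.4, 0.7, 1.0,  1.0,  1.0,  0.5, 0.9,  1.0])

    def rom(v, a):
        """Ratio-of-means estimator: total volume over total area."""
        return v.sum() / a.sum()

    # Nonparametric bootstrap: resample plots with replacement, recompute the ROM.
    n = len(volume)
    boot = np.empty(2000)
    for i in range(2000):
        idx = rng.integers(0, n, n)
        boot[i] = rom(volume[idx], area[idx])

    print(f"ROM = {rom(volume, area):.2f}, bootstrap SE = {boot.std(ddof=1):.2f}")
    ```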

  8. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    PubMed

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded to exclude the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate, and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
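
    The mechanics of summing per-study log-likelihood ratio functions can be sketched with a normal approximation. The study estimates, standard errors, and the 1/8 support cutoff below are illustrative assumptions rather than the paper's data or exact procedure.

    ```python
    import numpy as np

    # Hypothetical per-study log hazard ratio estimates and standard errors
    # (illustrative, not the CAPRIE or statin data).
    b  = np.array([-0.10, -0.25, 0.05, -0.18])
    se = np.array([ 0.08,  0.12, 0.15,  0.10])

    theta = np.linspace(-0.6, 0.4, 2001)

    # Normal-approximation log-likelihood for each study, summed across studies.
    loglik = np.zeros_like(theta)
    for bi, si in zip(b, se):
        loglik += -0.5 * ((theta - bi) / si) ** 2

    loglik -= loglik.max()                  # log-likelihood ratio vs. the maximum
    mle = theta[np.argmax(loglik)]

    # "Intrinsic" interval: all theta whose likelihood is within a factor 1/8 of
    # the maximum (the 1/8 support level is an illustrative choice).
    support = theta[loglik >= np.log(1.0 / 8.0)]
    print(f"combined estimate (log HR) = {mle:.3f}")
    print(f"1/8 support interval       = ({support.min():.3f}, {support.max():.3f})")
    print(f"as hazard ratios           = ({np.exp(support.min()):.2f}, {np.exp(mle):.2f}, {np.exp(support.max()):.2f})")
    ```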

  9. Base Pressure at Supersonic Speeds on Two-dimensional Airfoils and on Bodies of Revolution with and Without Fins Having Turbulent Boundary Layers

    NASA Technical Reports Server (NTRS)

    Love, Eugene S.

    1957-01-01

    An analysis has been made of available experimental data to show the effects of most of the variables that are more predominant in determining base pressure at supersonic speeds. The analysis covers base pressures for two-dimensional airfoils and for bodies of revolution with and without stabilizing fins and is restricted to turbulent boundary layers. The present status of available experimental information is summarized as are the existing methods for predicting base pressure. A simple semiempirical method is presented for estimating base pressure. For two-dimensional bases, this method stems from an analogy established between the base-pressure phenomena and the peak pressure rise associated with the separation of the boundary layer. An analysis made for axially symmetric flow indicates that the base pressure for bodies of revolution is subject to the same analogy. Based upon the methods presented, estimations are made of such effects as Mach number, angle of attack, boattailing, fineness ratio, and fins. These estimations give fair predictions of experimental results. (author)

  10. Comprehensive analysis of proton range uncertainties related to patient stopping-power-ratio estimation using the stoichiometric calibration

    PubMed Central

    Yang, M; Zhu, X R; Park, PC; Titt, Uwe; Mohan, R; Virshup, G; Clayton, J; Dong, L

    2012-01-01

    The purpose of this study was to analyze factors affecting proton stopping-power-ratio (SPR) estimations and range uncertainties in proton therapy planning using the standard stoichiometric calibration. The SPR uncertainties were grouped into five categories according to their origins and then estimated based on previously published reports or measurements. For the first time, the impact of tissue composition variations on SPR estimation was assessed and the uncertainty estimates of each category were determined for low-density (lung), soft, and high-density (bone) tissues. A composite, 95th percentile water-equivalent-thickness uncertainty was calculated from multiple beam directions in 15 patients with various types of cancer undergoing proton therapy. The SPR uncertainties (1σ) were quite different (ranging from 1.6% to 5.0%) in different tissue groups, although the final combined uncertainty (95th percentile) for different treatment sites was fairly consistent at 3.0–3.4%, primarily because soft tissue is the dominant tissue type in human body. The dominant contributing factor for uncertainties in soft tissues was the degeneracy of Hounsfield Numbers in the presence of tissue composition variations. To reduce the overall uncertainties in SPR estimation, the use of dual-energy computed tomography is suggested. The values recommended in this study based on typical treatment sites and a small group of patients roughly agree with the commonly referenced value (3.5%) used for margin design. By using tissue-specific range uncertainties, one could estimate the beam-specific range margin by accounting for different types and amounts of tissues along a beam, which may allow for customization of range uncertainty for each beam direction. PMID:22678123

  11. Top-down estimates of methane and nitrogen oxide emissions from shale gas production regions using aircraft measurements and a mesoscale Bayesian inversion system together with a flux ratio inversion technique

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Brioude, J. F.; Angevine, W. M.; McKeen, S. A.; Henze, D. K.; Bousserez, N.; Liu, Z.; McDonald, B.; Peischl, J.; Ryerson, T. B.; Frost, G. J.; Trainer, M.

    2016-12-01

    Production of unconventional natural gas grew rapidly during the past ten years in the US which led to an increase in emissions of methane (CH4) and, depending on the shale region, nitrogen oxides (NOx). In terms of radiative forcing, CH4 is the second most important greenhouse gas after CO2. NOx is a precursor of ozone (O3) in the troposphere and nitrate particles, both of which are regulated by the US Clean Air Act. Emission estimates of CH4 and NOx from the shale regions are still highly uncertain. We present top-down estimates of CH4 and NOx surface fluxes from the Haynesville and Fayetteville shale production regions using aircraft data collected during the Southeast Nexus of Climate Change and Air Quality (SENEX) field campaign (June-July, 2013) and the Shale Oil and Natural Gas Nexus (SONGNEX) field campaign (March-May, 2015) within a mesoscale inversion framework. The inversion method is based on a mesoscale Bayesian inversion system using multiple transport models. EPA's 2011 National CH4 and NOx Emission Inventories are used as prior information to optimize CH4 and NOx emissions. Furthermore, the posterior CH4 emission estimates are used to constrain NOx emission estimates using a flux ratio inversion technique. Sensitivity of the posterior estimates to the use of off-diagonal terms in the error covariance matrices, the transport models, and prior estimates is discussed. Compared to the ground-based in-situ observations, the optimized CH4 and NOx inventories improve ground level CH4 and O3 concentrations calculated by the Weather Research and Forecasting mesoscale model coupled with chemistry (WRF-Chem).

  12. Stellar mass functions and implications for a variable IMF

    NASA Astrophysics Data System (ADS)

    Bernardi, M.; Sheth, R. K.; Fischer, J.-L.; Meert, A.; Chae, K.-H.; Dominguez-Sanchez, H.; Huertas-Company, M.; Shankar, F.; Vikram, V.

    2018-03-01

    Spatially resolved kinematics of nearby galaxies has shown that the ratio of dynamical to stellar population-based estimates of the mass of a galaxy (M*^JAM/M*) correlates with σ_e, the light-weighted velocity dispersion within its half-light radius, if M* is estimated using the same initial mass function (IMF) for all galaxies and the stellar mass-to-light ratio within each galaxy is constant. This correlation may indicate that, in fact, the IMF is more bottom-heavy or dwarf-rich for galaxies with large σ. We use this correlation to estimate a dynamical or IMF-corrected stellar mass, M*^αJAM, from M* and σ_e for a sample of 6 × 10⁵ Sloan Digital Sky Survey (SDSS) galaxies for which spatially resolved kinematics is not available. We also compute the `virial' mass estimate k(n, R) R_e σ_R²/G, where n is the Sérsic index, in the SDSS and ATLAS3D samples. We show that an n-dependent correction must be applied to the k(n, R) values provided by Prugniel & Simien. Our analysis also shows that the shape of the velocity dispersion profile in the ATLAS3D sample varies weakly with n: (σ_R/σ_e) = (R/R_e)^(-γ(n)). The resulting stellar mass functions, based on M*^αJAM and the recalibrated virial mass, are in good agreement. Using a Fundamental Plane-based observational proxy for σ_e produces comparable results. The use of direct measurements for estimating the IMF-dependent stellar mass is prohibitively expensive for a large sample of galaxies. By demonstrating that cheaper proxies are sufficiently accurate, our analysis should enable a more reliable census of the mass in stars, especially at high redshift, at a fraction of the cost. Our results are provided in tabular form.

  13. The Change in Oceanic O2 Inventory Associated with Recent Global Warming

    NASA Technical Reports Server (NTRS)

    Keeling, Ralph; Garcia, Hernan

    2002-01-01

    Ocean general circulation models predict that global warming may cause a decrease in the oceanic O2 inventory and an associated O2 outgassing. An independent argument is presented here in support of this prediction based on observational evidence of the ocean's biogeochemical response to natural warming. On time scales from seasonal to centennial, natural O2 flux/heat flux ratios are shown to occur in a range of 2 to 10 nmol O2 per Joule of warming, with larger ratios typically occurring at higher latitudes and over longer time scales. The ratios are several times larger than would be expected solely from the effect of heating on the O2 solubility, indicating that most of the O2 exchange is biologically mediated through links between heating and stratification. The change in oceanic O2 inventory through the 1990s is estimated to be 0.3-0.4 × 10¹⁴ mol O2 per year based on scaling the observed anomalous long-term ocean warming by natural O2 flux/heating ratios and allowing for uncertainty due to decadal variability. Implications are discussed for carbon budgets based on observed changes in atmospheric O2/N2 ratio and based on observed changes in ocean dissolved inorganic carbon.

  14. Integrating data from multiple sources for insights into demographic processes: Simulation studies and proof of concept for hierarchical change-in-ratio models.

    PubMed

    Nilsen, Erlend B; Strand, Olav

    2018-01-01

    We developed a model for estimating demographic rates and population abundance based on multiple data sets revealing information about population age- and sex structure. Such models have previously been described in the literature as change-in-ratio models, but we extend the applicability of the models by i) using time series data allowing the full temporal dynamics to be modelled, by ii) casting the model in an explicit hierarchical modelling framework, and by iii) estimating parameters based on Bayesian inference. Based on sensitivity analyses we conclude that the approach developed here is able to obtain estimates of demographic rates with high precision whenever unbiased data of population structure are available. Our simulations revealed that this was true also when data on population abundance are not available or not included in the modelling framework. Nevertheless, when data on population structure are biased due to different observability of different age- and sex categories, this will affect estimates of all demographic rates. Estimates of population size are particularly sensitive to such biases, whereas demographic rates can be relatively precisely estimated even with biased observation data as long as the bias is not severe. We then use the models to estimate demographic rates and population abundance for two Norwegian reindeer (Rangifer tarandus) populations where age-sex data were available for all harvested animals, and where population structure surveys were carried out in early summer (after calving) and late fall (after hunting season), and population size is counted in winter. We found that demographic rates were similar regardless of whether we included population count data in the modelling, but that the estimated population size is affected by this decision. This suggests that monitoring programs that focus on population age- and sex structure will benefit from collecting additional data that allow estimation of observability for different age- and sex classes. In addition, our sensitivity analysis suggests that focusing monitoring towards changes in demographic rates might be more feasible than monitoring abundance in many situations where data on population age- and sex structure can be collected.
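
    For orientation, the classical closed-form change-in-ratio estimator that motivates this family of models is sketched below with invented reindeer numbers. The paper itself embeds the change-in-ratio idea in a hierarchical Bayesian time-series model rather than using this formula directly.

    ```python
    def change_in_ratio_abundance(p1, p2, removals_x, removals_total):
        """Classic two-class change-in-ratio estimator of pre-removal abundance:
        N1 = (R_x - R * p2) / (p1 - p2), where p1 and p2 are the proportions of
        the x-class (e.g., males) before and after a known removal (e.g., harvest),
        R_x the x-class removals and R the total removals.  Textbook CIR form,
        shown only for orientation."""
        return (removals_x - removals_total * p2) / (p1 - p2)

    # Hypothetical numbers: the male proportion drops from 0.35 (summer structure
    # survey) to 0.25 (fall survey) after a harvest of 400 animals, 250 of them males.
    print(f"{change_in_ratio_abundance(0.35, 0.25, 250, 400):.0f} animals")   # -> 1500
    ```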

  15. An 8-year, high-resolution reanalysis of atmospheric carbon dioxide mixing ratios based on OCO-2 and GOSAT-ACOS retrievals

    NASA Astrophysics Data System (ADS)

    Weir, B.; Chatterjee, A.; Ott, L. E.; Pawson, S.

    2017-12-01

    This talk presents an overview of results from the GEOS-Carb reanalysis of retrievals of average-column carbon dioxide (XCO2) from the Orbiting Carbon Observatory 2 (OCO-2) and Greenhouse Gases Observing Satellite (GOSAT) satellite missions. The reanalysis is a Level 3 (L3) product: a collection of 3D fields of carbon dioxide (CO2) mixing ratios every 6 hours beginning in April 2009 going until the present on a grid with a 0.5 degree horizontal resolution and 72 vertical levels from the surface to 0.01 hPa. Using an assimilation methodology based on the Goddard Earth Observing System (GEOS) atmospheric data assimilation system (ADAS), the L3 fields are weighted averages of the two satellite retrievals and predictions from the GEOS general circulation model driven by assimilated meteorology from the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2). In places and times where there are a dense number of soundings, the observations dominate the predicted mixing ratios, while the model is used to fill in locations with fewer soundings, e.g., high latitudes and the Amazon. Blending the satellite observations with model predictions has at least two notable benefits. First, it provides a bridge for evaluating the satellite retrievals and their uncertainties against a heterogeneous collection of observations including those from surface sites, towers, aircraft, and soundings from the Total Carbon Column Observing Network (TCCON). Extensive evaluations of the L3 reanalysis clearly demonstrate both the strength and the deficiency of the satellite retrievals. Second, it is possible to estimate variables from the reanalysis without introducing bias due to spatiotemporal variability in sounding coverage. For example, the assimilated product provides robust estimates of the monthly CO2 global growth rate. These monthly growth rate estimates show significant differences from estimates based on in situ observations, which have sparse coverage, and those based on model surface fluxes, which imperfectly represent key processes. This presentation discusses the implications of this finding as well as ongoing strategies to extract more information from the satellite retrievals in future L3 reanalyses.

  16. White Matter Fiber-based Analysis of T1w/T2w Ratio Map.

    PubMed

    Chen, Haiwei; Budin, Francois; Noel, Jean; Prieto, Juan Carlos; Gilmore, John; Rasmussen, Jerod; Wadhwa, Pathik D; Entringer, Sonja; Buss, Claudia; Styner, Martin

    2017-02-01

    To develop, test, evaluate and apply a novel tool for the white matter fiber-based analysis of T1w/T2w ratio maps quantifying myelin content. The cerebral white matter in the human brain develops from a mostly non-myelinated state to a nearly fully mature white matter myelination within the first few years of life. High resolution T1w/T2w ratio maps are believed to be effective in quantitatively estimating myelin content on a voxel-wise basis. We propose the use of a fiber-tract-based analysis of such T1w/T2w ratio data, as it allows us to separate fiber bundles that a common regional analysis imprecisely groups together, and to associate effects to specific tracts rather than large, broad regions. We developed an intuitive, open source tool to facilitate such fiber-based studies of T1w/T2w ratio maps. Via its Graphical User Interface (GUI) the tool is accessible to non-technical users. The framework uses calibrated T1w/T2w ratio maps and a prior fiber atlas as an input to generate profiles of T1w/T2w values. The resulting fiber profiles are used in a statistical analysis that performs along-tract functional statistical analysis. We applied this approach to a preliminary study of early brain development in neonates. We developed an open-source tool for the fiber based analysis of T1w/T2w ratio maps and tested it in a study of brain development.

  17. White matter fiber-based analysis of T1w/T2w ratio map

    NASA Astrophysics Data System (ADS)

    Chen, Haiwei; Budin, Francois; Noel, Jean; Prieto, Juan Carlos; Gilmore, John; Rasmussen, Jerod; Wadhwa, Pathik D.; Entringer, Sonja; Buss, Claudia; Styner, Martin

    2017-02-01

    Purpose: To develop, test, evaluate and apply a novel tool for the white matter fiber-based analysis of T1w/T2w ratio maps quantifying myelin content. Background: The cerebral white matter in the human brain develops from a mostly non-myelinated state to a nearly fully mature white matter myelination within the first few years of life. High resolution T1w/T2w ratio maps are believed to be effective in quantitatively estimating myelin content on a voxel-wise basis. We propose the use of a fiber-tract-based analysis of such T1w/T2w ratio data, as it allows us to separate fiber bundles that a common regional analysis imprecisely groups together, and to associate effects to specific tracts rather than large, broad regions. Methods: We developed an intuitive, open source tool to facilitate such fiber-based studies of T1w/T2w ratio maps. Via its Graphical User Interface (GUI) the tool is accessible to non-technical users. The framework uses calibrated T1w/T2w ratio maps and a prior fiber atlas as an input to generate profiles of T1w/T2w values. The resulting fiber profiles are used in a statistical analysis that performs along-tract functional statistical analysis. We applied this approach to a preliminary study of early brain development in neonates. Results: We developed an open-source tool for the fiber based analysis of T1w/T2w ratio maps and tested it in a study of brain development.

  18. Extinction-to-Backscatter Ratios of Saharan Dust Layers Derived from In-Situ Measurements and CALIPSO Overflights During NAMMA

    NASA Technical Reports Server (NTRS)

    Omar, Ali H.; Liu, Zhaoyan; Vaughan, Mark A.; Thornhill, Kenneth L., II; Kittaka, Chieko; Ismail, Syed; Chen, Gao; Powell, Kathleen A.; Winker, David M.; Trepte, Charles R.

    2010-01-01

    We determine the extinction-to-backscatter (Sa) ratios of dust using (1) airborne in-situ measurements of microphysical properties, (2) modeling studies, and (3) the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) observations recorded during the NASA African Monsoon Multidisciplinary Analyses (NAMMA) field experiment conducted from Sal, Cape Verde during Aug-Sept 2006. Using CALIPSO measurements of the attenuated backscatter of lofted Saharan dust layers, we apply the transmittance technique to estimate dust Sa ratios at 532 nm and a 2-color method to determine the corresponding 1064 nm Sa. This method yielded dust Sa ratios of 39.8 plus or minus 1.4 sr and 51.8 plus or minus 3.6 sr at 532 nm and 1064 nm, respectively. Secondly, Sa at both wavelengths is independently calculated using size distributions measured aboard the NASA DC-8 and estimates of Saharan dust complex refractive indices applied in a T-Matrix scheme. We found Sa ratios of 39.1 plus or minus 3.5 sr and 50.0 plus or minus 4 sr at 532 nm and 1064 nm, respectively, using the T-Matrix calculations applied to measured size spectra. Finally, in situ measurements of the total scattering (550 nm) and absorption coefficients (532 nm) are used to generate an extinction profile that is used to constrain the CALIPSO 532 nm extinction profile and thus generate a stratified 532 nm Sa. This method yielded an Sa ratio at 532 nm of 35.7 sr in the dust layer and 25 sr in the marine boundary layer consistent with a predominantly seasalt aerosol near the ocean surface. Combinatorial simulations using noisy size spectra and refractive indices were used to estimate the mean and uncertainty (one standard deviation) of these Sa ratios. These simulations produced a mean (plus or minus uncertainty) of 39.4 (plus or minus 5.9) sr and 56.5 (plus or minus 16.5) sr at 532 nm and 1064 nm, respectively, corresponding to percent uncertainties of 15% and 29%. These results will provide a measurements-based estimate of the dust Sa for use in backscatter lidar inversion algorithms such as CALIOP.
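
    For readers unfamiliar with the transmittance technique mentioned above, one common single-scattering form estimates the effective lidar ratio of an elevated layer from its integrated attenuated backscatter and its measured two-way transmittance. The sketch below applies that relation to invented numbers and ignores multiple-scattering corrections, so it is an illustration rather than the CALIOP retrieval itself.

      # Sketch of the transmittance-constrained lidar ratio for an elevated layer
      # (single-scattering form; multiple-scattering corrections are ignored here).
      def lidar_ratio_from_transmittance(gamma_prime, two_way_transmittance):
          """gamma_prime: layer-integrated attenuated backscatter (sr^-1)
          two_way_transmittance: T^2 measured from clear air above and below the layer."""
          return (1.0 - two_way_transmittance) / (2.0 * gamma_prime)

      # Hypothetical dust layer: T^2 = 0.72 and gamma' = 3.5e-3 sr^-1 give Sa ~ 40 sr.
      print(lidar_ratio_from_transmittance(3.5e-3, 0.72))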

  19. Neutralization of Plutonium and Enriched Uranium Solutions Containing Gadolinium as a Neutron Poison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BRONIKOWSKI, MG.

    2004-04-01

    Materials currently being dissolved in the HB-Line Facility will result in an accumulated solution containing an estimated uranium:plutonium (U:Pu) ratio of 4.3:1 and a 235U enrichment estimated at 30 percent. The U:Pu ratio and the enrichment are outside the evaluated concentration range for disposition to high-level waste (HLW) using gadolinium (Gd) as a neutron poison. To confirm that the solution generated during the current HB-Line dissolving campaign can be poisoned with Gd, neutralized, and discarded to the Savannah River Site (SRS) HLW system without undue nuclear safety concerns, the caustic precipitation of surrogate solutions was examined. Experiments were performed with a U/Pu/Gd solution representative of the HB-Line estimated concentration ratio and also with a U/Gd solution. Depleted U was used in the experiments, as the enrichment of the U will not affect the chemical behavior during neutralization but will affect the amount of Gd added to the solution. Settling behavior of the neutralized solutions was found to be comparable to previous studies. The neutralized solutions mixed easily and had densities typical of neutralized waste. The neutralized solids were found to be homogeneous and less than 20 microns in size. Partially neutralized solids were more amorphous than the fully neutralized solids. Based on the results of these experiments, Gd was found to be a viable poison for neutralizing a U/Pu/Gd solution with a U:Pu mass ratio of 4.3:1, thus extending the U:Pu mass ratio from the previously investigated 0-3:1 to 4.3:1. However, further work is needed to allow higher U concentrations or U:Pu ratios greater than those investigated in this work.

  20. Ice particle morphology and microphysical properties of cirrus clouds inferred from combined CALIOP-IIR measurements

    NASA Astrophysics Data System (ADS)

    Saito, Masanori; Iwabuchi, Hironobu; Yang, Ping; Tang, Guanglin; King, Michael D.; Sekiguchi, Miho

    2017-04-01

    Ice particle morphology and microphysical properties of cirrus clouds are essential for assessing radiative forcing associated with these clouds. We develop an optimal estimation-based algorithm to infer cirrus cloud optical thickness (COT), cloud effective radius (CER), plate fraction including quasi-horizontally oriented plates (HOPs), and the degree of surface roughness from the Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Infrared Imaging Radiometer (IIR) on the Cloud Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) platform. A simple but realistic ice particle model is used, and the relevant bulk optical properties are computed using state-of-the-art light-scattering computational capabilities. Rigorous estimation of uncertainties related to surface properties, atmospheric gases, and cloud heterogeneity is performed. The results based on the present method show that COTs are quite consistent with other satellite products and CERs essentially agree with the other counterparts. A 1 month global analysis for April 2007, in which CALIPSO off-nadir angle is 0.3°, shows that the HOP has significant temperature-dependence and is critical to the lidar ratio when cloud temperature is warmer than -40°C. The lidar ratio is calculated from the bulk optical properties based on the inferred parameters, showing robust temperature dependence. The median lidar ratio of cirrus clouds is 27-31 sr over the globe.

  1. Age estimation by pulp-to-tooth area ratio using cone-beam computed tomography: A preliminary analysis.

    PubMed

    Rai, Arpita; Acharya, Ashith B; Naikmasur, Venkatesh G

    2016-01-01

    Age estimation of living or deceased individuals is an important aspect of the forensic sciences. Conventionally, the pulp-to-tooth area ratio (PTR) measured from periapical radiographs has been utilized as a nondestructive method of age estimation. Cone-beam computed tomography (CBCT) is a newer method for acquiring three-dimensional images of the teeth in living individuals. The present study investigated age estimation based on the PTR of the maxillary canines measured in three planes obtained from CBCT image data. Sixty subjects aged 20-85 years were included in the study. For each tooth, mid-sagittal, mid-coronal, and three axial sections were assessed; the axial sections were taken at the cementoenamel junction (CEJ), at one-fourth of the root length from the CEJ, and at mid-root. PTR was calculated using AutoCAD software after outlining the pulp and tooth. All statistical analyses were performed using SPSS 17.0. Linear regression analysis showed that only the PTR in the axial plane at the CEJ had a significant age correlation (r = 0.32; P < 0.05). This is probably because of the clearer demarcation of the pulp and tooth outlines at this level.
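
    A minimal sketch of the regression step is given below, fitting age on PTR by ordinary least squares; the data and coefficients are synthetic, not the study's measurements.

      import numpy as np

      # Sketch of age estimation by linear regression on pulp-to-tooth area ratio (PTR).
      # The data below are synthetic, not the study's measurements.
      rng = np.random.default_rng(0)
      age = rng.uniform(20, 85, size=60)
      ptr = 0.12 - 0.0006 * age + rng.normal(0, 0.01, size=60)  # PTR shrinks with age

      slope, intercept = np.polyfit(ptr, age, 1)     # age = intercept + slope * PTR
      r = np.corrcoef(ptr, age)[0, 1]
      predicted_age = intercept + slope * ptr
      print(f"r = {r:.2f}, example estimate: {predicted_age[0]:.1f} yr")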

  2. The a priori SDR Estimation Techniques with Reduced Speech Distortion for Acoustic Echo and Noise Suppression

    NASA Astrophysics Data System (ADS)

    Thoonsaengngam, Rattapol; Tangsangiumvisai, Nisachon

    This paper proposes an enhanced method for estimating the a priori Signal-to-Disturbance Ratio (SDR) to be employed in the Acoustic Echo and Noise Suppression (AENS) system for full-duplex hands-free communications. The proposed a priori SDR estimation technique is modified based upon the Two-Step Noise Reduction (TSNR) algorithm to suppress the background noise while preserving speech spectral components. In addition, a practical approach to determine accurately the Echo Spectrum Variance (ESV) is presented based upon the linear relationship assumption between the power spectrum of far-end speech and acoustic echo signals. The ESV estimation technique is then employed to alleviate the acoustic echo problem. The performance of the AENS system that employs these two proposed estimation techniques is evaluated through the Echo Attenuation (EA), Noise Attenuation (NA), and two speech distortion measures. Simulation results based upon real speech signals guarantee that our improved AENS system is able to mitigate efficiently the problem of acoustic echo and background noise, while preserving the speech quality and speech intelligibility.

  3. Validation of lignocellulosic biomass carbohydrates determination via acid hydrolysis.

    PubMed

    Zhou, Shengfei; Runge, Troy M

    2014-11-04

    This work studied the two-step acid hydrolysis for determining carbohydrates in lignocellulosic biomass. Estimation of sugar loss based on acid-hydrolyzed sugar standards or on analysis of sugar derivatives was investigated. Four model substrates (starch, holocellulose, filter paper, and cotton) and three levels of acid/material ratios (7.8, 10.3, and 15.4, v/w) were studied to demonstrate the range of test artifacts. The method for carbohydrate estimation based on acid-hydrolyzed sugar standards had the most satisfactory carbohydrate recovery and relative standard deviation. Raw material and the acid/material ratio both had a significant effect on carbohydrate hydrolysis, suggesting that the acid has impacts beyond acting as a catalyst in the hydrolysis. Following optimal procedures, we were able to reach a carbohydrate recovery of 96% with a relative standard deviation of less than 3%. The carbohydrate recovery lower than 100% was likely due to the incomplete hydrolysis of substrates, which was supported by scanning electron microscope (SEM) images. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Harmonised pesticide risk trend indicator for food (HAPERITIF): The methodological approach.

    PubMed

    Calliera, Maura; Finizio, Antonio; Azimonti, Giovanna; Benfenati, Emilio; Trevisan, Marco

    2006-12-01

    To provide a harmonised European approach for pesticide risk indicators, the Sixth EU Framework Programme recently financed the HAIR (HArmonised environmental Indicators for pesticide Risk) project. This paper illustrates the methodology underlying a new indicator-HAPERITIF (HArmonised PEsticide RIsk Trend Indicator for Food), developed in HAIR, for tracking acute and chronic pesticide risk trends for consumers. The acute indicator, HAPERITIF(ac), is based on the ratio between an estimated short-term intake (ESTI), calculated as recommended by the World Health Organisation (WHO), and the acute reference dose (ARfD); the chronic indicator HAPERITIF(chr) is based on the ratio between an estimated daily intake (EDI) and the admissible daily intake (ADI). HAPERITIF can be applied at different levels of aggregation. Each level gives information for proper risk management of pesticides to reduce the risk associated with food consumption. An example of application using realistic scenarios of pesticide treatments on a potato crop in central-northern Italy is reported to illustrate the different steps of HAPERITIF. Copyright 2006 Society of Chemical Industry.

  5. Feasibility test of a solid state spin-scan photo-imaging system

    NASA Technical Reports Server (NTRS)

    Laverty, N. P.

    1973-01-01

    The feasibility of using a solid-state photo-imaging system to obtain resolution imagery from a Pioneer-type spinning spacecraft in future exploratory missions to the outer planets is discussed. Evaluation of the photo-imaging system performance, based on electrical video signal analysis recorded on magnetic tape, shows that the signal-to-noise (S/N) ratios obtained at low spatial frequencies exceed the anticipated performance and that measured modulation transfer functions exhibited some degradation in comparison with the estimated values, primarily owing to the difficulty in obtaining a precise focus of the optical system in the laboratory with the test patterns in close proximity to the objective lens. A preliminary flight model design of the photo-imaging system is developed based on the use of currently available phototransistor arrays. Image quality estimates that will be obtained are presented in terms of S/N ratios and spatial resolution for the various planets and satellites. Parametric design tradeoffs are also defined.

  6. Can we improve C IV-based single epoch black hole mass estimations?

    NASA Astrophysics Data System (ADS)

    Mejía-Restrepo, J. E.; Trakhtenbrot, B.; Lira, P.; Netzer, H.

    2018-05-01

    In large optical surveys at high redshifts (z > 2), the C IV broad emission line is the most practical alternative to estimate the mass (MBH) of active super-massive black holes (SMBHs). However, mass determinations obtained with this line are known to be highly uncertain. In this work we use the Sloan Digital Sky Survey Data Release 7 and 12 quasar catalogues to statistically test three alternative methods put forward in the literature to improve C IV-based MBH estimations. These methods are constructed from correlations between the ratio of the C IV line-width to the low ionization line-widths (Hα, Hβ and Mg II) and several other properties of rest-frame UV emission lines. Our analysis suggests that these correction methods are of limited applicability, mostly because all of them depend on correlations that are driven by the linewidth of the C IV profile itself and not by an interconnection between the linewidth of the C IV line with the linewidth of the low ionization lines. Our results show that optical C IV-based mass estimates at high redshift cannot be a proper replacement for estimates based on IR spectroscopy of low ionization lines like Hα, Hβ and Mg II.

  7. Alternating steady state free precession for estimation of current-induced magnetic flux density: A feasibility study.

    PubMed

    Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok

    2016-05-01

    To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz ) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding nonlinear relation between signal phase and Bz . A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis on the signal-to-noise ratio of Bz was given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method in conductivity estimation. Given all SSFP variants herein, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.

  8. Incidence of childhood pneumonia: facility-based surveillance estimate compared to measured incidence in a South African birth cohort study

    PubMed Central

    le Roux, David M; Myer, Landon; Nicol, Mark P; Zar, Heather J

    2015-01-01

    Background Pneumonia is the leading cause of childhood mortality and a major contributor to childhood morbidity, but accurate measurement of pneumonia incidence is challenging. We compared pneumonia incidence using a facility-based surveillance system to estimates from a cohort study conducted contemporaneously in the same community in Cape Town, South Africa. Methods A surveillance system was developed in six public sector primary care clinics and in a regional referral hospital, to detect childhood pneumonia cases. Nurses recorded all children presenting to facilities who met WHO case definitions of pneumonia, and hospital records were reviewed. Estimates of pneumonia incidence and severity were compared with incidence rates based on active surveillance in the Drakenstein Child Health Study. Results From June 2012 until September 2013, the surveillance system detected 306 pneumonia episodes in children under 1 year of age, an incidence of 0.20 episodes/child-year (e/cy) (95% CI 0.17 to 0.22 e/cy). The incidence in the cohort study from the same period was 0.27 e/cy (95% CI 0.23 to 0.32 e/cy). Pneumonia incidence in the surveillance system was almost 30% lower than in the birth cohort; incidence rate ratio 0.72 (95% CI 0.58 to 0.89). In the surveillance system, 18% were severe pneumonia cases, compared to 23% in the birth cohort, rate ratio 0.81 (95% CI 0.55 to 1.18). Conclusions In this setting, facility-based pneumonia surveillance detected fewer cases of pneumonia, and fewer severe cases, compared to the corresponding cohort study. Facility pneumonia surveillance using data collected by local healthcare workers provides a useful estimate of the epidemiology of childhood pneumonia but may underestimate incidence and severity. PMID:26685027
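
    The incidence and rate-ratio arithmetic underlying the comparison can be sketched as below; the surveillance episode count is taken from the abstract, while the person-time figures and cohort counts are back-calculated, illustrative values rather than the study's data.

      import math

      # Sketch of incidence rate ratio (IRR) with a Wald-type 95% CI on the log scale.
      def incidence_rate_ratio(cases1, person_years1, cases2, person_years2):
          rate1 = cases1 / person_years1
          rate2 = cases2 / person_years2
          irr = rate1 / rate2
          se_log = math.sqrt(1.0 / cases1 + 1.0 / cases2)
          lo = irr * math.exp(-1.96 * se_log)
          hi = irr * math.exp(+1.96 * se_log)
          return irr, lo, hi

      # Surveillance: 306 episodes / ~1530 child-years (0.20 e/cy); cohort rate 0.27 e/cy
      # approximated here as 150 episodes / ~555 child-years (person-time is illustrative).
      print(incidence_rate_ratio(306, 1530, 150, 555))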

  9. Estimating Discharge, Depth and Bottom Friction in Sand Bed Rivers Using Surface Currents and Water Surface Elevation Observations

    NASA Astrophysics Data System (ADS)

    Simeonov, J.; Czapiga, M. J.; Holland, K. T.

    2017-12-01

    We developed an inversion model for river bathymetry estimation using measurements of surface currents, water surface elevation slope and shoreline position. The inversion scheme is based on explicit velocity-depth and velocity-slope relationships derived from the along-channel momentum balance and mass conservation. The velocity-depth relationship requires the discharge value to quantitatively relate the depth to the measured velocity field. The ratio of the discharge and the bottom friction enter as a coefficient in the velocity-slope relationship and is determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. Completing the inversion requires an estimate of the bulk friction, which in the case of sand bed rivers is a strong function of the size of dune bedforms. We explored the accuracy of existing and new empirical closures that relate the bulk roughness to parameters such as the median grain size diameter, ratio of shear velocity to sediment fall velocity or the Froude number. For given roughness parameterization, the inversion solution is determined iteratively since the hydraulic roughness depends on the unknown depth. We first test the new hydraulic roughness parameterization using estimates of the Manning roughness in sand bed rivers based on field measurements. The coupled inversion and roughness model is then tested using in situ and remote sensing measurements of the Kootenai River east of Bonners Ferry, ID.
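
    The paper derives its own velocity-depth and velocity-slope relationships from the along-channel momentum balance; as a hedged stand-in, the sketch below inverts Manning's equation for depth from a measured surface velocity, slope, and roughness, and recovers discharge for a rectangular section. All values are illustrative.

      # Illustrative depth/discharge inversion using Manning's equation as a stand-in
      # for the paper's momentum-balance relation (hypothetical values throughout).
      def depth_from_velocity(u, slope, n_manning):
          """Invert u = (1/n) h^(2/3) S^(1/2) for depth h (wide-channel approximation)."""
          return (n_manning * u / slope**0.5) ** 1.5

      def discharge(u, h, width):
          return u * h * width            # mass conservation for a rectangular section

      u, S, n, w = 1.2, 2.0e-4, 0.030, 150.0   # m/s, slope, Manning n, channel width (m)
      h = depth_from_velocity(u, S, n)
      print(f"depth ~ {h:.1f} m, discharge ~ {discharge(u, h, w):.0f} m^3/s")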

  10. Size matters: relationships between body size and body mass of common coastal, aquatic invertebrates in the Baltic Sea

    PubMed Central

    Austin, Åsa; Bergström, Ulf; Donadi, Serena; Eriksson, Britas D.H.K.; Hansen, Joakim; Sundblad, Göran

    2017-01-01

    Background Organism biomass is one of the most important variables in ecological studies, making biomass estimations one of the most common laboratory tasks. Biomass of small macroinvertebrates is usually estimated as dry mass or ash-free dry mass (hereafter ‘DM’ vs. ‘AFDM’) per sample; a laborious and time consuming process, that often can be speeded up using easily measured and reliable proxy variables like body size or wet (fresh) mass. Another common way of estimating AFDM (one of the most accurate but also time-consuming estimates of biologically active tissue mass) is the use of AFDM/DM ratios as conversion factors. So far, however, these ratios typically ignore the possibility that the relative mass of biologically active vs. non-active support tissue (e.g., protective exoskeleton or shell)—and therefore, also AFDM/DM ratios—may change with body size, as previously shown for taxa like spiders, vertebrates and trees. Methods We collected aquatic, epibenthic macroinvertebrates (>1 mm) in 32 shallow bays along a 360 km stretch of the Swedish coast along the Baltic Sea; one of the largest brackish water bodies on Earth. We then estimated statistical relationships between the body size (length or height in mm), body dry mass and ash-free dry mass for 14 of the most common taxa; five gastropods, three bivalves, three crustaceans and three insect larvae. Finally, we statistically estimated the potential influence of body size on the AFDM/DM ratio per taxon. Results For most taxa, non-linear regression models describing the power relationship between body size and (i) DM and (ii) AFDM fit the data well (as indicated by low SE and high R2). Moreover, for more than half of the taxa studied (including the vast majority of the shelled molluscs), body size had a negative influence on organism AFDM/DM ratios. Discussion The good fit of the modelled power relationships suggests that the constants reported here can be used to quickly estimate organism dry- and ash-free dry mass based on body size, thereby freeing up considerable work resources. However, the considerable differences in constants between taxa emphasize the need for taxon-specific relationships, and the potential dangers associated with ignoring body size. The negative influence of body size on the AFDM/DM ratio found in a majority of the molluscs could be caused by increasingly thicker shells with organism age, and/or spawning-induced loss of biologically active tissue in adults. Consequently, future studies utilizing AFDM/DM (and presumably also AFDM/wet mass) ratios should carefully assess the potential influence of body size to ensure more reliable estimates of organism body mass. PMID:28149685
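
    A minimal sketch of fitting the reported power relationship DM = a * L^b by ordinary least squares in log-log space is shown below; the lengths, masses, and constants are synthetic, not the taxon-specific constants of the study.

      import numpy as np

      # Sketch: fit the power relationship DM = a * L**b in log-log space.
      # Synthetic shell lengths (mm) and dry masses (mg); not the study's data.
      rng = np.random.default_rng(1)
      length = rng.uniform(1, 20, size=80)
      dry_mass = 0.05 * length**2.8 * rng.lognormal(0.0, 0.1, size=80)

      b, log_a = np.polyfit(np.log(length), np.log(dry_mass), 1)
      a = np.exp(log_a)
      print(f"DM ~ {a:.3f} * L^{b:.2f}")      # should recover roughly a = 0.05, b = 2.8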

  11. Comparing GOSAT Observations of Localized CO2 Enhancements by Large Emitters with Inventory-Based Estimates

    NASA Technical Reports Server (NTRS)

    Janardanan, Rajesh; Maksyutov, Shamil; Oda, Tomohiro; Saito, Makoto; Kaiser, Johannes W.; Ganshin, Alexander; Stohl, Andreas; Matsunaga, Tsuneo; Yoshida, Yukio; Yokota, Tatsuya

    2016-01-01

    We employed an atmospheric transport model to attribute column-averaged CO2 mixing ratios (XCO2) observed by Greenhouse gases Observing SATellite (GOSAT) to emissions due to large sources such as megacities and power plants. XCO2 enhancements estimated from observations were compared to model simulations implemented at the spatial resolution of the satellite observation footprint (0.1deg × 0.1deg). We found that the simulated XCO2 enhancements agree with the observed over several continental regions across the globe, for example, for North America with an observation to simulation ratio of 1.05 +/- 0.38 (p<0.1), but with a larger ratio over East Asia (1.22 +/- 0.32; p<0.05). The obtained observation-model discrepancy (22%) for East Asia is comparable to the uncertainties in Chinese emission inventories (approx.15%) suggested by recent reports. Our results suggest that by increasing the number of observations around emission sources, satellite instruments like GOSAT can provide a tool for detecting biases in reported emission inventories.

  12. Anomaly Monitoring Method for Key Components of Satellite

    PubMed Central

    Fan, Linjun; Xiao, Weidong; Tang, Jun

    2014-01-01

    This paper presented a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of a failure analysis of lithium-ion batteries (LIBs), we divided the failure of LIBs into internal failure, external failure, and thermal runaway and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters for state estimation. Then, using the actual in-orbit telemetry data of the key parameters of LIBs, we obtained the actual residual value (RX) and the healthy residual value (RL) of LIBs based on the state estimation of MSET, and then, using these residual values (RX and RL), we detected anomaly states based on the anomaly detection of SPRT. Lastly, we conducted an example of AMM for LIBs and, according to the results, validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detection method (TDM). PMID:24587703
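
    The SPRT stage can be sketched as below for Gaussian residuals, testing a healthy zero-mean hypothesis against a shifted (degraded) mean, with thresholds from Wald's approximations; the parameter values are invented and the MSET state-estimation step is not reproduced.

      import math

      # Sketch of a Wald SPRT on Gaussian residuals: H0 mean 0 (healthy) vs H1 mean mu1
      # (degraded). Thresholds use Wald's approximations; all numbers are illustrative.
      def sprt(residuals, mu1, sigma, alpha=0.01, beta=0.01):
          upper = math.log((1.0 - beta) / alpha)      # accept H1 (anomaly) above this
          lower = math.log(beta / (1.0 - alpha))      # accept H0 (healthy) below this
          llr = 0.0
          for k, r in enumerate(residuals, start=1):
              llr += (mu1 / sigma**2) * (r - mu1 / 2.0)   # Gaussian log-likelihood ratio
              if llr >= upper:
                  return "anomaly", k
              if llr <= lower:
                  return "healthy", k
          return "undecided", len(residuals)

      print(sprt([0.1, 0.4, 0.6, 0.5, 0.7, 0.8], mu1=0.5, sigma=0.2))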

  13. Adjustments of the Pesticide Risk Index Used in Environmental Policy in Flanders

    PubMed Central

    Fevery, Davina; Peeters, Bob; Lenders, Sonia; Spanoghe, Pieter

    2015-01-01

    Indicators are used to quantify the pressure of pesticides on the environment. Pesticide risk indicators typically require weighting environmental exposure by a no effect concentration. An indicator based on spread equivalents (ΣSeq) is used in environmental policy in Flanders (Belgium). The pesticide risk for aquatic life is estimated by weighting active ingredient usage by the ratio of their maximum allowable concentration and their soil halflife. Accurate estimates of total pesticide usage in the region are essential in such calculations. Up to 2012, the environmental impact of pesticides was estimated on sales figures provided by the Federal Government. Since 2013, pesticide use is calculated based on results from the Farm Accountancy Data Network (FADN). The estimation of pesticide use was supplemented with data for non-agricultural use based on sales figures of amateur use provided by industry and data obtained from public services. The Seq-indicator was modified to better reflect reality. This method was applied for the period 2009-2012 and showed differences between estimated use and sales figures of pesticides. The estimated use of pesticides based on accountancy data is more accurate compared to sales figures. This approach resulted in a better view on pesticide use and its respective environmental impact in Flanders. PMID:26046655

  14. Adjustments of the Pesticide Risk Index Used in Environmental Policy in Flanders.

    PubMed

    Fevery, Davina; Peeters, Bob; Lenders, Sonia; Spanoghe, Pieter

    2015-01-01

    Indicators are used to quantify the pressure of pesticides on the environment. Pesticide risk indicators typically require weighting environmental exposure by a no effect concentration. An indicator based on spread equivalents (ΣSeq) is used in environmental policy in Flanders (Belgium). The pesticide risk for aquatic life is estimated by weighting active ingredient usage by the ratio of their maximum allowable concentration and their soil halflife. Accurate estimates of total pesticide usage in the region are essential in such calculations. Up to 2012, the environmental impact of pesticides was estimated on sales figures provided by the Federal Government. Since 2013, pesticide use is calculated based on results from the Farm Accountancy Data Network (FADN). The estimation of pesticide use was supplemented with data for non-agricultural use based on sales figures of amateur use provided by industry and data obtained from public services. The Seq-indicator was modified to better reflect reality. This method was applied for the period 2009-2012 and showed differences between estimated use and sales figures of pesticides. The estimated use of pesticides based on accountancy data is more accurate compared to sales figures. This approach resulted in a better view on pesticide use and its respective environmental impact in Flanders.

  15. Modal Damping Ratio and Optimal Elastic Moduli of Human Body Segments for Anthropometric Vibratory Model of Standing Subjects.

    PubMed

    Gupta, Manoj; Gupta, T C

    2017-10-01

    The present study aims to accurately estimate inertial, physical, and dynamic parameters of human body vibratory model consistent with physical structure of the human body that also replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, elastic modulus of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing stiffness of spring elements, calculated from a detailed scheme, and values available in literature for same. These values are further optimized by minimizing difference between theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from experimental transmissibility response using two dominant peaks in the frequency range of 0-25 Hz. From comparison between dynamic response determined form modal analysis and experimental results, a set of elastic moduli for different segments of human body and a novel scheme to determine modal damping ratios from TR plots, are established. Acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for 50th percentile U.S. male, except at very low frequencies, establishes the human body model developed. Also, reasonable agreement obtained between theoretical response curve and experimental response envelop for average Indian male, affirms the technique used for constructing vibratory model of a standing person. Present work attempts to develop effective technique for constructing subject specific damped vibratory model based on its physical measurements.

  16. Estimation of the proteomic cancer co-expression sub networks by using association estimators.

    PubMed

    Erdoğan, Cihat; Kurt, Zeyneb; Diri, Banu

    2017-01-01

    In this study, the association estimators, which strongly influence gene network inference methods and are used for determining molecular interactions, were examined within the co-expression network inference concept. By using proteomic data from five different cancer types, the hub genes/proteins within the disease-associated gene-gene/protein-protein interaction sub-networks were identified. Proteomic data from the various cancer types were collected from The Cancer Proteome Atlas (TCPA). Nine correlation- and mutual information (MI)-based association estimators that are commonly used in the literature were compared in this study. As the gold standard to measure the association estimators' performance, a multi-layer data integration platform for gene-disease associations (DisGeNET) and the Molecular Signatures Database (MSigDB) were used. Fisher's exact test was used to evaluate the performance of the association estimators by comparing the created co-expression networks with the disease-associated pathways. It was observed that the MI-based estimators provided more successful results than the Pearson and Spearman correlation approaches, which are used in the estimation of biological networks in the weighted correlation network analysis (WGCNA) package. In the correlation-based methods, the best average success rate across the five cancer types was 60%, while in the MI-based methods the average success rate was 71% for the James-Stein shrinkage (Shrink) estimator and 64% for the Schurmann-Grassberger (SG) estimator. Moreover, the hub genes and the inferred sub-networks are presented for the consideration of researchers and experimentalists.
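
    The contrast between correlation and mutual information can be sketched as below, using a naive equal-width histogram MI estimate rather than the shrinkage or Schurmann-Grassberger estimators evaluated in the study; the data are synthetic.

      import numpy as np

      # Sketch: naive histogram-based mutual information vs. Pearson correlation for
      # two expression profiles (equal-width binning; not the shrinkage/SG estimators).
      def mutual_information(x, y, bins=8):
          joint, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      rng = np.random.default_rng(2)
      x = rng.normal(size=500)
      y = np.sin(2.0 * x) + rng.normal(0, 0.2, size=500)   # nonlinear dependence
      print("Pearson r:", np.corrcoef(x, y)[0, 1], " MI (nats):", mutual_information(x, y))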

  17. Swept sine testing of rotor-bearing system for damping estimation

    NASA Astrophysics Data System (ADS)

    Chandra, N. Harish; Sekhar, A. S.

    2014-01-01

    Many types of rotating components commonly operate above the first or second critical speed and are subjected to frequent run-ups and shutdowns. The present study focuses on developing FRFs of rotor-bearing systems for damping estimation from swept-sine excitation. The principle of active vibration control states that, with an increase in angular acceleration, the amplitude of vibration due to unbalance will reduce and the FRF envelope will shift towards the right (higher frequency). Tracking filters or Co-Quad analyzers have been shown to introduce errors into the estimated frequency response function (FRF). Using the Fast Fourier Transform (FFT) algorithm and stationary wavelet transform (SWT) decomposition, this FRF distortion can be reduced. To obtain theoretical clarity, the shifting of the FRF envelope is incorporated into conventional FRF expressions and validated against the FRF estimated using the Fourier transform approach. The half-power bandwidth method is employed to extract damping ratios from the FRF estimates. While deriving half-power points for both types of responses (acceleration and displacement), the damping ratio (ζ) is estimated with different approximations: the classical definition (neglecting damping ratio terms of order higher than 2), a third-order approximation (neglecting terms of order higher than 4), and the exact expression (no assumptions on the damping ratio). The use of the stationary wavelet transform to denoise noise-corrupted FRF data is explained. Finally, experiments are performed on a test rotor excited with different sweep rates to estimate the damping ratio.
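
    The lowest-order (classical) half-power bandwidth estimate mentioned above, zeta roughly (f2 - f1)/(2 fn), can be sketched as follows on a synthetic single-degree-of-freedom FRF; it is an illustration of the definition, not the paper's swept-sine procedure.

      import numpy as np

      # Sketch: classical half-power bandwidth damping estimate zeta ~ (f2 - f1)/(2*fn)
      # applied to a synthetic single-DOF receptance FRF (illustrative only).
      fn_true, zeta_true = 40.0, 0.02
      f = np.linspace(1.0, 100.0, 20000)
      r = f / fn_true
      H = 1.0 / np.sqrt((1.0 - r**2)**2 + (2.0 * zeta_true * r)**2)   # |FRF| magnitude

      i_peak = int(np.argmax(H))
      half_power = H[i_peak] / np.sqrt(2.0)
      above = np.where(H >= half_power)[0]          # indices around the resonance peak
      f1, f2 = f[above[0]], f[above[-1]]
      zeta_est = (f2 - f1) / (2.0 * f[i_peak])
      print(f"estimated zeta = {zeta_est:.4f} (true {zeta_true})")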

  18. Evapotranspiration and microclimate at a low-level radioactive-waste disposal site in northwestern Illinois

    USGS Publications Warehouse

    Healy, R.W.; DeVries, M.P.; Sturrock, Alex M.

    1989-01-01

    From July 1982 through June 1984, a study was made of the evapotranspiration and microclimate at a low-level radioactive-waste disposal site near Sheffield, Bureau County, Illinois. Vegetation at the site consists of mixed pasture grasses, primarily awnless brome (Bromus inermis) and red clover (Trifolium pratense). Three methods were used to estimate evapotranspiration: (1) an energy budget with the Bowen ratio, (2) an aerodynamic profile, and (3) a soil-based water budget. For the aerodynamic-profile method, sensible-heat flux was estimated by a profile equation and evapotranspiration was then calculated as the residual in the energy-balance equation. Estimates by the energy-budget and aerodynamic-profile methods were computed from hourly data and then summed by days and months. Yearly estimates (for March through November) by these methods were in close agreement: 648 and 626 millimeters, respectively. Daily estimates reach a maximum of about 6 millimeters. The water-budget method produced only monthly estimates based on weekly or biweekly soil-moisture content measurements. The yearly evapotranspiration estimated by this method (which actually included only the months of April through October) was 655 millimeters. The March-through-November average for the three methods of 657 millimeters was equivalent to 70 percent of total precipitation. Continuous measurements were made of incoming and reflected shortwave radiation, incoming and emitted longwave radiation, net radiation, soil-heat flux, soil temperature, horizontal windspeed, and wet- and dry-bulb air temperature. Windspeed and air temperature were measured at heights of 0.5 and 2.0 meters (and also at 1.0 meter after September 1983). Soil-moisture content of the soil zone was measured with a gamma-attenuation gage. Annual precipitation (938 millimeters) and average temperature (10.8 degrees Celsius) at the Sheffield site were virtually identical to long-term averages from nearby National Weather Service stations. Solar radiation averaged 65 percent of that normally expected under clear skies. Net radiation averaged 70.1 watts per square meter and was highest in July and negative during some winter months. Wind direction varied but was predominantly south-southeasterly. Wind speed at the 2-meter height averaged 3.5 meters per second and was slightly higher in winter months than during the rest of the year. The amount of water stored within the soil zone was greatest in early spring and least in late summer. Seasonal and diurnal trends of evapotranspiration rates mirrored those of net radiation; July was usually the month with the highest evapotranspiration rate. The ratio of sensible- to latent-heat fluxes (commonly called the Bowen ratio) for the 2-year study period was 0.38, as averaged from the three methods. Monthly Bowen ratios fluctuated somewhat but averaged about 0.35 for late spring through summer. In fall, the ratio declined to zero or to slightly negative values. When the ratio was negative, the latent-heat flux was slightly greater than the net radiation because of additional energy supplied by the cooling soil and air. Evapotranspiration calculated by the three methods averaged 75 percent of potential evapotranspiration, as estimated by the Penman equation. There was no apparent seasonal trend in the relation between actual and potential evapotranspiration rates.
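
    The Bowen-ratio energy-budget step can be sketched as below: with the Bowen ratio estimated from vertical temperature and vapour-pressure gradients, the latent-heat flux is (net radiation minus soil-heat flux) divided by (1 + Bowen ratio), and can be converted to an equivalent evaporation depth. The numbers are illustrative, not the Sheffield site data.

      # Sketch of the Bowen-ratio energy-budget method (illustrative values only).
      GAMMA = 0.066      # psychrometric constant, kPa/K (near sea level)
      LV = 2.45e6        # latent heat of vaporization, J/kg

      def bowen_ratio(dT, de):
          """beta = gamma * dT/de from gradients between two measurement heights."""
          return GAMMA * dT / de

      def latent_heat_flux(rn, g, beta):
          return (rn - g) / (1.0 + beta)           # W/m^2

      beta = bowen_ratio(dT=0.4, de=0.08)           # roughly 0.33
      le = latent_heat_flux(rn=140.0, g=10.0, beta=beta)
      et_mm_per_day = le / LV * 86400.0             # kg/m^2/s -> mm/day
      print(f"beta = {beta:.2f}, LE = {le:.0f} W/m^2, ET ~ {et_mm_per_day:.1f} mm/day")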

  19. Genetic variation assessed with microsatellites in mass selection lines of the Pacific oyster ( Crassostrea gigas) in China

    NASA Astrophysics Data System (ADS)

    Wang, Xubo; Li, Qi; Yu, Hong; Kong, Lingfeng

    2016-12-01

    Four successive mass selection lines of the Pacific oyster, Crassostrea gigas, selected for faster growth in breeding programs in China were examined at ten polymorphic microsatellite loci to assess the level of allelic diversity and estimate the effective population size. These data were compared with those of their base population. The results showed that the genetic variation of the four generations were maintained at high levels with an average allelic richness of 18.8-20.6, and a mean expected heterozygosity of 0.902-0.921. They were not reduced compared with those of their base population. Estimated effective population sizes based on temporal variances in microsatellite frequencies were smaller to that of sex ratio-corrected broodstock count estimates. Using a relatively large number of broodstock and keeping an equal sex ratio in the broodstock each generation may have contributed to retaining the original genetic diversity and maintaining relatively large effective population size. The results obtained in this study showed that the genetic variation was not affected greatly by mass selection progress and high genetic variation still existed in the mass selection lines, suggesting that there is still potential for increasing the gains in future generations of C. gigas. The present study provided important information for future genetic improvement by selective breeding, and for the design of suitable management guidelines for genetic breeding of C. gigas.

  20. Estimation of Fine and Oversize Particle Ratio in a Heterogeneous Compound with Acoustic Emissions.

    PubMed

    Nsugbe, Ejay; Ruiz-Carcel, Cristobal; Starr, Andrew; Jennions, Ian

    2018-03-13

    The final phase of powder production typically involves a mixing process where all of the particles are combined and agglomerated with a binder to form a single compound. The traditional means of inspecting the physical properties of the final product involves an inspection of the particle sizes using an offline sieving and weighing process. The main downside of this technique, in addition to being an offline-only measurement procedure, is its inability to characterise large agglomerates of powders due to sieve blockage. This work assesses the feasibility of a real-time monitoring approach using a benchtop test rig and a prototype acoustic-based measurement approach to provide information that can be correlated to product quality and provide the opportunity for future process optimisation. Acoustic emission (AE) was chosen as the sensing method due to its low cost, simple setup process, and ease of implementation. The performance of the proposed method was assessed in a series of experiments where the offline quality check results were compared to the AE-based real-time estimations using data acquired from a benchtop powder free flow rig. A designed time domain based signal processing method was used to extract particle size information from the acquired AE signal and the results show that this technique is capable of estimating the required ratio in the washing powder compound with an average absolute error of 6%.

  1. Dust release rates and dust-to-gas mass ratios of eight comets

    NASA Technical Reports Server (NTRS)

    Singh, P. D.; De Almeida, A. A.; Huebner, W. F.

    1992-01-01

    Mass release rates of dust and mass ratios of dust-to-gas release rates of Comets Thiele (1985m), Wilson (1986l), P/Borrelly (1987p), Liller (1988a), Bradfield (1987s), Hartley-Good (1985l), P/Giacobini-Zinner (1984e), and P/Halley (1982i) are estimated from the analysis of continuum flux measurements at optical wavelengths. An attempt is made to estimate the size of each comet nucleus on the basis of water-ice sublimation (vaporization), assuming that the nucleus is spherical and only a fraction of its surface area is active. Where possible, the dust mass release rates are compared with those obtained by other investigators in the optical and IR wavelength regions. Good agreement with results based on IR observations is found.

  2. Land use/land cover mapping (1:25000) of Taiwan, Republic of China by automated multispectral interpretation of LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Sung, Q. C.; Miller, L. D.

    1977-01-01

    Three methods were tested for collecting the training sets needed to establish the spectral signatures of the land uses/land covers sought, owing to the difficulties of retrospective collection of representative ground control data. Computer preprocessing techniques applied to the digital images to improve the final classification results were geometric corrections, spectral band or image ratioing, and statistical cleaning of the representative training sets. A minimal level of statistical verification was made based upon comparisons between the airphoto estimates and the classification results. The verification provided further support for the selection of MSS bands 5 and 7. It also indicated that the maximum likelihood ratioing technique achieved classification results in better agreement with the airphoto estimates than did the stepwise discriminant analysis.

  3. Age estimation of living Indian individuals based on aspartic acid racemization from tooth biopsy specimen

    PubMed Central

    Rastogi, Manu; Logani, Ajay; Shah, Naseem; Kumar, Abhishek; Arora, Saurabh

    2017-01-01

    Background: Age estimation in living individuals is imperative to amicably settle civil and criminal disputes. A biochemical method based on amino acid racemization was evaluated for age estimation of living Indian individuals. Design: Caries-free maxillary/mandibular premolar teeth (n = 90) were collected from participants with age proof documents and divided into predefined nine age groups. Materials and Methods: Dentine biopsy from the labial aspect of the tooth crown was taken with an indigenously developed microtrephine. The samples were processed and subjected to gas chromatography. Dextrorotatory:levorotatory ratios were calculated, and a regression equation was formulated. Results: Across all age groups, an error of 0 ± 4 years between protein racemization age and chronological age was observed. Conclusion: Aspartic acid racemization from dentine biopsy samples could be a viable and accurate technique for age estimation of living individuals who have attained a state of skeletal maturity. PMID:29263613

  4. A quantum framework for likelihood ratios

    NASA Astrophysics Data System (ADS)

    Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.

    The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.

  5. Signal-to-noise ratio estimation in digital computer simulation of lowpass and bandpass systems with applications to analog and digital communications, volume 3

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.; Turner, M. D.

    1977-01-01

    Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
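
    A minimal sketch of the idea is given below: align the simulated output with its reference (delay and gain), then form the signal-to-noise ratio as fitted-signal power over residual power. The signals are synthetic and this is an illustration of the concept, not the report's estimator.

      import numpy as np

      # Sketch: estimate delay, gain and signal-to-noise ratio of a simulated channel
      # output against its reference input (synthetic signals; illustrative only).
      rng = np.random.default_rng(3)
      ref = rng.normal(size=4096)
      delay_true, gain_true = 25, 0.8
      out = gain_true * np.roll(ref, delay_true) + rng.normal(0, 0.2, size=ref.size)

      # Delay from the peak of the circular cross-correlation (via FFT).
      xcorr = np.fft.ifft(np.fft.fft(out) * np.conj(np.fft.fft(ref))).real
      delay = int(np.argmax(xcorr))
      aligned = np.roll(ref, delay)

      gain = np.dot(out, aligned) / np.dot(aligned, aligned)   # least-squares gain
      noise = out - gain * aligned
      snr_db = 10.0 * np.log10(np.sum((gain * aligned)**2) / np.sum(noise**2))
      print(f"delay = {delay}, gain = {gain:.2f}, SNR = {snr_db:.1f} dB")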

  6. Stratospheric lifetime ratio of CFC-11 and CFC-12 from satellite and model climatologies

    NASA Astrophysics Data System (ADS)

    Hoffmann, L.; Hoppe, C. M.; Müller, R.; Dutton, G. S.; Gille, J. C.; Griessbach, S.; Jones, A.; Meyer, C. I.; Spang, R.; Volk, C. M.; Walker, K. A.

    2014-06-01

    Chlorofluorocarbons (CFCs) play a key role in stratospheric ozone loss and are strong infrared absorbers that contribute to global warming. The stratospheric lifetimes of CFCs are a measure of their global loss rates that are needed to determine global warming and ozone depletion potentials. We applied the tracer-tracer correlation approach to zonal mean climatologies from satellite measurements and model data to assess the lifetimes of CFCl3 (CFC-11) and CF2Cl2 (CFC-12). We present estimates of the CFC-11/CFC-12 lifetime ratio and the absolute lifetime of CFC-12, based on a reference lifetime of 52 yr for CFC-11. We analyzed climatologies from three satellite missions, the Atmospheric Chemistry Experiment-Fourier Transform Spectrometer (ACE-FTS), the HIgh Resolution Dynamics Limb Sounder (HIRDLS), and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). We found a CFC-11/CFC-12 lifetime ratio of 0.47±0.08 and a CFC-12 lifetime of 111(96-132) yr for ACE-FTS, a ratio of 0.46±0.07 and a lifetime of 112(97-133) yr for HIRDLS, and a ratio of 0.46±0.08 and a lifetime of 112(96-135) yr for MIPAS. The error-weighted, combined CFC-11/CFC-12 lifetime ratio is 0.47±0.04 and the CFC-12 lifetime estimate is 112(102-123) yr. These results agree with the recent Stratosphere-troposphere Processes And their Role in Climate (SPARC) reassessment, which recommends lifetimes of 52(43-67) yr and 102(88-122) yr, respectively. Having smaller uncertainties than the results from other recent studies, our estimates can help to better constrain CFC-11 and CFC-12 lifetime recommendations in future scientific studies and assessments. Furthermore, the satellite observations were used to validate first simulation results from a new coupled model system, which integrates a Lagrangian chemistry transport model into a climate model. For the coupled model we found a CFC-11/CFC-12 lifetime ratio of 0.48±0.07 and a CFC-12 lifetime of 110(95-129) yr, based on a ten-year perpetual run. Closely reproducing the satellite observations, the new model system will likely become a useful tool to assess the impact of advective transport, mixing, and photochemistry as well as climatological variability on the stratospheric lifetimes of long-lived tracers.

  7. Stratospheric lifetime ratio of CFC-11 and CFC-12 from satellite and model climatologies

    NASA Astrophysics Data System (ADS)

    Hoffmann, L.; Hoppe, C. M.; Müller, R.; Dutton, G. S.; Gille, J. C.; Griessbach, S.; Jones, A.; Meyer, C. I.; Spang, R.; Volk, C. M.; Walker, K. A.

    2014-11-01

    Chlorofluorocarbons (CFCs) play a key role in stratospheric ozone loss and are strong infrared absorbers that contribute to global warming. The stratospheric lifetimes of CFCs are a measure of their stratospheric loss rates that are needed to determine global warming and ozone depletion potentials. We applied the tracer-tracer correlation approach to zonal mean climatologies from satellite measurements and model data to assess the lifetimes of CFCl3 (CFC-11) and CF2Cl2 (CFC-12). We present estimates of the CFC-11/CFC-12 lifetime ratio and the absolute lifetime of CFC-12, based on a reference lifetime of 52 years for CFC-11. We analyzed climatologies from three satellite missions, the Atmospheric Chemistry Experiment-Fourier Transform Spectrometer (ACE-FTS), the HIgh Resolution Dynamics Limb Sounder (HIRDLS), and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). We found a CFC-11/CFC-12 lifetime ratio of 0.47±0.08 and a CFC-12 lifetime of 112(96-133) years for ACE-FTS, a ratio of 0.46±0.07 and a lifetime of 113(97-134) years for HIRDLS, and a ratio of 0.46±0.08 and a lifetime of 114(98-136) years for MIPAS. The error-weighted, combined CFC-11/CFC-12 lifetime ratio is 0.46±0.04 and the CFC-12 lifetime estimate is 113(103-124) years. These results agree with the recent Stratosphere-troposphere Processes And their Role in Climate (SPARC) reassessment, which recommends lifetimes of 52(43-67) years and 102(88-122) years, respectively. Having smaller uncertainties than the results from other recent studies, our estimates can help to better constrain CFC-11 and CFC-12 lifetime recommendations in future scientific studies and assessments. Furthermore, the satellite observations were used to validate first simulation results from a new coupled model system, which integrates a Lagrangian chemistry transport model into a climate model. For the coupled model we found a CFC-11/CFC-12 lifetime ratio of 0.48±0.07 and a CFC-12 lifetime of 110(95-129) years, based on a 10-year perpetual run. Closely reproducing the satellite observations, the new model system will likely become a useful tool to assess the impact of advective transport, mixing, and photochemistry as well as climatological variability on the stratospheric lifetimes of long-lived tracers.

  8. Comparison of eating quality and physicochemical properties between Japanese and Chinese rice cultivars.

    PubMed

    Nakamura, Sumiko; Cui, Jing; Zhang, Xin; Yang, Fan; Xu, Ximing; Sheng, Hua; Ohtsubo, Ken'ichi

    2016-12-01

    In this study, we evaluated 16 Japanese and Chinese rice cultivars in terms of their main chemical components, iodine absorption curve, apparent amylose content (AAC), pasting properties, resistant starch content, physical properties, sodium dodecyl sulfate-polyacrylamide gel electrophoresis analysis, and enzyme activity. Based on these quality evaluations, we concluded that Chinese rice cultivars are characterized by high protein content and that the cooked grain texture shows high hardness and low stickiness. In a previous study, we developed a novel formula for estimating AAC based on the iodine absorption curve. The validation test showed a determination coefficient of 0.996 for estimating the AAC of Chinese rice cultivars as unknown samples. In the present study, we developed a novel formula for estimating the balance degree of the surface layer of cooked rice (A3/A1: a ratio of the workload of stickiness to that of hardness) based on the iodine absorption curve obtained using milled rice.

  9. A time and frequency synchronization method for CO-OFDM based on CMA equalizers

    NASA Astrophysics Data System (ADS)

    Ren, Kaixuan; Li, Xiang; Huang, Tianye; Cheng, Zhuo; Chen, Bingwei; Wu, Xu; Fu, Songnian; Ping, Perry Shum

    2018-06-01

    In this paper, an efficient time and frequency synchronization method based on a new training symbol structure is proposed for polarization division multiplexing (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The coarse timing synchronization is achieved by exploiting the correlation property of the first training symbol, and the fine timing synchronization is accomplished by using the time-domain symmetric conjugate of the second training symbol. Furthermore, based on these training symbols, a constant modulus algorithm (CMA) is proposed for carrier frequency offset (CFO) estimation. Theoretical analysis and simulation results indicate that the algorithm has the advantages of robustness to poor optical signal-to-noise ratio (OSNR) and chromatic dispersion (CD). The frequency offset estimation range can achieve [-(Nsc/2)ΔfN, +(Nsc/2)ΔfN] GHz with the mean normalized estimation error below 12 × 10⁻³ even under the condition of OSNR as low as 10 dB.

  10. Genetic variability in ABCB1, occupational pesticide exposure, and Parkinson's disease.

    PubMed

    Narayan, Shilpa; Sinsheimer, Janet S; Paul, Kimberly C; Liew, Zeyan; Cockburn, Myles; Bronstein, Jeff M; Ritz, Beate

    2015-11-01

    Studies have suggested that variants in the ABCB1 gene encoding P-glycoprotein, a xenobiotic transporter, may increase susceptibility to pesticide exposures linked to Parkinson's disease (PD) risk. Our objective was to investigate the joint impact of two ABCB1 polymorphisms and pesticide exposures on PD risk. In a population-based case-control study, we genotyped ABCB1 gene variants at rs1045642 (c.3435C/T) and rs2032582 (c.2677G/T/A) and assessed occupational exposures to organochlorine (OC) and organophosphorus (OP) pesticides based on self-reported occupational use and record-based ambient workplace exposures for 282 PD cases and 514 controls of European ancestry. We identified active ingredients in self-reported occupationally used pesticides from a California database and estimated ambient workplace exposures between 1974 and 1999 using a geographic information system together with state pesticide-use and land-use records. With unconditional logistic regression, we estimated marginal and joint contributions of occupational pesticide exposures and ABCB1 variants to PD. For occupationally exposed carriers of homozygous ABCB1 variant genotypes, we estimated odds ratios of 1.89 [95% confidence interval (CI): (0.87, 4.07)] to 3.71 [95% CI: (1.96, 7.02)], with the highest odds ratios estimated for occupationally exposed carriers of homozygous ABCB1 variant genotypes at both SNPs; however, we found no multiplicative-scale interactions. This study lends support to a previous report that commonly used pesticides, specifically OCs and OPs, and variant ABCB1 genotypes at two polymorphic sites jointly increase the risk of PD. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Effect of Aspect Ratio on the Low-Speed Lateral Control Characteristics of Untapered Low-Aspect-Ratio Wings Equipped with Flap and with Retractable Ailerons

    NASA Technical Reports Server (NTRS)

    Fischel, Jack; Naeseth, Rodger L; Hagerman, John R; O'Hare, William M

    1952-01-01

    A low-speed wind-tunnel investigation was made to determine the lateral control characteristics of a series of untapered low-aspect-ratio wings. Sealed flap ailerons of various spans and spanwise locations were investigated on unswept wings of aspect ratios 1.13, 2.13, 4.13, and 6.13; and various projections of 0.60-semispan retractable ailerons were investigated on the unsweptback wings of aspect ratios 1.13, 2.13, and 4.13 and on a 45 degree sweptback wing. The retractable ailerons investigated on the unswept wings spanned the outboard stations of each wing; whereas the plain and stepped retractable ailerons investigated on the sweptback wing were located at various spanwise stations. Design charts based on experimental results are presented for estimating the flap aileron effectiveness for low-aspect-ratio, untapered, unswept wings.

  12. Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F Landis

    2014-01-01

    This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
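    As a rough illustration of the sequential-test idea (not the authors' specific compound formulation), the sketch below applies classical Wald SPRT thresholds to a running likelihood ratio; the tolerances and the per-update likelihood ratios are placeholder values.

```python
import math

def wald_sprt(likelihood_ratios, alpha=0.001, beta=0.05):
    """Classical Wald SPRT on a stream of per-update likelihood ratios
    (P[data | collision] / P[data | no collision]).  alpha is the tolerated
    false-alarm rate and beta the tolerated missed-detection rate; both are
    placeholders, not the tolerances used in the paper."""
    upper = math.log((1.0 - beta) / alpha)   # decide "mitigate" above this
    lower = math.log(beta / (1.0 - alpha))   # decide "no action" below this
    log_lr = 0.0
    for i, lr in enumerate(likelihood_ratios, start=1):
        log_lr += math.log(lr)
        if log_lr >= upper:
            return f"decide: mitigate (update {i})"
        if log_lr <= lower:
            return f"decide: no action (update {i})"
    return "continue sampling"

# Hypothetical likelihood ratios from successive conjunction-assessment updates
print(wald_sprt([1.5, 2.0, 3.5, 5.0, 8.0]))
```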

  13. Fugacity ratio estimations for high-melting rigid aromatic compounds.

    PubMed

    Van Noort, Paul C M

    2004-07-01

    Prediction of the environmental fate of organic compounds requires knowledge of their tendency to stay in the gas and water phase. Vapor pressure and aqueous solubility are commonly used descriptors for these processes. Depending on the type of distribution process, values for either the pure solid state or the (subcooled) liquid state have to be used. Values for the (subcooled) liquid state can be calculated from those for the solid state, and vice versa, using the fugacity ratio. Fugacity ratios are usually calculated from the entropy of fusion and the melting point. For polycyclic aromatic hydrocarbons, chlorobenzenes, chlorodibenzofurans, and chlorodibenzo-p-dioxins, fugacity ratios calculated using experimental entropies of fusion were systematically less than those obtained from a thermodynamically more rigorous approach using heat capacity data. The deviation was more than 1 order of magnitude at the highest melting point. The use of a universal value for the entropy of fusion of 56 J/(mol·K) resulted in either over- or underestimation by up to more than 1 order of magnitude. A simple correction factor, based on the melting point only, was derived. This correction factor allowed the fugacity ratios to be estimated from experimental entropies of fusion and melting point with an accuracy better than 0.1-0.2 log units. Copyright 2004 Elsevier Ltd.
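    For readers unfamiliar with the quantity, the fugacity ratio is conventionally approximated from the entropy of fusion and the melting point as ln F = -ΔS_fus(T_m - T)/(RT), neglecting heat-capacity terms; a minimal sketch of that textbook relation (the paper's melting-point-based correction factor is not reproduced here):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def fugacity_ratio(delta_s_fus, t_melt, t=298.15):
    """Textbook solid/(subcooled) liquid fugacity ratio from the entropy of
    fusion (J/mol/K) and melting point (K), ignoring heat-capacity terms --
    the simplification the paper shows to be biased for high-melting rigid
    aromatics."""
    return math.exp(-delta_s_fus * (t_melt - t) / (R * t))

# Hypothetical compound melting at 450 K
print(fugacity_ratio(delta_s_fus=56.0, t_melt=450.0))  # "universal" 56 J/(mol*K)
print(fugacity_ratio(delta_s_fus=48.0, t_melt=450.0))  # illustrative experimental value
```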

  14. Efficient, nonlinear phase estimation with the nonmodulated pyramid wavefront sensor

    NASA Astrophysics Data System (ADS)

    Frazin, Richard A.

    2018-04-01

    The sensitivity of the pyramid wavefront sensor (PyWFS) has made it a popular choice for astronomical adaptive optics (AAO) systems, and it is at its most sensitive when it is used without modulation of the input beam. In non-modulated mode, the device is highly nonlinear. Hence, all PyWFS implementations on current AAO systems employ modulation to make the device more linear. The upcoming era of 30-m class telescopes and the demand for ultra-precise wavefront control stemming from science objectives that include direct imaging of exoplanets make using the PyWFS without modulation desirable. This article argues that nonlinear estimation based on Newton's method for nonlinear optimization can be useful for mitigating the effects of nonlinearity in the non-modulated PyWFS. The proposed approach requires all optical modeling to be pre-computed, which has the advantage of avoiding real-time simulations of beam propagation. Further, the required real-time calculations are amenable to massively parallel computation. Numerical experiments simulate a currently operational PyWFS. A singular value analysis shows that the common practice of calculating two "slope" images from the four PyWFS pupil images discards critical information and is unsuitable for the non-modulated PyWFS simulated here. Instead, this article advocates estimators that use the raw pixel values not only from the four geometrical images of the pupil, but from surrounding pixels as well. The simulations indicate that nonlinear estimation can be effective when the Strehl ratio of the input beam is greater than 0.3, and the improvement relative to linear estimation tends to increase at larger Strehl ratios. At Strehl ratios less than about 0.5, the performances of both the nonlinear and linear estimators are relatively insensitive to noise, since they are dominated by nonlinearity error.
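    The following is a generic sketch of a Newton-type (Gauss-Newton) update of the kind the article advocates, assuming a precomputed forward model mapping phase to raw PyWFS pixel values and its Jacobian; the function names and dimensions are placeholders, not the author's implementation.

```python
import numpy as np

def gauss_newton_phase(measured, forward, jacobian, phase0, n_iter=10, reg=1e-3):
    """Generic Gauss-Newton estimate of a phase vector from raw pixel values.
    `forward(phase)` returns modeled pixel intensities and `jacobian(phase)`
    its derivative matrix; both are assumed to come from a precomputed
    optical model, as argued for in the article."""
    phase = phase0.copy()
    for _ in range(n_iter):
        r = measured - forward(phase)                 # residual in pixel space
        J = jacobian(phase)
        # Regularized normal equations: (J^T J + reg*I) dphase = J^T r
        A = J.T @ J + reg * np.eye(phase.size)
        phase = phase + np.linalg.solve(A, J.T @ r)
    return phase

# Toy usage with a linear forward model (purely illustrative)
J_toy = np.random.default_rng(0).normal(size=(50, 10))
truth = np.linspace(-0.5, 0.5, 10)
est = gauss_newton_phase(J_toy @ truth, lambda p: J_toy @ p, lambda p: J_toy,
                         np.zeros(10))
print(np.allclose(est, truth, atol=1e-2))   # True
```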

  15. Cost-effectiveness of a motivational intervention for alcohol-involved youth in a hospital emergency department.

    PubMed

    Neighbors, Charles J; Barnett, Nancy P; Rohsenow, Damaris J; Colby, Suzanne M; Monti, Peter M

    2010-05-01

    Brief interventions in the emergency department targeting risk-taking youth show promise to reduce alcohol-related injury. This study models the cost-effectiveness of a motivational interviewing-based intervention relative to brief advice to stop alcohol-related risk behaviors (standard care). Average cost-effectiveness ratios were compared between conditions. In addition, a cost-utility analysis examined the incremental cost of motivational interviewing per quality-adjusted life year gained. Microcosting methods were used to estimate marginal costs of motivational interviewing and standard care as well as two methods of patient screening: standard emergency-department staff questioning and proactive outreach by counseling staff. Average cost-effectiveness ratios were computed for drinking and driving, injuries, vehicular citations, and negative social consequences. Using estimates of the marginal effect of motivational interviewing in reducing drinking and driving, estimates of traffic fatality risk from drinking-and-driving youth, and national life tables, the societal costs per quality-adjusted life year saved by motivational interviewing relative to standard care were also estimated. Alcohol-attributable traffic fatality risks were estimated using national databases. Intervention costs per participant were $81 for standard care, $170 for motivational interviewing with standard screening, and $173 for motivational interviewing with proactive screening. The cost-effectiveness ratios for motivational interviewing were more favorable than standard care across all study outcomes and better for men than women. The societal cost per quality-adjusted life year of motivational interviewing was $8,795. Sensitivity analyses indicated that results were robust in terms of variability in parameter estimates. This brief intervention represents a good societal investment compared with other commonly adopted medical interventions.

  16. Performance of signal-to-noise ratio estimation for scanning electron microscope using autocorrelation Levinson-Durbin recursion model.

    PubMed

    Sim, K S; Lim, M S; Yeap, Z X

    2016-07-01

    A new technique to quantify the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images is proposed. This technique is known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed, and the signal spectrum based on the image autocorrelation function is derived. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. The ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model is able to achieve higher accuracy in SNR estimation. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
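    For orientation, the sketch below illustrates the underlying autocorrelation idea: white noise contributes only to the zero-lag autocorrelation, so the noise-free peak can be recovered by extrapolating from neighbouring lags (here a simple linear extrapolation from lags 1 and 2, in the spirit of the interpolation-based baselines the paper compares against). The ACLDR estimator itself replaces this step with an AR signal-spectrum model obtained from the Levinson-Durbin recursion and is not reproduced here.

```python
import numpy as np

def acf_snr_linear(image):
    """Autocorrelation-based SNR estimate for a noisy image: the noise-free
    zero-lag value is approximated by linear extrapolation from lags 1 and 2
    of the row-wise autocorrelation (a baseline technique, not ACLDR)."""
    x = image.astype(float)
    x -= x.mean()
    r0 = np.mean(x * x)
    r1 = np.mean(x[:, :-1] * x[:, 1:])
    r2 = np.mean(x[:, :-2] * x[:, 2:])
    r0_noise_free = 2.0 * r1 - r2          # linear extrapolation to lag 0
    noise_power = r0 - r0_noise_free       # white noise shows up only at lag 0
    return r0_noise_free / noise_power

# Toy usage: a smooth "image" plus white Gaussian noise
rng = np.random.default_rng(1)
clean = np.outer(np.hanning(256), np.hanning(256)) * 100.0
noisy = clean + rng.normal(scale=5.0, size=clean.shape)
print(f"estimated SNR ~ {acf_snr_linear(noisy):.1f}")
```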

  17. [A method for obtaining redshifts of quasars based on wavelet multi-scaling feature matching].

    PubMed

    Liu, Zhong-Tian; Li, Xiang-Ru; Wu, Fu-Chao; Zhao, Yong-Heng

    2006-09-01

    The LAMOST project, the world's largest sky survey project being implemented in China, is expected to obtain 10⁵ quasar spectra. The main objective of the present article is to explore methods that can be used to estimate the redshifts of quasar spectra from LAMOST. Firstly, the features of the broad emission lines are extracted from the quasar spectra to overcome the disadvantage of low signal-to-noise ratio. Then the redshifts of the quasar spectra are estimated by multi-scaling feature matching. An experiment with the 15,715 quasars from SDSS DR2 shows that the method estimates the redshift correctly for 95.13% of the spectra within an error range of 0.02. This method was designed to obtain the redshifts of quasar spectra with relative flux and a low signal-to-noise ratio; it is applicable to the LAMOST data and helps in the study of quasars and the large-scale structure of the universe.

  18. Pulmonary function of U.S. coal miners related to dust exposure estimates.

    PubMed

    Attfield, M D; Hodous, T K

    1992-03-01

    This study of 7,139 U.S. coal miners used linear regression analysis to relate estimates of cumulative dust exposure to several pulmonary function variables measured during medical examinations undertaken between 1969 and 1971. The exposure data included newly derived cumulative dust exposure estimates for the period up to time of examination based on large data bases of underground airborne dust sampling measurements. Negative associations were found between measures of cumulative exposure and FEV1, FVC, and the FEV1/FVC ratio (p less than 0.001). In general, the relationships were similar to those reported for British coal miners. Overall, the results demonstrate an adverse effect of coal mine dust exposure on pulmonary function that occurs even in the absence of radiographically detected pneumoconiosis.

  19. Quantitative skeletal maturation estimation using cone-beam computed tomography-generated cervical vertebral images: a pilot study in 5- to 18-year-old Japanese children.

    PubMed

    Byun, Bo-Ram; Kim, Yong-Il; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Hwang, Dea-Seok; Park, Soo-Byung; Son, Woo-Sung

    2015-11-01

    The purpose of this study was to establish multivariable regression models for the estimation of skeletal maturation status in Japanese boys and girls using the cone-beam computed tomography (CBCT)-based cervical vertebral maturation (CVM) assessment method and hand-wrist radiography. The analyzed sample consisted of hand-wrist radiographs and CBCT images from 47 boys and 57 girls. To quantitatively evaluate the correlation between the skeletal maturation status and measurement ratios, a CBCT-based CVM assessment method was applied to the second, third, and fourth cervical vertebrae. Pearson's correlation coefficient analysis and multivariable regression analysis were used to determine the ratios for each of the cervical vertebrae (p < 0.05). Four characteristic parameters ((OH2 + PH2)/W2, (OH2 + AH2)/W2, D2, AH3/W3), as independent variables, were used to build the multivariable regression models: for the Japanese boys, the skeletal maturation status according to the CBCT-based quantitative cervical vertebral maturation (QCVM) assessment was 5.90 + 99.11 × AH3/W3 - 14.88 × (OH2 + AH2)/W2 + 13.24 × D2; for the Japanese girls, it was 41.39 + 59.52 × AH3/W3 - 15.88 × (OH2 + PH2)/W2 + 10.93 × D2. The CBCT-generated CVM images proved very useful for the definition of the cervical vertebral body and the odontoid process. The newly developed CBCT-based QCVM assessment method showed a high correlation with the ratios derived from the second cervical vertebral body and odontoid process, and there are high correlations between the skeletal maturation status and the ratios of the second cervical vertebra based on the remnant of the dentocentral synchondrosis.
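    The two regression equations quoted above translate directly into code; the sketch below simply evaluates them, with the ratio measurements supplied by the caller (the variable names follow the abstract's notation, and the example inputs are hypothetical).

```python
def qcvm_boys(ah3_w3, oh2_ah2_w2, d2):
    """Skeletal maturation estimate for Japanese boys (regression model as
    reported in the abstract)."""
    return 5.90 + 99.11 * ah3_w3 - 14.88 * oh2_ah2_w2 + 13.24 * d2

def qcvm_girls(ah3_w3, oh2_ph2_w2, d2):
    """Skeletal maturation estimate for Japanese girls (regression model as
    reported in the abstract)."""
    return 41.39 + 59.52 * ah3_w3 - 15.88 * oh2_ph2_w2 + 10.93 * d2

# Hypothetical ratio measurements taken from CBCT-generated CVM images
print(qcvm_boys(ah3_w3=0.60, oh2_ah2_w2=0.85, d2=1.10))
print(qcvm_girls(ah3_w3=0.60, oh2_ph2_w2=0.85, d2=1.10))
```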

  20. Estimating crustal thickness and Vp/Vs ratio with joint constraints of receiver function and gravity data

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Guo, Lianghui; Ma, Yawei; Li, Yonghua; Wang, Weilai

    2018-05-01

    The technique of teleseismic receiver function H-κ stacking is popular for estimating the crustal thickness and Vp/Vs ratio. However, it has large uncertainty or ambiguity when the Moho multiples in the receiver function are difficult to identify. We present an improved technique to estimate the crustal thickness and Vp/Vs ratio using joint constraints from receiver function and gravity data. The complete Bouguer gravity anomalies, composed of the anomalies due to the relief of the Moho interface and the heterogeneous density distribution within the crust, are associated with the crustal thickness, density and Vp/Vs ratio. According to their relationship formulae presented by Lowry and Pérez-Gussinyé, we invert the complete Bouguer gravity anomalies by using a common algorithm of likelihood estimation to obtain the crustal thickness and Vp/Vs ratio, and then utilize them to constrain the receiver function H-κ stacking result. We verified the improved technique on three synthetic crustal models and evaluated the influence of selected parameters, the results of which demonstrated that the novel technique could reduce the ambiguity and enhance the accuracy of estimation. A real-data test at two stations in the NE margin of the Tibetan Plateau illustrated that the improved technique provided reliable estimates of crustal thickness and Vp/Vs ratio.

  1. Prevalence of kidney disease in anaemia differs by GFR-estimating method: The Third National Health and Nutrition Examination Survey (1988–94)

    PubMed Central

    Estrella, Michelle M.; Astor, Brad C.; Köttgen, Anna; Selvin, Elizabeth; Coresh, Josef; Parekh, Rulan S.

    2010-01-01

    Background. Anaemia worsens as kidney function declines. Both conditions are associated with increased mortality. Serum cystatin C is purportedly a more sensitive marker of kidney disease and a better predictor of mortality than serum creatinine. However, studies suggest that extrarenal factors also influence cystatin C levels. Methods. We determined whether estimates of glomerular filtration rate [estimated glomerular filtration rate (eGFR)] based on serum cystatin C alone or in combination with serum creatinine were superior to those based on serum creatinine in recognizing impaired kidney function in the setting of anaemia in a sub-sample of the Third National Health and Nutrition Examination Survey of the USA consisting of 6734 participants, 20 years or older. Results. The prevalence of moderate to severe kidney disease (eGFR 15–59 mL/min/1.73 m2) among anaemic persons was 15–16% when based on serum creatinine alone (eGFRSCR) or combined with cystatin C (eGFRSCR + CYSC); this estimate increased to nearly 25% when kidney function was estimated by cystatin C (eGFRCYSC). The adjusted odds ratios of kidney disease in anaemic versus non-anaemic persons were slightly higher with eGFRCYSC than eGFRSCR and eGFRSCR + CYSC in younger adults [odds ratio (OR) = 5.22, 95% confidence interval (CI): 2.23, 12.17], women (OR = 5.34, 95% CI: 2.36, 12.06) and those with elevated C-reactive protein (CRP) (OR = 7.36, 95% CI: 1.98–27.36). Conclusions. Impaired kidney function was common in individuals with anaemia. Among anaemic individuals, the prevalence estimate for kidney disease was notably higher when kidney function was estimated by cystatin C alone compared with the estimations by serum creatinine alone or in combination with serum cystatin C. eGFRCYSC may be particularly helpful in identifying kidney disease in the setting of anaemia among younger persons, women and those with elevated CRP. Regardless of which renal biomarker is used, our study suggests that an evaluation for underlying kidney disease should be considered in the standard workup of anaemia. PMID:20176612

  2. Planning Inmarsat's second generation of spacecraft

    NASA Astrophysics Data System (ADS)

    Williams, W. P.

    1982-09-01

    Studies for the next generation of the Inmarsat service are outlined, including traffic forecasting studies, communications capacity estimates, space segment design, cost estimates, and financial analysis. Traffic forecasting will require future demand estimates, and a computer model has been developed which estimates demand over the Atlantic, Pacific, and Indian ocean regions. Communications estimates are based on traffic estimates, as a model converts traffic demand into a required capacity figure for a given area. The Erlang formula is used, requiring additional data such as peak hour ratios and distribution estimates. Basic space segment technical requirements are outlined (communications payload, transponder arrangements, etc.), and further design studies involve such areas as space segment configuration, launcher and spacecraft studies, transmission planning, and earth segment configurations. Cost estimates of proposed design parameters will be performed, but options must be reduced to make construction feasible. Finally, a financial analysis will be carried out in order to calculate financial returns.
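    The "Erlang formula" mentioned above is presumably the Erlang-B blocking formula used for circuit dimensioning; a minimal sketch under that assumption, with placeholder traffic and grade-of-service figures:

```python
def erlang_b(traffic_erlangs, channels):
    """Erlang-B blocking probability for the given offered traffic (Erlangs)
    and number of circuits, computed with the standard recursion."""
    b = 1.0
    for n in range(1, channels + 1):
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)
    return b

def channels_needed(traffic_erlangs, grade_of_service=0.01):
    """Smallest number of circuits keeping blocking below the target grade of
    service (the 1% target is a placeholder, not a figure from the paper)."""
    n = 1
    while erlang_b(traffic_erlangs, n) > grade_of_service:
        n += 1
    return n

# Hypothetical peak-hour demand of 20 Erlangs over one ocean region
print(channels_needed(20.0))
```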

  3. Predicting speech intelligibility based on the signal-to-noise envelope power ratio after modulation-frequency selective processing.

    PubMed

    Jørgensen, Søren; Dau, Torsten

    2011-09-01

    A model for predicting the intelligibility of processed noisy speech is proposed. The speech-based envelope power spectrum model has a structure similar to that of the model of Ewert and Dau [(2000). J. Acoust. Soc. Am. 108, 1181-1196], developed to account for modulation detection and masking data. The model estimates the speech-to-noise envelope power ratio, SNR(env), at the output of a modulation filterbank and relates this metric to speech intelligibility using the concept of an ideal observer. Predictions were compared to data on the intelligibility of speech presented in stationary speech-shaped noise. The model was further tested in conditions with noisy speech subjected to reverberation and spectral subtraction. Good agreement between predictions and data was found in all cases. For spectral subtraction, an analysis of the model's internal representation of the stimuli revealed that the predicted decrease of intelligibility was caused by the estimated noise envelope power exceeding that of the speech. The classical concept of the speech transmission index fails in this condition. The results strongly suggest that the signal-to-noise ratio at the output of a modulation-frequency selective process provides a key measure of speech intelligibility. © 2011 Acoustical Society of America
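    A heavily simplified sketch of the SNR(env) idea, approximating a single modulation filter by band-limiting the Hilbert envelope; the actual model uses a full modulation filterbank and an ideal-observer back end, neither of which is reproduced here.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def envelope_power(x, fs, fmin=1.0, fmax=16.0):
    """AC envelope power of a signal within one modulation band (Hz)."""
    env = np.abs(hilbert(x))
    env = env - env.mean()                      # keep the AC component only
    sos = butter(2, [fmin, fmax], btype="band", fs=fs, output="sos")
    return np.mean(sosfiltfilt(sos, env) ** 2)

def snr_env(noisy_speech, noise_alone, fs):
    """Speech-to-noise envelope power ratio for one audio band and one
    modulation band; the speech envelope power is approximated as the
    noisy-speech envelope power minus that of the noise alone."""
    p_mix = envelope_power(noisy_speech, fs)
    p_noise = envelope_power(noise_alone, fs)
    p_speech = max(p_mix - p_noise, 1e-12)
    return 10.0 * np.log10(p_speech / p_noise)

# Toy usage: an amplitude-modulated tone ("speech") in noise at 16 kHz
fs = 16000
t = np.arange(fs) / fs
speech_like = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
noise = np.random.default_rng(2).normal(scale=0.5, size=fs)
print(f"SNRenv ~ {snr_env(speech_like + noise, noise, fs):.1f} dB")
```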

  4. Aerosol properties computed from aircraft-based observations during the ACE- Asia campaign. 2; A case study of lidar ratio closure and aerosol radiative effects

    NASA Technical Reports Server (NTRS)

    Kuzmanoski, Maja; Box, M. A.; Schmid, B.; Box, G. P.; Wang, J.; Russell, P. B.; Bates, D.; Jonsson, H. H.; Welton, Ellsworth J.; Flagan, R. C.

    2005-01-01

    For a vertical profile with three distinct layers (marine boundary, pollution and dust), observed during the ACE-Asia campaign, we carried out a comparison between the modeled lidar ratio vertical profile and that obtained from collocated airborne NASA AATS-14 sunphotometer and shipborne Micro-Pulse Lidar (MPL) measurements. Vertically resolved lidar ratio was calculated from two size distribution vertical profiles - one obtained by inversion of sunphotometer-derived extinction spectra, and one measured in-situ - combined with the same refractive index model based on aerosol chemical composition. The aerosol model implies single scattering albedos of 0.78 - 0.81 and 0.93 - 0.96 at 0.523 microns (the wavelength of the lidar measurements), in the pollution and dust layers, respectively. The lidar ratios calculated from the two size distribution profiles have close values in the dust layer; they are however, significantly lower than the lidar ratios derived from combined lidar and sunphotometer measurements, most probably due to the use of a simple nonspherical model with a single particle shape in our calculations. In the pollution layer, the two size distribution profiles yield generally different lidar ratios. The retrieved size distributions yield a lidar ratio which is in better agreement with that derived from lidar/sunphotometer measurements in this layer, with still large differences at certain altitudes (the largest relative difference was 46%). We explain these differences by non-uniqueness of the result of the size distribution retrieval and lack of information on vertical variability of particle refractive index. Radiative transfer calculations for this profile showed significant atmospheric radiative forcing, which occurred mainly in the pollution layer. We demonstrate that if the extinction profile is known then information on the vertical structure of absorption and asymmetry parameter is not significant for estimating forcing at TOA and the surface, while it is of importance for estimating vertical profiles of radiative forcing and heating rates.

  5. Cost-benefit analysis of establishing and operating radiation oncology services in Fiji.

    PubMed

    Kim, Eunkyoung; Cho, Yoon-Min; Kwon, Soonman; Park, Kunhee

    2017-10-01

    The rising demand for services for cancer patients has been recognised by the Government of Fiji as a national health priority. Increasing attention has been paid to the lack of radiation therapy (radiotherapy) services in Fiji. This study aims to estimate and compare the costs and benefits of introducing radiation oncology services in Fiji from the societal perspective. The time horizon for the cost-benefit analysis (CBA) was 15 years, from 2021 to 2035. The benefits and costs were converted to 2016 present values. Estimates for the CBA model were taken from previous studies, expert opinions, and data obtained from field visits to Fiji in January 2016. Sensitivity analyses with changing assumptions were undertaken. The estimated net benefit, applying the national minimum wage (NMW) to measure the monetary value of a life-year gained, was -31,624,421 FJD, with a benefit-cost (B/C) ratio of 0.69. If gross national income (GNI) per capita was used for the value of life years, the net benefit was 3,975,684 FJD (B/C ratio: 1.04). Under a pessimistic scenario, establishing the center appeared not to be cost-beneficial, with a net benefit of -53,634,682 FJD (B/C ratio: 0.46); the net benefit under an optimistic scenario was estimated at 23,178,189 FJD (B/C ratio: 1.20). Based on the CBA results using GNI per capita instead of the NMW, this project would be cost-beneficial. Introducing a radiation oncology center in Fiji would have potential impacts on financial sustainability, financial protection, and the accessibility and equity of the health system. Copyright © 2017 World Health Organization. Published by Elsevier Ltd. All rights reserved.

  6. Tomographic separation of composite spectra. 2: The components of 29 UW Canis Majoris

    NASA Technical Reports Server (NTRS)

    Bagnuolo, William G., Jr.; Gies, Douglas R.; Hahula, Michael E.; Wiemker, Rafael; Wiggs, Michael S.

    1994-01-01

    We have analyzed the UV photospheric lines of 29 CMa, a 4.39 day period, double-lined O-type spectroscopic binary. Archival data from the International Ultraviolet Explorer (IUE) (28 spectra well distributed in orbital phase) were analyzed with several techniques. We find that the mass ratio is q = 1.20 +/- 0.16 (secondary more massive) based on three independent arguments. A tomography algorithm was used to produce the separate spectra of the two stars in six UV spectral regions. The MK spectral classifications of the primary and secondary, O7.5-8 Iab and O9.7 Ib, respectively, were estimated through a comparison of UV line ratios with those in spectral standard stars. The flux ratio of the stars in the UV is 0.36 +/- 0.07 (primary brighter). The primary has a strong P Cygni N IV λ1718 feature, indicating a strong stellar wind. We also present tomographic reconstructions of visual spectral data in the range 4300-4950 A, based on seven observations of differing orbital phases, which confirm the UV classifications, and show that the primary is an Of star. From the spectral classifications, we estimate the temperatures of the stars to be 33,750 K and 29,000 K for primary and secondary, respectively. We then fit visual and UV light curves and show that reasonably good fits can be obtained with these temperatures, a semicontact configuration, an inclination of 74 deg. +/- 2 deg., and an intensity ratio r of less than 0.5.

  7. Verification of the Velocity Structure in Mexico Basin Using the H/V Spectral Ratio of Microtremors

    NASA Astrophysics Data System (ADS)

    Matsushima, S.; Sanchez-Sesma, F. J.; Nagashima, F.; Kawase, H.

    2011-12-01

    The authors have been proposing a new theory to calculate the Horizontal-to-Vertical (H/V) spectral ratio of microtremors assuming that the wave field is completely diffuse and have attempted to apply the theory to understand the observed microtremor data. It is anticipated that this new theory can be applied to detect the subsurface velocity structure beneath urban areas. Precise information about the subsurface velocity structure is essential for predicting strong ground motion accurately, which is necessary to mitigate seismic disaster. The Mexico basin, which witnessed severe damage during the 1985 Michoacán Earthquake (Ms 8.1) several hundreds of kilometers away from the source region, is an interesting location in which the reassessment of soil properties is urgent. Because of subsidence, having improved estimates of properties is mandatory. In order to estimate possible changes in the velocity structure in the Mexico basin, we measured microtremors at strong motion observation sites in Mexico City. At those sites, information about the velocity profiles is available. Using the obtained data, we derive the observed H/V spectral ratio and compare it with the theoretical H/V spectral ratio to assess the validity of our new theory. First we compared the observed H/V spectral ratios for five stations to see the diverse characteristics of this measurement. Then we compared the observed H/V spectral ratios with the theoretical predictions to confirm our theory. We assumed the velocity model of previous surveys at the strong motion observation sites as an initial model. We were able to closely fit both the peak frequency and the amplitude of the observed H/V spectral ratio with the theoretical H/V spectral ratio calculated by our new method. These results show that we have a good initial model. However, the theoretical estimates need some improvement to perfectly fit the observed H/V spectral ratio. This may be an indication that the initial model needs some adjustments. We explore how to improve the velocity model based on the comparison between observations and theory.
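    For reference, a minimal sketch of how an observed H/V spectral ratio is commonly formed from three-component microtremor records (quadratic mean of the horizontal amplitude spectra over the vertical); window averaging, spectral smoothing, and the diffuse-field theory used for the theoretical curve are all omitted.

```python
import numpy as np

def hv_spectral_ratio(ns, ew, ud, fs):
    """Observed H/V spectral ratio from three-component microtremor records
    (NS, EW, vertical), using the quadratic mean of the horizontal amplitude
    spectra; smoothing and window averaging are omitted for brevity."""
    n = len(ud)
    window = np.hanning(n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    a_ns = np.abs(np.fft.rfft(ns * window))
    a_ew = np.abs(np.fft.rfft(ew * window))
    a_ud = np.abs(np.fft.rfft(ud * window))
    hv = np.sqrt((a_ns**2 + a_ew**2) / 2.0) / a_ud
    return freqs, hv

# Toy usage with synthetic noise records sampled at 100 Hz
rng = np.random.default_rng(3)
ns, ew, ud = rng.normal(size=(3, 4096))
freqs, hv = hv_spectral_ratio(ns, ew, ud, fs=100.0)
print(freqs[np.argmax(hv[1:]) + 1])   # frequency of the (noisy) H/V peak
```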

  8. Carbon to organic matter ratios for soils in Rocky Mountain coniferous forests

    Treesearch

    Theresa B. Jain; Russell T. Graham; David L. Adams

    1997-01-01

    Vegetation type, soils, climate, and conversion ratios influence estimates of terrestrial C. Our objectives were to (i) determine carbon to organic matter (C/OM) ratios for brown cubical rotten wood, litter, surface humus, soil wood, and mineral soils; (ii) evaluate the validity of using 0.58 and 0.50 ratios for estimating C in mineral and organic soil components,...

  9. Diagnostic Performance of Bronchoalveolar Lavage Fluid CD4/CD8 Ratio for Sarcoidosis: A Meta-analysis.

    PubMed

    Shen, Yongchun; Pang, Caishuang; Wu, Yanqiu; Li, Diandian; Wan, Chun; Liao, Zenglin; Yang, Ting; Chen, Lei; Wen, Fuqiang

    2016-06-01

    The usefulness of bronchoalveolar lavage fluid (BALF) CD4/CD8 ratio for diagnosing sarcoidosis has been reported in many studies with variable results. Therefore, we performed a meta-analysis to estimate the overall diagnostic accuracy of BALF CD4/CD8 ratio based on the bulk of published evidence. Studies published prior to June 2015 and indexed in PubMed, OVID, Web of Science, Scopus and other databases were evaluated for inclusion. Data on sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR) were pooled from included studies. Summary receiver operating characteristic (SROC) curves were used to summarize overall test performance. Deeks's funnel plot was used to detect publication bias. Sixteen publications with 1885 subjects met our inclusion criteria and were included in this meta-analysis. Summary estimates of the diagnostic performance of the BALF CD4/CD8 ratio were as follows: sensitivity, 0.70 (95%CI 0.64-0.75); specificity, 0.83 (95%CI 0.78-0.86); PLR, 4.04 (95%CI 3.13-5.20); NLR, 0.36 (95%CI 0.30-0.44); and DOR, 11.17 (95%CI 7.31-17.07). The area under the SROC curve was 0.84 (95%CI 0.81-0.87). There was no evidence of publication bias. Measuring the BALF CD4/CD8 ratio may assist in the diagnosis of sarcoidosis when interpreted in parallel with other diagnostic factors. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
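    The summary metrics above are connected by simple identities (PLR = sensitivity/(1 - specificity), NLR = (1 - sensitivity)/specificity, DOR = PLR/NLR); a quick arithmetic check with the pooled point estimates, noting that the paper's pooled values come from a meta-analytic model rather than this direct calculation:

```python
sensitivity, specificity = 0.70, 0.83   # pooled point estimates from the abstract

plr = sensitivity / (1.0 - specificity)      # positive likelihood ratio
nlr = (1.0 - sensitivity) / specificity      # negative likelihood ratio
dor = plr / nlr                              # diagnostic odds ratio

print(f"PLR ~ {plr:.2f}, NLR ~ {nlr:.2f}, DOR ~ {dor:.1f}")
# Prints roughly 4.12, 0.36, 11.4 -- close to the pooled estimates (4.04,
# 0.36, 11.17), which are obtained jointly rather than from these identities.
```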

  10. A Hybrid Seismic Inversion Method for VP/VS Ratio and Its Application to Gas Identification

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Zhang, Hongbing; Han, Feilong; Xiao, Wei; Shang, Zuoping

    2018-03-01

    The ratio of compressional wave velocity to shear wave velocity (VP/VS ratio) has established itself as one of the most important parameters in identifying gas reservoirs. However, considering that the seismic inversion process is highly non-linear and the geological conditions encountered may be complex, a direct estimation of the VP/VS ratio from pre-stack seismic data remains a challenging task. In this paper, we propose a hybrid seismic inversion method to estimate the VP/VS ratio directly. In this method, post- and pre-stack inversions are combined, in which the pre-stack inversion for the VP/VS ratio is driven by the post-stack inversion results (i.e., VP and density). In particular, the VP/VS ratio is considered as a model parameter and is directly inverted from the pre-stack inversion based on the exact Zoeppritz equation. Moreover, an anisotropic Markov random field is employed in order to regularise the inversion process as well as to incorporate information on geological structures (boundaries). Aided by the proposed hybrid inversion strategy, the directional weighting coefficients incorporated in the anisotropic Markov random field neighbourhoods are quantitatively calculated by the anisotropic diffusion method. The synthetic test demonstrates the effectiveness of the proposed inversion method. In particular, given the low quality of the pre-stack data and the high heterogeneity of the target layers in the field data, the proposed inversion method reveals a detailed model of the VP/VS ratio that can successfully identify the gas-bearing zones.

  11. The Effects of Land-Use Change on Ecosystem Oxidative Ratio

    NASA Astrophysics Data System (ADS)

    Hockaday, W. C.; Masiello, C. A.; Gallagher, M. E.; Calligan, L.

    2009-12-01

    The carbon budgets of terrestrial ecosystems are typically estimated by tower-based CO2 fluxes and/or ground-based carbon inventories. Carbon uptake by the terrestrial biosphere can also be determined from its influence on the O2 concentration in the atmosphere (Keeling et al., 1996). Relating CO2 uptake to O2 production requires knowledge of the oxidative ratio (OR) of terrestrial ecosystems. Oxidative ratio (OR) is the molar ratio of O2:CO2 exchanged between the biosphere and the atmosphere by the processes of photosynthesis and respiration. Models currently used to apportion anthropogenic CO2 uptake between the land and ocean carbon sinks assume an invariant OR value of 1.10 for the terrestrial biosphere. The assumption of global invariance of OR is likely incorrect as climate and land-use changes alter ecosystem distributions. Moreover, small variations (0.01) in OR produce large global-scale discrepancies (10¹² g C) in the estimated size of the terrestrial carbon sink (Randerson et al., 2006). The first SOCCR report estimated that 50% of the North American terrestrial carbon sink can be attributed to woody encroachment on abandoned agricultural lands (CCSP, 2007). The OR of early successional woodlands is poorly constrained and is likely to differ from croplands and forests. Therefore, we hypothesize that woody encroachment could drive a shift in the average OR value of North American ecosystems. The OR of an ecosystem can be measured by simultaneous CO2 and O2 flux measurements. Alternatively, OR can be estimated from the chemical composition of the organic matter in an ecosystem (plant biomass and soil). We used CHNOS combustion elemental analysis and ¹³C nuclear magnetic resonance to measure OR. We present a preliminary assessment of the OR of cropland, successional woodland, and mature forests at the Kellogg biological station LTER (Hickory Corners, MI, USA). We show significant variation in ecosystem OR, with coniferous forests having the highest OR values (~1.09), and corn agriculture having the lowest OR values (~1.04). Successional communities show large variation in OR values (ranging from ~1.03 to ~1.12).
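    One common way to estimate OR from elemental composition is via the nominal oxidation state of carbon; the sketch below uses the approximation OR ≈ 1 - C_ox/4 with C_ox = (2·O - H)/C on a molar basis, neglecting nitrogen and sulfur, which is a simplification relative to the CHNOS- and NMR-based calculation used in the study.

```python
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def oxidative_ratio(wt_c, wt_h, wt_o):
    """Approximate oxidative ratio (mol O2 released per mol CO2 fixed) from
    elemental mass fractions, neglecting N and S contributions."""
    n_c = wt_c / ATOMIC_MASS["C"]
    n_h = wt_h / ATOMIC_MASS["H"]
    n_o = wt_o / ATOMIC_MASS["O"]
    c_ox = (2.0 * n_o - n_h) / n_c        # nominal oxidation state of carbon
    return 1.0 - c_ox / 4.0

# Sanity checks: carbohydrate (CH2O)n gives OR ~ 1.0; CH4-like material ~ 2.0
print(oxidative_ratio(wt_c=40.0, wt_h=6.7, wt_o=53.3))   # ~1.0
print(oxidative_ratio(wt_c=74.9, wt_h=25.1, wt_o=0.0))   # ~2.0
```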

  12. Sodium and potassium content of 24 h urinary collections: a comparison between field- and laboratory-based analysers.

    PubMed

    Yin, Xuejun; Neal, Bruce; Tian, Maoyi; Li, Zhifang; Petersen, Kristina; Komatsu, Yuichiro; Feng, Xiangxian; Wu, Yangfeng

    2018-04-01

    Measurement of mean population Na and K intakes typically uses laboratory-based assays, which can add significant logistical burden and costs. A valid field-based measurement method would be a significant advance. In the current study, we used 24 h urine samples to compare estimates of Na, K and Na:K ratio based upon assays done using the field-based Horiba twin meter v. laboratory-based methods. The performance of the Horiba twin meter was determined by comparing field-based estimates of mean Na and K against those obtained using laboratory-based methods. The reported 95 % limits of agreement of Bland-Altman plots were calculated based on a regression approach for non-uniform differences. The 24 h urine samples were collected as part of an ongoing study being done in rural China. One hundred and sixty-six complete 24 h urine samples were qualified for estimating 24 h urinary Na and K excretion. Mean Na and K excretion were estimated as 170·4 and 37·4 mmol/d, respectively, using the meter-based assays; and 193·4 and 43·8 mmol/d, respectively, using the laboratory-based assays. There was excellent relative reliability (intraclass correlation coefficient) for both Na (0·986) and K (0·986). Bland-Altman plots showed moderate-to-good agreement between the two methods. Na and K intake estimations were moderately underestimated using assays based upon the Horiba twin meter. Compared with standard laboratory-based methods, the portable device was more practical and convenient.
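    A minimal sketch of conventional Bland-Altman limits of agreement between paired field and laboratory measurements; the paper uses a regression-based variant for non-uniform differences, which is not reproduced here, and the example values are hypothetical.

```python
import numpy as np

def bland_altman(field, lab):
    """Conventional Bland-Altman mean bias and 95% limits of agreement for
    paired measurements of the same quantity by two methods."""
    diff = np.asarray(field, float) - np.asarray(lab, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical 24 h urinary Na excretion (mmol/d) for five participants
field_na = [165, 172, 180, 150, 190]   # meter-based assay
lab_na = [188, 195, 200, 170, 214]     # laboratory assay
print(bland_altman(field_na, lab_na))  # negative bias: the meter underestimates
```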

  13. Estimating hazard ratios in cohort data with missing disease information due to death.

    PubMed

    Binder, Nadine; Herrnböck, Anne-Sophie; Schumacher, Martin

    2017-03-01

    In clinical and epidemiological studies information on the primary outcome of interest, that is, the disease status, is usually collected at a limited number of follow-up visits. The disease status can often only be retrieved retrospectively in individuals who are alive at follow-up, but will be missing for those who died before. Right-censoring the death cases at the last visit (ad-hoc analysis) yields biased hazard ratio estimates of a potential risk factor, and the bias can be substantial and occur in either direction. In this work, we investigate three different approaches that use the same likelihood contributions derived from an illness-death multistate model in order to more adequately estimate the hazard ratio by including the death cases into the analysis: a parametric approach, a penalized likelihood approach, and an imputation-based approach. We investigate to which extent these approaches allow for an unbiased regression analysis by evaluating their performance in simulation studies and on a real data example. In doing so, we use the full cohort with complete illness-death data as reference and artificially induce missing information due to death by setting discrete follow-up visits. Compared to an ad-hoc analysis, all considered approaches provide less biased or even unbiased results, depending on the situation studied. In the real data example, the parametric approach is seen to be too restrictive, whereas the imputation-based approach could almost reconstruct the original event history information. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Reliability of COPVs Accounting for Margin of Safety on Design Burst

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L.N.

    2012-01-01

    In this paper, the stress rupture reliability of Carbon/Epoxy Composite Overwrapped Pressure Vessels (COPVs) is examined utilizing the classic Phoenix model and accounting for the differences between the design and the actual burst pressure, and the liner contribution effects. Stress rupture life primarily depends upon the fiber stress ratio which is defined as the ratio of stress in fibers at the maximum expected operating pressure to actual delivered fiber strength. The actual delivered fiber strength is calculated using the actual burst pressures of vessels established through burst tests. However, during the design phase the actual burst pressure is generally not known and to estimate the reliability of the vessels calculations are usually performed based upon the design burst pressure only. Since the design burst is lower than the actual burst, this process yields a much higher value for the stress ratio and consequently a conservative estimate for the reliability. Other complications arise due to the fact that the actual burst pressure and the liner contributions have inherent variability and therefore must be treated as random variables in order to compute the stress rupture reliability. Furthermore, the model parameters, which have to be established based on stress rupture tests of subscale vessels or coupons, have significant variability as well due to limited available data and hence must be properly accounted for. In this work an assessment of reliability of COPVs including both parameter uncertainties and physical variability inherent in liner and overwrap material behavior is made and estimates are provided in terms of degree of uncertainty in the actual burst pressure and the liner load sharing.

  15. Estimation of the proteomic cancer co-expression sub networks by using association estimators

    PubMed Central

    Kurt, Zeyneb; Diri, Banu

    2017-01-01

    In this study, the association estimators, which have significant influences on gene network inference methods and are used for determining molecular interactions, were examined within the co-expression network inference concept. By using proteomic data from five different cancer types, the hub genes/proteins within the disease-associated gene-gene/protein-protein interaction sub-networks were identified. Proteomic data from the various cancer types were collected from The Cancer Proteome Atlas (TCPA). Nine correlation- and mutual information (MI)-based association estimators that are commonly used in the literature were compared in this study. As the gold standard to measure the association estimators’ performance, a multi-layer data integration platform on gene-disease associations (DisGeNET) and the Molecular Signatures Database (MSigDB) was used. Fisher's exact test was used to evaluate the performance of the association estimators by comparing the created co-expression networks with the disease-associated pathways. It was observed that the MI-based estimators provided more successful results than the Pearson and Spearman correlation approaches, which are used for estimating biological networks in the weighted correlation network analysis (WGCNA) package. Among the correlation-based methods, the best average success rate for the five cancer types was 60%, whereas among the MI-based methods the average success rate was 71% for the James-Stein shrinkage (Shrink) estimator and 64% for the Schurmann-Grassberger (SG) estimator. Moreover, the hub genes and the inferred sub-networks are presented for the consideration of researchers and experimentalists. PMID:29145449

  16. Estimates of evapotranspiration in alkaline scrub and meadow communities of Owens Valley, California, using the Bowen-ratio, eddy-correlation, and Penman-combination methods

    USGS Publications Warehouse

    Duell, L. F. W.

    1988-01-01

    In Owens Valley, evapotranspiration (ET) is one of the largest components of outflow in the hydrologic budget and the least understood. ET estimates for December 1983 through October 1985 were made for seven representative locations selected on the basis of geohydrology and the characteristics of phreatophytic alkaline scrub and meadow communities. The Bowen-ratio, eddy-correlation, and Penman-combination methods were used to estimate ET. The results of the analyses appear satisfactory when compared to other estimates of ET. Results by the eddy-correlation method are for a direct and a residual latent-heat flux that is based on sensible-heat flux and energy budget measurements. Penman-combination potential ET estimates were determined to be unusable because they overestimated actual ET. Modification of the psychrometer constant of this method to account for differences between heat-diffusion resistance and vapor-diffusion resistance permitted actual ET to be estimated. The methods may be used for studies in similar semiarid and arid rangeland areas in the Western United States. Meteorological data for three field sites are included in the appendix. Simple linear regression analysis indicates that ET estimates are correlated to air temperature, vapor-density deficit, and net radiation. Estimates of annual ET range from 300 mm at a low-density scrub site to 1,100 mm at a high-density meadow site. The monthly percentage of annual ET was determined to be similar for all sites studied. (Author's abstract)
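    As background, the Bowen-ratio energy-balance method partitions available energy into latent and sensible heat using β estimated from vertical temperature and vapour-pressure gradients; a minimal sketch with made-up numbers (the psychrometric constant is a typical sea-level value, not a site-specific figure from the study):

```python
def bowen_ratio_et(rn, g, d_temp, d_vap, gamma=0.066):
    """Latent-heat flux from the Bowen-ratio energy-balance method:
    beta = gamma * dT/de and LE = (Rn - G) / (1 + beta).
    rn, g          : net radiation and soil heat flux (W/m2)
    d_temp, d_vap  : temperature (degC) and vapour-pressure (kPa) differences
                     between the two measurement heights
    gamma          : psychrometric constant (kPa/degC)"""
    beta = gamma * d_temp / d_vap
    le = (rn - g) / (1.0 + beta)
    return le, beta

le, beta = bowen_ratio_et(rn=450.0, g=60.0, d_temp=1.2, d_vap=0.35)
print(f"LE ~ {le:.0f} W/m2 (Bowen ratio {beta:.2f})")
```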

  17. Linking trading ratio with TMDL (total maximum daily load) allocation matrix and uncertainty analysis.

    PubMed

    Zhang, H X

    2008-01-01

    An innovative approach for total maximum daily load (TMDL) allocation and implementation is watershed-based pollutant trading. Given the inherent scientific uncertainty in the tradeoffs between point and nonpoint sources, the setting of trading ratios can be a contentious issue and has already been listed as an obstacle by several pollutant trading programs. One of the fundamental reasons that a trading ratio is often set higher (e.g. greater than 2) is to allow for uncertainty in the level of control needed to attain water quality standards, and to provide a buffer in case traded reductions are less effective than expected. However, most of the available studies did not provide an approach to explicitly address the determination of the trading ratio, and uncertainty analysis has rarely been linked to trading ratio determination. This paper presents a practical methodology for estimating the "equivalent trading ratio (ETR)" and links uncertainty analysis with trading ratio determination within the TMDL allocation process. Determination of the ETR can provide a preliminary evaluation of the "tradeoffs" between various combinations of point and nonpoint source control strategies on ambient water quality improvement. A greater portion of NPS load reduction in the overall TMDL load reduction generally correlates with greater uncertainty and thus requires a greater trading ratio. The rigorous quantification of the trading ratio will enhance the scientific basis, and thus the public perception, for more informed decisions in an overall watershed-based pollutant trading program. (c) IWA Publishing 2008.

  18. Using DNA fingerprints to infer familial relationships within NHANES III households

    PubMed Central

    Katki, Hormuzd A.; Sanders, Christopher L.; Graubard, Barry I.; Bergen, Andrew W.

    2009-01-01

    Developing, targeting, and evaluating genomic strategies for population-based disease prevention require population-based data. In response to this urgent need, genotyping has been conducted within the Third National Health and Nutrition Examination Survey (NHANES III), the nationally-representative household-interview health survey in the U.S. However, before these genetic analyses can occur, family relationships within households must be accurately ascertained. Unfortunately, reported family relationships within NHANES III households based on questionnaire data are incomplete and inconclusive with regard to the actual biological relatedness of family members. We inferred family relationships within households using DNA fingerprints (Identifiler®) that contain the DNA loci used by law enforcement agencies for forensic identification of individuals. However, the performance of these loci for relationship inference is not well understood. We evaluated two competing statistical methods for relationship inference on pairs of household members: an exact likelihood ratio that relies on allele frequencies versus an Identical-By-State (IBS) likelihood ratio that only requires matching alleles. We modified these methods to account for genotyping errors and population substructure. The two methods usually agree on the rankings of the most likely relationships. However, the IBS method underestimates the likelihood ratio by not accounting for the informativeness of matching rare alleles. The likelihood ratio is sensitive to estimates of population substructure, and parent-child relationships are sensitive to the specified genotyping error rate. These loci were unable to distinguish second-degree relationships and cousins from being unrelated. The genetic data are also useful for verifying reported relationships and identifying data quality issues. An important by-product is the first explicitly nationally-representative estimates of allele frequencies at these ubiquitous forensic loci. PMID:20664713

  19. [Quantitative relationships between hyper-spectral vegetation indices and leaf area index of rice].

    PubMed

    Tian, Yong-Chao; Yang, Jie; Yao, Xia; Zhu, Yan; Cao, Wei-Xing

    2009-07-01

    Based on field experiments with different rice varieties under different nitrogen application levels, the quantitative relationships of rice leaf area index (LAI) with canopy hyper-spectral parameters at different growth stages were analyzed. Rice LAI had good relationships with several hyper-spectral vegetation indices, the correlation coefficient being the highest with DI (difference index), followed by RI (ratio index) and NI (normalized index), based on the spectral reflectance or the first derivative spectra. The two best spectral indices for estimating LAI were the difference index DI (854, 760) (based on the spectral bands at 854 nm and 760 nm) and the difference index DI (D676, D778) (based on the first-derivative bands at 676 nm and 778 nm). In general, the hyper-spectral vegetation indices based on spectral reflectance performed better than the spectral indices based on the first derivative spectra. Tests with an independent dataset suggested that rice LAI monitoring models with the difference index DI (854, 760) as the variable give accurate LAI estimates and are suitable for estimating rice LAI.
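    The index families mentioned above are simple band algebra; a minimal sketch using the reported best index, with an assumed linear LAI model whose coefficients are placeholders rather than the fitted values from the paper:

```python
def difference_index(r_a, r_b):
    """DI(a, b) = R_a - R_b, here used with reflectance at 854 and 760 nm."""
    return r_a - r_b

def ratio_index(r_a, r_b):
    """RI(a, b) = R_a / R_b."""
    return r_a / r_b

def normalized_index(r_a, r_b):
    """NI(a, b) = (R_a - R_b) / (R_a + R_b)."""
    return (r_a - r_b) / (r_a + r_b)

def lai_from_di(r854, r760, a=0.0, b=25.0):
    """LAI from a linear model LAI = a + b * DI(854, 760); the coefficients
    a and b are placeholders for the fitted regression."""
    return a + b * difference_index(r854, r760)

print(lai_from_di(r854=0.45, r760=0.30))
```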

  20. Calcaneotalar ratio: a new concept in the estimation of the length of the calcaneus.

    PubMed

    David, Vikram; Stephens, Terry J; Kindl, Radek; Ang, Andy; Tay, Wei-Han; Asaid, Rafik; McCullough, Keith

    2015-01-01

    Maintaining the calcaneal length after calcaneal fractures is vital to restoring the normal biomechanics of the foot, because the calcaneus acts as an important lever arm for the plantarflexors of the foot. However, estimation of the length of the calcaneus to be reconstructed in comminuted calcaneal fractures can be difficult. We propose a new method to reliably estimate the calcaneal length radiographically by defining the calcaneotalar length ratio. A total of 100 ankle radiographs with no fracture in the calcaneus or talus, taken in skeletally mature patients, were reviewed by 6 observers. The anteroposterior lengths of the calcaneus and talus were measured, and the calcaneotalar length ratio was determined. The ratio was then used to estimate the length of the calcaneus. Interobserver reliability was determined using Cronbach's α coefficient and Pearson's correlation coefficient. The mean length of the calcaneus was 75 ± 0.6 mm, and the mean length of the talus was 59 ± 0.5 mm. The calcaneotalar ratio was 1.3. Multiplying this ratio by the talar length, the mean estimated length of the calcaneus was within 0.7 mm of the known calcaneal length. Cronbach's α coefficient and Pearson's correlation coefficient showed excellent interobserver reliability. The proposed calcaneotalar ratio is a new and reliable method to radiographically estimate the normal length of the calcaneus when reconstructing the calcaneus. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  1. Variance to mean ratio, R(t), for poisson processes on phylogenetic trees.

    PubMed

    Goldman, N

    1994-09-01

    The ratio of expected variance to mean, R(t), of numbers of DNA base substitutions for contemporary sequences related by a "star" phylogeny is widely seen as a measure of the adherence of the sequences' evolution to a Poisson process with a molecular clock, as predicted by the "neutral theory" of molecular evolution under certain conditions. A number of estimators of R(t) have been proposed, all predicted to have mean 1 and distributions based on the χ² distribution. Various genes have previously been analyzed and found to have values of R(t) far in excess of 1, calling into question important aspects of the neutral theory. In this paper, I use Monte Carlo simulation to show that the previously suggested means and distributions of estimators of R(t) are highly inaccurate. The analysis is applied to star phylogenies and to general phylogenetic trees, and well-known gene sequences are reanalyzed. For star phylogenies the results show that Kimura's estimators ("The Neutral Theory of Molecular Evolution," Cambridge Univ. Press, Cambridge, 1983) are unsatisfactory for statistical testing of R(t), but confirm the accuracy of Bulmer's correction factor (Genetics 123: 615-619, 1989). For all three nonstar phylogenies studied, attained values of all three estimators of R(t), although larger than 1, are within their true confidence limits under simple Poisson process models. This shows that lineage effects can be responsible for high estimates of R(t), restoring some limited confidence in the molecular clock and showing that the distinction between lineage and molecular clock effects is vital. (ABSTRACT TRUNCATED AT 250 WORDS)
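    The Monte Carlo setup for a star phylogeny is easy to reproduce in outline: simulate Poisson-distributed substitution counts on independent lineages and examine the distribution of the naive variance-to-mean estimator. A minimal sketch (not the paper's full simulation, which also covers non-star trees and the published correction factors):

```python
import numpy as np

def simulate_r_estimates(n_lineages=10, expected_subs=20.0, n_reps=10000, seed=4):
    """Sampling distribution of the naive variance/mean estimator of R(t) for
    contemporary sequences on a star phylogeny evolving under a strict
    Poisson clock (true R(t) = 1)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(expected_subs, size=(n_reps, n_lineages))
    return counts.var(axis=1, ddof=1) / counts.mean(axis=1)

r = simulate_r_estimates()
print(f"mean estimate {r.mean():.2f}, 97.5th percentile {np.percentile(r, 97.5):.2f}")
# Under the clock the estimates scatter around 1, but the upper tail is wide
# for small numbers of lineages -- the kind of behaviour the paper quantifies
# for the published estimators of R(t).
```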

  2. Magnetic resonance fingerprinting based on realistic vasculature in mice

    PubMed Central

    Pouliot, Philippe; Gagnon, Louis; Lam, Tina; Avti, Pramod K.; Bowen, Chris; Desjardins, Michèle; Kakkar, Ashok K.; Thorin, E.; Sakadzic, Sava; Boas, David A.; Lesage, Frédéric

    2017-01-01

    Magnetic resonance fingerprinting (MRF) was recently proposed as a novel strategy for MR data acquisition and analysis. A variant of MRF called vascular MRF (vMRF) followed, which extracts maps of three parameters of physiological importance: cerebral oxygen saturation (SatO2), mean vessel radius and cerebral blood volume (CBV). However, this estimation was based on idealized 2-dimensional simulations of vascular networks using random cylinders and the empirical Bloch equations convolved with a diffusion kernel. Here we focus on studying the vascular MR fingerprint using real mouse angiograms and physiological values as the substrate for the MR simulations. The MR signal is calculated ab initio with a Monte Carlo approximation, by tracking the accumulated phase from a large number of protons diffusing within the angiogram. We first study the identifiability of parameters in simulations, showing that parameters are fully estimable at realistically high signal-to-noise ratios (SNR) when the same angiogram is used for dictionary generation and parameter estimation, but that large biases in the estimates persist when the angiograms are different. Despite these biases, simulations show that differences in parameters remain estimable. We then applied this methodology to data acquired using the GESFIDE sequence with SPIONs injected into 9 young wild-type and 9 old atherosclerotic mice. Both the pre-injection signal and the ratio of post-to-pre-injection signals were modeled, using 5-dimensional dictionaries. The vMRF methodology extracted significant differences in SatO2, mean vessel radius and CBV between the two groups, consistent across brain regions and dictionaries. Further validation work is essential before vMRF can gain wider application. PMID:28043909

  3. Estimation of sulphur dioxide emission rate from a power plant based on the remote sensing measurement with an imaging-DOAS instrument

    NASA Astrophysics Data System (ADS)

    Chong, Jihyo; Kim, Young J.; Baek, Jongho; Lee, Hanlim

    2016-10-01

    Major anthropogenic sources of sulphur dioxide in the troposphere include point sources such as power plants and combustion-derived industrial sources. Spatially resolved remote sensing of atmospheric trace gases is desirable for better estimation and validation of emission from those sources. It has been reported that Imaging Differential Optical Absorption Spectroscopy (I-DOAS) technique can provide the spatially resolved two-dimensional distribution measurement of atmospheric trace gases. This study presents the results of I-DOAS observations of SO2 from a large power plant. The stack plume from the Taean coal-fired power plant was remotely sensed with an I-DOAS instrument. The slant column density (SCD) of SO2 was derived by data analysis of the absorption spectra of the scattered sunlight measured by an I-DOAS over the power plant stacks. Two-dimensional distribution of SO2 SCD was obtained over the viewing window of the I-DOAS instrument. The measured SCDs were converted to mixing ratios in order to estimate the rate of SO2 emission from each stack. The maximum mixing ratio of SO2 was measured to be 28.1 ppm with a SCD value of 4.15×1017 molecules/cm2. Based on the exit velocity of the plume from the stack, the emission rate of SO2 was estimated to be 22.54 g/s. Remote sensing of SO2 with an I-DOAS instrument can be very useful for independent estimation and validation of the emission rates from major point sources as well as area sources.

  4. Age estimation by canines' pulp/tooth ratio in an Iranian population using digital panoramic radiography.

    PubMed

    Dehghani, Mahdieh; Shadkam, Elaheh; Ahrari, Farzaneh; Dehghani, Mahboobe

    2018-04-01

    Age estimation in adults is an important issue in forensic science. This study aimed to estimate the chronological age of Iranians by means of pulp/tooth area ratio (AR) of canines in digital panoramic radiographs. The sample consisted of panoramic radiographs of 271 male and female subjects aged 16-64 years. The pulp/tooth area ratio (AR) of upper and lower canines was calculated by AutoCAD software. Data were subjected to correlation and regression analysis. There was a significant and inverse correlation between age and pulp/tooth area ratio of upper and lower canines (r=-0.794 for upper canine and r=-0.282 for lower canine; p-value<0.001). Linear regression equations were derived separately for upper, lower and both canines. The mean difference between actual and estimated age using upper canine was 6.07±1.7. The results showed that the pulp/tooth area ratios of canines are a reliable method for age estimation in Iranians. The pulp/tooth area ratio of upper canine was better correlated with chronological age than that of lower canine. Copyright © 2018 Elsevier B.V. All rights reserved.
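
    The regression step can be illustrated as follows (Python): fit a linear model of age on the pulp/tooth area ratio and invert it for new radiographs. The AR-age pairs and the fitted coefficients below are made up for illustration; they are not the equations reported in the study:

        import numpy as np

        # hypothetical training data: pulp/tooth area ratio (AR) and chronological age
        ar  = np.array([0.14, 0.12, 0.11, 0.09, 0.08, 0.07, 0.06])
        age = np.array([18,   24,   31,   39,   46,   55,   62  ])

        # least-squares fit of age = a + b * AR (the study reports such equations per tooth)
        b, a = np.polyfit(ar, age, 1)
        print(f"age ~ {a:.1f} + {b:.1f} * AR")

        # predicted age for a new radiograph with AR = 0.10
        print(f"predicted age at AR = 0.10: {a + b * 0.10:.1f} years")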

  5. Estimating scatter in cone beam CT with striped ratio grids: A preliminary investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsieh, Scott, E-mail: sshsieh@stanford.edu

    2016-09-15

    Purpose: To propose a new method for estimating scatter in x-ray imaging. Conventional antiscatter grids reject scatter at an efficiency that is constant or slowly varying over the surface of the grid. A striped ratio antiscatter grid, composed of stripes that alternate between high and low grid ratio, could be used instead. Such a striped ratio grid would reduce scatter-to-primary ratio as a conventional grid would, but more importantly, the signal discontinuities at the boundaries of stripes can be used to estimate local scatter content. Methods: Signal discontinuities provide information on scatter, but are contaminated by variation in primary radiation.more » A nonlinear image processing algorithm is used to estimate the scatter content in the presence of primary variation. We emulated a striped ratio grid by imaging phantoms with two sequential CT scans, one with and one without a conventional grid. These two scans are processed together to mimic a striped ratio grid. This represents a best case limit of the striped ratio grid, in that the extent of grid ratio modulation is very high and the scatter contrast is maximized. Results: In a uniform cylinder, the striped ratio grid virtually eliminates cupping. Artifacts from scatter are improved in an anthropomorphic phantom. Some banding artifacts are induced by the striped ratio grid. Conclusions: Striped ratio grids could be a simple and effective evolution of conventional antiscatter grids. Construction and validation of a physical prototype remains an important future step.« less

  6. Rayleigh wave ellipticity across the Iberian Peninsula and Morocco

    NASA Astrophysics Data System (ADS)

    Gómez García, Clara; Villaseñor, Antonio

    2015-04-01

    Spectral amplitude ratios between horizontal and vertical components (H/V ratios) from seismic records are useful to evaluate site effects, predict ground motion and invert for S velocity in the top several hundred meters. These spectral ratios can be obtained from both ambient noise and earthquakes. H/V ratios from ambient noise depend on the content and predominant wave types: body waves, Rayleigh waves, a mixture of different waves, etc. The H/V ratio computed in this way is assumed to measure Rayleigh wave ellipticity since ambient vibrations are dominated by Rayleigh waves. H/V ratios from earthquakes are able to determine the local crustal structure in the vicinity of the recording station. These ratios obtained from earthquakes are based on surface wave ellipticity measurements. Although the long-period (>20 seconds) Rayleigh H/V ratio is not currently used, because large scatter has been reported and there is uncertainty about whether these measurements are compatible with traditional phase and group velocity measurements, we will investigate whether it is possible to obtain stable estimates after collecting statistics for many earthquakes. We will use teleseismic events from shallow earthquakes (depth ≤ 40 km) between 2007 January 1 and 2012 December 31 with M ≥ 6, and we will compute H/V ratios for more than 400 stations from several seismic networks across the Iberian Peninsula and Morocco for periods between 20 and 100 seconds. H/V ratios will also be computed from cross-correlations of ambient noise between different components for each station pair. Shorter-period H/V ratio measurements based on ambient noise cross-correlations are strongly sensitive to near-surface structure, in contrast to longer-period earthquake Rayleigh waves. The combination of ellipticity measurements based on earthquakes and ambient noise will allow us to perform a joint inversion with Rayleigh wave phase velocity. Upper crustal structure is better constrained by the joint inversion compared to inversions based on phase velocities alone.
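
    A bare-bones sketch of the spectral H/V computation for a single three-component record is given below (Python; the arrays, sampling rate and period band are placeholders, and the smoothing and averaging over many events or noise windows that a real analysis needs are omitted):

        import numpy as np

        def hv_ratio(north, east, vertical, fs):
            """Amplitude-spectral H/V ratio for one three-component window."""
            freqs = np.fft.rfftfreq(len(vertical), d=1.0 / fs)
            n_spec = np.abs(np.fft.rfft(north))
            e_spec = np.abs(np.fft.rfft(east))
            v_spec = np.abs(np.fft.rfft(vertical))
            horizontal = np.sqrt(0.5 * (n_spec**2 + e_spec**2))  # quadratic mean of horizontals
            return freqs, horizontal / np.maximum(v_spec, 1e-12)

        # toy example: 200 s of synthetic noise sampled at 20 Hz
        fs = 20.0
        rng = np.random.default_rng(0)
        n, e, z = rng.standard_normal((3, int(200 * fs)))
        freqs, hv = hv_ratio(n, e, z, fs)
        band = (freqs >= 0.01) & (freqs <= 0.05)   # 20-100 s period band used in the study
        print(f"median H/V in the 20-100 s band: {np.median(hv[band]):.2f}")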

  7. Associations of estimated glomerular filtration rate and albuminuria with mortality and renal failure by sex: a meta-analysis.

    PubMed

    Nitsch, Dorothea; Grams, Morgan; Sang, Yingying; Black, Corri; Cirillo, Massimo; Djurdjev, Ognjenka; Iseki, Kunitoshi; Jassal, Simerjot K; Kimm, Heejin; Kronenberg, Florian; Oien, Cecilia M; Levey, Andrew S; Levin, Adeera; Woodward, Mark; Hemmelgarn, Brenda R

    2013-01-29

    To assess for the presence of a sex interaction in the associations of estimated glomerular filtration rate and albuminuria with all-cause mortality, cardiovascular mortality, and end stage renal disease. Random effects meta-analysis using pooled individual participant data. 46 cohorts from Europe, North and South America, Asia, and Australasia. 2,051,158 participants (54% women) from general population cohorts (n=1,861,052), high risk cohorts (n=151,494), and chronic kidney disease cohorts (n=38,612). Eligible cohorts (except chronic kidney disease cohorts) had at least 1000 participants, outcomes of either mortality or end stage renal disease of ≥ 50 events, and baseline measurements of estimated glomerular filtration rate according to the Chronic Kidney Disease Epidemiology Collaboration equation (mL/min/1.73 m(2)) and urinary albumin-creatinine ratio (mg/g). Risks of all-cause mortality and cardiovascular mortality were higher in men at all levels of estimated glomerular filtration rate and albumin-creatinine ratio. While higher risk was associated with lower estimated glomerular filtration rate and higher albumin-creatinine ratio in both sexes, the slope of the risk relationship for all-cause mortality and for cardiovascular mortality were steeper in women than in men. Compared with an estimated glomerular filtration rate of 95, the adjusted hazard ratio for all-cause mortality at estimated glomerular filtration rate 45 was 1.32 (95% CI 1.08 to 1.61) in women and 1.22 (1.00 to 1.48) in men (P(interaction)<0.01). Compared with a urinary albumin-creatinine ratio of 5, the adjusted hazard ratio for all-cause mortality at urinary albumin-creatinine ratio 30 was 1.69 (1.54 to 1.84) in women and 1.43 (1.31 to 1.57) in men (P(interaction)<0.01). Conversely, there was no evidence of a sex difference in associations of estimated glomerular filtration rate and urinary albumin-creatinine ratio with end stage renal disease risk. Both sexes face increased risk of all-cause mortality, cardiovascular mortality, and end stage renal disease with lower estimated glomerular filtration rates and higher albuminuria. These findings were robust across a large global consortium.

  8. Detecting isotopic ratio outliers

    NASA Astrophysics Data System (ADS)

    Bayne, C. K.; Smith, D. H.

    An alternative method is proposed for improving isotopic ratio estimates. This method mathematically models pulse-count data and uses iterative reweighted Poisson regression to estimate model parameters to calculate the isotopic ratios. This computer-oriented approach provides theoretically better methods than conventional techniques to establish error limits and to identify outliers.
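
    The abstract does not spell out the model, but the flavor of the approach can be shown with a Poisson regression of pulse counts on an isotope indicator, fit by iteratively reweighted least squares (here via statsmodels); the exponentiated coefficient is a ratio estimate with model-based error limits. The counts and the true ratio below are invented:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)

        # hypothetical pulse-count cycles for two isotopes with a true ratio of 0.02
        counts_major = rng.poisson(50_000, size=30)
        counts_minor = rng.poisson(1_000, size=30)

        y = np.concatenate([counts_major, counts_minor])
        is_minor = np.concatenate([np.zeros(30), np.ones(30)])
        X = sm.add_constant(is_minor)

        # Poisson GLM with log link, fit by iteratively reweighted least squares
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        ratio = np.exp(fit.params[1])
        lo, hi = np.exp(fit.conf_int()[1])
        print(f"estimated minor/major ratio: {ratio:.4f}  (95% CI {lo:.4f} - {hi:.4f})")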

  9. Measures of model performance based on the log accuracy ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature and we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio, and derive from it two metrics: the median symmetric accuracy; and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely-used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.

  10. Measures of model performance based on the log accuracy ratio

    DOE PAGES

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

    2018-01-03

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature and we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio, and derive from it two metrics: the median symmetric accuracy; and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely-used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
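
    The two metrics derived from the log accuracy ratio can be written in a few lines. The definitions below follow the usual forms built on Q = ln(prediction/observation); they are my reading of the metrics named above rather than a verbatim transcription from the paper, and the data are invented:

        import numpy as np

        def median_symmetric_accuracy(pred, obs):
            """MSA = 100 * (exp(median(|ln(pred/obs)|)) - 1), in percent."""
            log_q = np.log(np.asarray(pred) / np.asarray(obs))
            return 100.0 * (np.exp(np.median(np.abs(log_q))) - 1.0)

        def symmetric_signed_percentage_bias(pred, obs):
            """SSPB = 100 * sign(M) * (exp(|M|) - 1), with M the median log accuracy ratio."""
            m = np.median(np.log(np.asarray(pred) / np.asarray(obs)))
            return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)

        obs  = np.array([1.0, 2.0, 5.0, 10.0, 50.0])
        pred = np.array([1.2, 1.5, 6.0, 12.0, 40.0])
        print(f"MSA  = {median_symmetric_accuracy(pred, obs):.1f}%")
        print(f"SSPB = {symmetric_signed_percentage_bias(pred, obs):+.1f}%")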

  11. CONSTRAINTS ON BLACK HOLE GROWTH, QUASAR LIFETIMES, AND EDDINGTON RATIO DISTRIBUTIONS FROM THE SDSS BROAD-LINE QUASAR BLACK HOLE MASS FUNCTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Brandon C.; Hernquist, Lars; Siemiginowska, Aneta

    2010-08-20

    We present an estimate of the black hole mass function of broad-line quasars (BLQSOs) that self-consistently corrects for incompleteness and the statistical uncertainty in the mass estimates, based on a sample of 9886 quasars at 1 < z < 4.5 drawn from the Sloan Digital Sky Survey (SDSS). We find evidence for 'cosmic downsizing' of black holes in BLQSOs, where the peak in their number density shifts to higher redshift with increasing black hole mass. The cosmic mass density for black holes seen as BLQSOs peaks at z ≈ 2. We estimate the completeness of the SDSS as a function of the black hole mass and Eddington ratio, and find that at z > 1 it is highly incomplete at M_BH ≲ 10^9 M_sun and L/L_Edd ≲ 0.5. We estimate a lower limit on the lifetime of a single BLQSO phase to be t_BL > 150 ± 15 Myr for black holes at z = 1 with a mass of M_BH = 10^9 M_sun, and we constrain the maximum mass of a black hole in a BLQSO to be ≈3 × 10^10 M_sun. Our estimated distribution of BLQSO Eddington ratios peaks at L/L_Edd ≈ 0.05 and has a dispersion of ≈0.4 dex, implying that most BLQSOs are not radiating at or near the Eddington limit; however, the location of the peak is subject to considerable uncertainty. The steep increase in number density of BLQSOs toward lower Eddington ratios is expected if the BLQSO accretion rate monotonically decays with time. Furthermore, our estimated lifetime and Eddington ratio distributions imply that the majority of the most massive black holes spend a significant amount of time growing in an earlier obscured phase, a conclusion which is independent of the unknown obscured fraction. These results are consistent with models for self-regulated black hole growth, at least for massive systems at z > 1, where the BLQSO phase occurs at the end of a fueling event when black hole feedback unbinds the accreting gas, halting the accretion flow.

  12. Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Wahi, A. K.

    2003-12-01

    Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increase linearly with the increase in measurement error in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid discriminator of whether or not the estimator provides accurate estimates of the gradient magnitude and orientation. This research was funded by WIPP programs administered by the U.S Department of Energy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
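
    For reference, the basic three-point calculation itself (before any Monte Carlo perturbation of the heads) amounts to fitting a plane through the three head measurements; the well coordinates and heads below are made-up values, not WIPP data:

        import numpy as np

        # hypothetical well coordinates (m) and measured heads (m)
        x = np.array([0.0, 120.0, 40.0])
        y = np.array([0.0, 10.0, 95.0])
        h = np.array([10.00, 9.85, 9.90])

        # fit the plane h = a + b*x + c*y exactly through the three points
        A = np.column_stack([np.ones(3), x, y])
        a, b, c = np.linalg.solve(A, h)

        magnitude = np.hypot(b, c)                      # |grad h|
        azimuth = np.degrees(np.arctan2(-c, -b)) % 360  # direction of decreasing head, ccw from +x
        print(f"gradient magnitude: {magnitude:.2e}")
        print(f"flow direction (deg from +x axis): {azimuth:.1f}")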

  13. Global remote sensing of water-chlorophyll ratio in terrestrial plant leaves.

    PubMed

    Kushida, Keiji

    2012-10-01

    I evaluated the use of global remote sensing techniques for estimating plant leaf chlorophyll a + b (C(ab); μg cm(-2)) and water (C(w); mg cm(-2)) concentrations as well as the ratio of C(w)/C(ab) with the PROSAIL model under possible distributions for leaf and soil spectra, leaf area index (LAI), canopy geometric structure, and leaf size. First, I estimated LAI from the normalized difference vegetation index. I found that, at LAI values <2, C(ab), C(w), and C(w)/C(ab) could not be reliably estimated. At LAI values >2, C(ab) and C(w) could be estimated for only restricted ranges of the canopy structure; however, the ratio of C(w)/C(ab) could be reliably estimated for a variety of possible canopy structures with coefficients of determination (R(2)) ranging from 0.56 to 0.90. The remote estimation of the C(w)/C(ab) ratio from satellites offers information on plant condition at a global scale.

  14. Uncertainties of fluxes and 13C / 12C ratios of atmospheric reactive-gas emissions

    NASA Astrophysics Data System (ADS)

    Gromov, Sergey; Brenninkmeijer, Carl A. M.; Jöckel, Patrick

    2017-07-01

    We provide a comprehensive review of the proxy data on the 13C / 12C ratios and uncertainties of emissions of reactive carbonaceous compounds into the atmosphere, with a focus on CO sources. Based on an evaluated set-up of the EMAC model, we derive the isotope-resolved data set of its emission inventory for the 1997-2005 period. Additionally, we revisit the calculus required for the correct derivation of uncertainties associated with isotope ratios of emission fluxes. The resulting δ13C of overall surface CO emission in 2000 of -(25.2 ± 0.7) ‰ is in line with previous bottom-up estimates and is less uncertain by a factor of 2. In contrast to this, we find that uncertainties of the respective inverse modelling estimates may be substantially larger due to the correlated nature of their derivation. We reckon the δ13C values of surface emissions of higher hydrocarbons to be within -24 to -27 ‰ (uncertainty typically below ±1 ‰), with the exception of isoprene and methanol emissions being close to -30 and -60 ‰, respectively. The isotope signature of ethane surface emission coincides with earlier estimates, but integrates very different source inputs. δ13C values are reported relative to V-PDB.

  15. An Evaluation of Total Solar Reflectance and Spectral Band Ratioing Techniques for Estimating Soil Water Content

    NASA Technical Reports Server (NTRS)

    Reginato, R. J.; Vedder, J. F.; Idso, S. B.; Jackson, R. D.; Blanchard, M. B.; Goettelman, R.

    1977-01-01

    For several days in March of 1975, reflected solar radiation measurements were obtained from smooth and rough surfaces of wet, drying, and continually dry Avondale loam at Phoenix, Arizona, with pyranometers located 50 cm above the ground surface and a multispectral scanner flown at a 300-m height. The simple summation of the different band radiances measured by the multispectral scanner proved equally as good as the pyranometer data for estimating surface soil water content if the multispectral scanner data were standardized with respect to the intensity of incoming solar radiation or the reflected radiance from a reference surface, such as the continually dry soil. Without this means of standardization, multispectral scanner data are most useful in a spectral band ratioing context. Our results indicated that, for the bands used, no significant information on soil water content could be obtained by band ratioing. Thus the variability in soil water content should insignificantly affect soil-type discrimination based on identification of type-specific spectral signatures. Therefore remote sensing, conducted in the 0.4- to 1.0-micron wavelength region of the solar spectrum, would seem to be much more suited to identifying crop and soil types than to estimating soil water content.

  16. Geostatistical estimation of signal-to-noise ratios for spectral vegetation indices

    USGS Publications Warehouse

    Ji, Lei; Zhang, Li; Rover, Jennifer R.; Wylie, Bruce K.; Chen, Xuexia

    2014-01-01

    In the past 40 years, many spectral vegetation indices have been developed to quantify vegetation biophysical parameters. An ideal vegetation index should contain the maximum level of signal related to specific biophysical characteristics and the minimum level of noise such as background soil influences and atmospheric effects. However, accurate quantification of signal and noise in a vegetation index remains a challenge, because it requires a large number of field measurements or laboratory experiments. In this study, we applied a geostatistical method to estimate signal-to-noise ratio (S/N) for spectral vegetation indices. Based on the sample semivariogram of vegetation index images, we used the standardized noise to quantify the noise component of vegetation indices. In a case study in the grasslands and shrublands of the western United States, we demonstrated the geostatistical method for evaluating S/N for a series of soil-adjusted vegetation indices derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. The soil-adjusted vegetation indices were found to have higher S/N values than the traditional normalized difference vegetation index (NDVI) and simple ratio (SR) in the sparsely vegetated areas. This study shows that the proposed geostatistical analysis can constitute an efficient technique for estimating signal and noise components in vegetation indices.

  17. A collision history-based approach to Sensitivity/Perturbation calculations in the continuous energy Monte Carlo code SERPENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giuseppe Palmiotti

    In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbation on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.

  18. Evaluation of Brazed Joints Using Failure Assessment Diagram

    NASA Technical Reports Server (NTRS)

    Flom, Yury

    2012-01-01

    A fitness-for-service approach was used to perform structural analysis of brazed joints consisting of several base metal / filler metal combinations. Failure Assessment Diagrams (FADs) based on tensile and shear stress ratios were constructed and experimentally validated. It was shown that such FADs can provide a conservative estimate of safe combinations of stresses in the brazed joints. Based on this approach, Margins of Safety (MS) of the brazed joints subjected to multi-axial loading conditions can be evaluated.

  19. Observation-based Estimate of Climate Sensitivity with a Scaling Climate Response Function

    NASA Astrophysics Data System (ADS)

    Hébert, Raphael; Lovejoy, Shaun

    2016-04-01

    To properly address the anthropogenic impacts upon the earth system, an estimate of the climate sensitivity to radiative forcing is essential. Observation-based estimates of climate sensitivity are often limited by their ability to take into account the slower response of the climate system imparted mainly by the large thermal inertia of oceans; they are nevertheless essential to provide an alternative to estimates from global circulation models and to increase our confidence in estimates of climate sensitivity through the multiplicity of approaches. It is straightforward to calculate the Effective Climate Sensitivity (EffCS) as the ratio of temperature change to the change in radiative forcing; the result is almost identical to the Transient Climate Response (TCR), but it underestimates the Equilibrium Climate Sensitivity (ECS). A study of global mean temperature is thus presented assuming a Scaling Climate Response Function to deterministic radiative forcing. This general form is justified because there exists a scaling symmetry respected by the dynamics, and boundary conditions, over a wide range of scales, and it allows for long-range dependencies while retaining only 3 parameters, which are estimated empirically. The range of memory is modulated by the scaling exponent H. We can calculate, analytically, a one-to-one relation between the scaling exponent H and the ratios of EffCS to TCR and of EffCS to ECS. The scaling exponent of the power law is estimated by a regression of temperature as a function of forcing. We consider for the analysis 4 different datasets of historical global mean temperature and 100 scenario runs of the Coupled Model Intercomparison Project Phase 5 distributed among the 4 Representative Concentration Pathways (RCP) scenarios. We find that the error function for the estimate on historical temperature is very wide and thus many scaling exponents can be used without meaningful changes in the fit residuals of historical temperatures; their responses in the year 2100, on the other hand, are very broad, especially for a low-emission scenario such as RCP 2.6. CMIP5 scenario runs thus allow for a narrower estimate of H, which can then be used to estimate the ECS and TCR from the EffCS estimated from the historical data.

  20. AMS of the Minor Plutonium Isotopes

    NASA Astrophysics Data System (ADS)

    Steier, P.; Hrnecek, E.; Priller, A.; Quinto, F.; Srncik, M.; Wallner, A.; Wallner, G.; Winkler, S.

    2013-01-01

    VERA, the Vienna Environmental Research Accelerator, is especially equipped for the measurement of actinides, and performs a growing number of measurements on environmental samples. While AMS is not the optimum method for each particular plutonium isotope, the possibility to measure 239Pu, 240Pu, 241Pu, 242Pu and 244Pu on the same AMS sputter target is a great simplification. We have obtained a first result on the global fallout value of 244Pu/239Pu = (5.7 ± 1.0) × 10-5 based on soil samples from Salzburg prefecture, Austria. Furthermore, we suggest using the 242Pu/240Pu ratio as an estimate of the initial 241Pu/239Pu ratio, which allows dating of the time of irradiation based solely on Pu isotopes. We have checked the validity of this estimate using literature data, simulations, and environmental samples from soil from the Salzburg prefecture (Austria), from the shut down Garigliano Nuclear Power Plant (Sessa Aurunca, Italy) and from the Irish Sea near the Sellafield nuclear facility. The maximum deviation of the estimated dates from the expected ages is 6 years, while relative dating of material from the same source seems to be possible with a precision of less than 2 years. Additional information carried by the minor plutonium isotopes may allow further improvements of the precision of the method.
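
    The dating idea sketched above reduces to solving the 241Pu decay equation for the elapsed time, taking the measured 242Pu/240Pu ratio as a stand-in for the initial 241Pu/239Pu ratio. A minimal illustration (Python; the half-life is approximate, 239Pu decay is neglected, and the ratios are invented rather than measured values from the paper):

        import numpy as np

        T_HALF_PU241 = 14.4          # years, approximate half-life of 241Pu
        LAMBDA = np.log(2) / T_HALF_PU241

        def irradiation_age(r241_239_now, r242_240):
            """
            Years since irradiation, assuming the measured 242Pu/240Pu ratio
            approximates the initial 241Pu/239Pu ratio (the proxy suggested above)
            and that 239Pu decay is negligible on this time scale.
            """
            return np.log(r242_240 / r241_239_now) / LAMBDA

        # made-up isotope ratios for illustration
        print(f"estimated age: {irradiation_age(r241_239_now=0.0020, r242_240=0.0065):.1f} years")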

  1. Application of spectral decomposition algorithm for mapping water quality in a turbid lake (Lake Kasumigaura, Japan) from Landsat TM data

    NASA Astrophysics Data System (ADS)

    Oyama, Youichi; Matsushita, Bunkei; Fukushima, Takehiko; Matsushige, Kazuo; Imai, Akio

    The remote sensing of Case 2 water has been far less successful than that of Case 1 water, due mainly to the complex interactions among optically active substances (e.g., phytoplankton, suspended sediments, colored dissolved organic matter, and water) in the former. To address this problem, we developed a spectral decomposition algorithm (SDA), based on a spectral linear mixture modeling approach. Through a tank experiment, we found that the SDA-based models were superior to conventional empirical models (e.g. using single band, band ratio, or arithmetic calculation of band) for accurate estimates of water quality parameters. In this paper, we develop a method for applying the SDA to Landsat-5 TM data on Lake Kasumigaura, a eutrophic lake in Japan characterized by high concentrations of suspended sediment, for mapping chlorophyll-a (Chl-a) and non-phytoplankton suspended sediment (NPSS) distributions. The results show that the SDA-based estimation model can be obtained by a tank experiment. Moreover, by combining this estimation model with satellite-SRSs (standard reflectance spectra: i.e., spectral end-members) derived from bio-optical modeling, we can directly apply the model to a satellite image. The same SDA-based estimation model for Chl-a concentration was applied to two Landsat-5 TM images, one acquired in April 1994 and the other in February 2006. The average Chl-a estimation error between the two was 9.9%, a result that indicates the potential robustness of the SDA-based estimation model. The average estimation error of NPSS concentration from the 2006 Landsat-5 TM image was 15.9%. The key point for successfully applying the SDA-based estimation model to satellite data is the method used to obtain a suitable satellite-SRS for each end-member.
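
    The core of an SDA-type approach is a linear spectral unmixing step: each measured reflectance spectrum is decomposed into non-negative fractions of a few standard reflectance spectra (end-members), and those fractions then drive an estimation model for Chl-a or NPSS. A minimal sketch under that reading of the method (Python; the band values and end-member spectra are invented):

        import numpy as np
        from scipy.optimize import nnls

        # hypothetical standard reflectance spectra (end-members) in 4 TM-like bands:
        # columns = water, phytoplankton, non-phytoplankton suspended sediment (NPSS)
        endmembers = np.array([
            [0.02, 0.04, 0.09],
            [0.01, 0.07, 0.11],
            [0.01, 0.05, 0.13],
            [0.00, 0.02, 0.08],
        ])

        # one measured pixel spectrum (same 4 bands)
        pixel = np.array([0.013, 0.052, 0.060, 0.025])

        # non-negative least-squares decomposition of the pixel into end-member fractions
        fractions, residual = nnls(endmembers, pixel)
        print("decomposition coefficients (water, phyto, NPSS):", np.round(fractions, 3))
        print(f"residual norm: {residual:.4f}")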

  2. Joint measurements of black carbon and particle mass for heavy-duty diesel vehicles using a portable emission measurement system

    NASA Astrophysics Data System (ADS)

    Zheng, Xuan; Wu, Ye; Zhang, Shaojun; Baldauf, Richard W.; Zhang, K. Max; Hu, Jingnan; Li, Zhenhua; Fu, Lixin; Hao, Jiming

    2016-09-01

    The black carbon (BC) emitted from heavy-duty diesel vehicles (HDDVs) is an important source of urban atmospheric pollution and creates strong climate-forcing impacts. The emission ratio of BC to total particle mass (PM) (i.e., BC/PM ratio) is an essential variable used to estimate total BC emissions from historical PM data; however, these ratios have not been measured using portable emission measurement systems (PEMS) in order to obtain real-world measurements over a wide range of driving conditions. In this study, we developed a PEMS platform by integrating two Aethalometers and an electric low pressure impactor to realize the joint measurement of real-world BC and PM emissions for ten HDDVs in China. Test results showed that the average BC/PM ratio for five HDDVs equipped with mechanical fuel injection (MI) engines was 0.43 ± 0.06, significantly lower (P < 0.05) than another five HDDVs equipped with electronically-controlled fuel injection (EI) engines (0.56 ± 0.12). Traffic conditions also affected the BC/PM ratios with higher ratios on freeway routes than on local roads. Furthermore, higher ratios were observed for HDDVs equipped with EI engines than for the MI engines for the highway and local road routes. With an operating mode binning approach, we observed that the instantaneous BC/PM ratios of EI engine vehicles were above those of the MI engine vehicles in all operating modes except for the braking mode (i.e., Bin 0). Therefore, the complex impacts from engine technology and traffic conditions on BC/PM ratios should be carefully considered when estimating real-world BC emissions from HDDVs based on overall PM emissions data.

  3. Association between the 2008-09 seasonal influenza vaccine and pandemic H1N1 illness during Spring-Summer 2009: four observational studies from Canada.

    PubMed

    Skowronski, Danuta M; De Serres, Gaston; Crowcroft, Natasha S; Janjua, Naveed Z; Boulianne, Nicole; Hottes, Travis S; Rosella, Laura C; Dickinson, James A; Gilca, Rodica; Sethi, Pam; Ouhoummane, Najwa; Willison, Donald J; Rouleau, Isabelle; Petric, Martin; Fonseca, Kevin; Drews, Steven J; Rebbapragada, Anuradha; Charest, Hugues; Hamelin, Marie-Eve; Boivin, Guy; Gardy, Jennifer L; Li, Yan; Kwindt, Trijntje L; Patrick, David M; Brunham, Robert C

    2010-04-06

    In late spring 2009, concern was raised in Canada that prior vaccination with the 2008-09 trivalent inactivated influenza vaccine (TIV) was associated with increased risk of pandemic influenza A (H1N1) (pH1N1) illness. Several epidemiologic investigations were conducted through the summer to assess this putative association. (1) test-negative case-control design based on Canada's sentinel vaccine effectiveness monitoring system in British Columbia, Alberta, Ontario, and Quebec; (2) conventional case-control design using population controls in Quebec; (3) test-negative case-control design in Ontario; and (4) prospective household transmission (cohort) study in Quebec. Logistic regression was used to estimate odds ratios for TIV effect on community- or hospital-based laboratory-confirmed seasonal or pH1N1 influenza cases compared to controls with restriction, stratification, and adjustment for covariates including combinations of age, sex, comorbidity, timeliness of medical visit, prior physician visits, and/or health care worker (HCW) status. For the prospective study risk ratios were computed. Based on the sentinel study of 672 cases and 857 controls, 2008-09 TIV was associated with statistically significant protection against seasonal influenza (odds ratio 0.44, 95% CI 0.33-0.59). In contrast, estimates from the sentinel and three other observational studies, involving a total of 1,226 laboratory-confirmed pH1N1 cases and 1,505 controls, indicated that prior receipt of 2008-09 TIV was associated with increased risk of medically attended pH1N1 illness during the spring-summer 2009, with estimated risk or odds ratios ranging from 1.4 to 2.5. Risk of pH1N1 hospitalization was not further increased among vaccinated people when comparing hospitalized to community cases. Prior receipt of 2008-09 TIV was associated with increased risk of medically attended pH1N1 illness during the spring-summer 2009 in Canada. The occurrence of bias (selection, information) or confounding cannot be ruled out. Further experimental and epidemiological assessment is warranted. Possible biological mechanisms and immunoepidemiologic implications are considered.

  4. Data-Rate Estimation for Autonomous Receiver Operation

    NASA Technical Reports Server (NTRS)

    Tkacenko, A.; Simon, M. K.

    2005-01-01

    In this article, we present a series of algorithms for estimating the data rate of a signal whose admissible data rates are integer base, integer powered multiples of a known basic data rate. These algorithms can be applied to the Electra radio currently used in the Deep Space Network (DSN), which employs data rates having the above relationship. The estimation is carried out in an autonomous setting in which very little a priori information is assumed. It is done by exploiting an elegant property of the split symbol moments estimator (SSME), which is traditionally used to estimate the signal-to-noise ratio (SNR) of the received signal. By quantizing the assumed symbol-timing error or jitter, we present an all-digital implementation of the SSME which can be used to jointly estimate the data rate, SNR, and jitter. Simulation results presented show that these joint estimation algorithms perform well, even in the low SNR regions typically encountered in the DSN.

  5. Role of survivor bias in pancreatic cancer case-control studies.

    PubMed

    Hu, Zhen-Huan; Connett, John E; Yuan, Jian-Min; Anderson, Kristin E

    2016-01-01

    The purpose of this study was to evaluate the impact of survivor bias on pancreatic cancer case-control studies. The authors constructed five case-loss scenarios based on the Iowa Women's Health Study cohort to reflect how case recruitment in population-based studies varies by case survival time. Risk factors for disease incidence included smoking, body mass index (BMI), waist circumference, diabetes, and alcohol consumption. Odds ratios (ORs) were estimated by conditional logistic regression and quantitatively compared by the interactions between risk factors and 3-month survival time. Additionally, Kaplan-Meier estimates for overall survival were compared within the subset cohort of pancreatic cancer cases. BMI and waist circumference showed a significant inverse relationship with survival time. Decreasing trends in ORs for BMI and waist circumference were observed with increasing case survival time. The interaction between BMI and survival time based on a cutpoint of 3 months was significant (P < .01) as was the interaction between waist circumference and survival time (P < .01). The findings suggested that case losses could result in survivor bias causing underestimated odds ratios for both BMI and waist circumference, whereas other risk factors were not significantly affected by case losses. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Efficacy of a Clinic-Based Safer Sex Program for Human Immunodeficiency Virus-Uninfected and Human Immunodeficiency Virus-Infected Young Black Men Who Have Sex With Men: A Randomized Controlled Trial.

    PubMed

    Crosby, Richard A; Mena, Leandro; Salazar, Laura F; Hardin, James W; Brown, Tim; Vickers Smith, Rachel

    2018-03-01

    To test the efficacy of a single-session, clinic-based intervention designed to promote condom use among young black men who have sex with men (YBMSM). Six hundred YBMSM were enrolled in a randomized controlled trial, using a 12-month observation period. An intent-to-treat analysis was performed, with multiple imputation for missing data. Compared with the reference group, human immunodeficiency virus (HIV)-infected men in the intervention group had 64% greater odds of reporting consistent condom use for anal receptive sex over 12 months (estimated odds ratio, 1.64; 95% confidence interval, 1.23-2.17, P = 0.001). Also, compared with the reference group, HIV-uninfected men in the intervention group had more than twice the odds of reporting consistent condom use for anal receptive sex over 12 months (estimated odds ratio, 2.14; 95% confidence interval, 1.74-2.63, P < 0.001). Significant intervention effects relative to incident sexually transmitted diseases were not observed. A single-session, clinic-based, intervention may help protect HIV-uninfected YBMSM against HIV acquisition and HIV-infected YBMSM from transmitting the virus to insertive partners.

  7. The Soil Moisture Dependence of TRMM Microwave Imager Rainfall Estimates

    NASA Astrophysics Data System (ADS)

    Seyyedi, H.; Anagnostou, E. N.

    2011-12-01

    This study presents an in-depth analysis of the dependence of overland rainfall estimates from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) on the soil moisture conditions at the land surface. TMI retrievals are verified against rainfall fields derived from a high resolution rain-gauge network (MESONET) covering Oklahoma. Soil moisture (SOM) patterns are extracted based on recorded data from 2000-2007 with 30-minute temporal resolution. The area is divided into wet and dry regions based on normalized SOM (Nsom) values. Statistical comparison between the two groups is conducted based on recorded ground station measurements and the corresponding passive microwave retrievals from TMI overpasses at the respective MESONET station location and time. The zero order error statistics show that the Probability of Detection (POD) for the wet regions (higher Nsom values) is higher than for the dry regions. The False Alarm Ratio (FAR) and volumetric FAR are lower for the wet regions. The volumetric missed rain for the wet region is lower than for the dry region. Analysis of the MESONET-to-TMI ratio values shows that TMI tends to overestimate for surface rainfall intensities less than 12 mm/h; however, the magnitude of the overestimation over the wet regions is lower than over the dry regions.
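
    The categorical skill scores referred to above (POD and FAR) come from a standard 2x2 contingency table of hits, misses and false alarms at a chosen rain/no-rain threshold. A minimal version (Python; the threshold and the gauge/TMI arrays are illustrative only):

        import numpy as np

        def pod_far(gauge, satellite, threshold=0.1):
            """Probability of detection and false alarm ratio for a rain/no-rain threshold (mm/h)."""
            obs_rain = np.asarray(gauge) >= threshold
            est_rain = np.asarray(satellite) >= threshold
            hits = np.sum(obs_rain & est_rain)
            misses = np.sum(obs_rain & ~est_rain)
            false_alarms = np.sum(~obs_rain & est_rain)
            pod = hits / (hits + misses) if hits + misses else np.nan
            far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
            return pod, far

        gauge = np.array([0.0, 0.5, 2.0, 0.0, 4.0, 0.0, 1.2])
        tmi   = np.array([0.2, 0.7, 0.0, 0.0, 3.1, 0.4, 1.5])
        pod, far = pod_far(gauge, tmi)
        print(f"POD = {pod:.2f}, FAR = {far:.2f}")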

  8. Theoretical and experimental signal-to-noise ratio assessment in new direction sensing continuous-wave Doppler lidar

    NASA Astrophysics Data System (ADS)

    Tegtmeier Pedersen, A.; Abari, C. F.; Mann, J.; Mikkelsen, T.

    2014-06-01

    A new direction sensing continuous-wave Doppler lidar based on an image-reject homodyne receiver has recently been demonstrated at DTU Wind Energy, Technical University of Denmark. In this contribution we analyse the signal-to-noise ratio resulting from two different data processing methods, both of which provide the direction sensing capability. It is found that using the auto spectrum of the complex signal to determine the wind speed leads to a signal-to-noise ratio equivalent to that of a standard self-heterodyne receiver. Using the imaginary part of the cross spectrum to estimate the Doppler shift has the benefit of a zero-mean background spectrum, but comes at the expense of a decrease in the signal-to-noise ratio by a factor of √2.

  9. Comparison and analysis of empirical equations for soil heat flux for different cropping systems and irrigation methods

    USGS Publications Warehouse

    Irmak, A.; Singh, Ramesh K.; Walter-Shea, Elizabeth; Verma, S.B.; Suyker, A.E.

    2011-01-01

    We evaluated the performance of four models for estimating soil heat flux density (G) in maize (Zea mays L.) and soybean (Glycine max L.) fields under different irrigation methods (center-pivot irrigated fields at Mead, Nebraska, and subsurface drip irrigated field at Clay Center, Nebraska) and rainfed conditions at Mead. The model estimates were compared against measurements made during growing seasons of 2003, 2004, and 2005 at Mead and during 2005, 2006, and 2007 at Clay Center. We observed a strong relationship between the G and net radiation (Rn) ratio (G/Rn) and the normalized difference vegetation index (NDVI). When a significant portion of the ground was bare soil, G/Rn ranged from 0.15 to 0.30 and decreased with increasing NDVI. In contrast to the NDVI progression, the G/Rn ratio decreased with crop growth and development. The G/Rn ratio for subsurface drip irrigated crops was smaller than for the center-pivot irrigated crops. The seasonal average G was 13.1%, 15.2%, 10.9%, and 12.8% of Rn for irrigated maize, rainfed maize, irrigated soybean, and rainfed soybean, respectively. Statistical analyses of the performance of the four models showed a wide range of variation in G estimation. The root mean square error (RMSE) of predictions ranged from 15 to 81.3 W m-2. Based on the wide range of RMSE, it is recommended that local calibration of the models should be carried out for remote estimation of soil heat flux.

  10. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    NASA Astrophysics Data System (ADS)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging Photoplethysmography (IPPG) is an emerging technique for extracting human vital signs from video recordings. With advantages such as non-contact measurement, low cost and easy operation, IPPG has become a research hot spot in biomedicine. However, noise from non-microarterial areas cannot be removed, because the micro-arterial distribution is uneven and the signal strength differs between regions, which results in a low signal-to-noise ratio of the IPPG signal and low heart-rate accuracy. In this paper, we propose a method for improving the signal-to-noise ratio of camera-based IPPG signals by combining the sub-regions of the face with a weighted average. First, we obtain the regions of interest (ROI) of the subject's face from the camera images. Second, each region of interest is tracked and feature-matched in every frame of the video, and each tracked face region is divided into 60x60 pixel blocks. Third, the weight of the PPG signal from each sub-region is calculated based on that sub-region's signal-to-noise ratio. Finally, the IPPG signals from all tracked ROIs are combined by weighted averaging. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart rate measurement.
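
    The weighted-average step can be illustrated as follows: each sub-region contributes its PPG trace weighted by an estimate of its signal-to-noise ratio, so regions dominated by noise are down-weighted. The in-band/out-of-band SNR definition and all signal parameters below are simplifying assumptions for illustration, not the authors' exact pipeline:

        import numpy as np

        def snr_weighted_ppg(region_signals, fs, band=(0.7, 3.0)):
            """Combine per-region PPG traces with weights proportional to in-band SNR."""
            region_signals = np.asarray(region_signals)       # shape (n_regions, n_samples)
            freqs = np.fft.rfftfreq(region_signals.shape[1], d=1.0 / fs)
            in_band = (freqs >= band[0]) & (freqs <= band[1])
            spectra = np.abs(np.fft.rfft(region_signals, axis=1)) ** 2
            snr = spectra[:, in_band].sum(axis=1) / np.maximum(spectra[:, ~in_band].sum(axis=1), 1e-12)
            weights = snr / snr.sum()
            return weights @ region_signals, weights

        # toy example: 3 facial sub-regions, 10 s at 30 fps, pulse at 1.2 Hz buried in noise
        fs = 30.0
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(7)
        clean = np.sin(2 * np.pi * 1.2 * t)
        regions = [clean + 0.3 * rng.standard_normal(t.size),
                   clean + 1.0 * rng.standard_normal(t.size),
                   0.1 * clean + 2.0 * rng.standard_normal(t.size)]
        combined, w = snr_weighted_ppg(regions, fs)
        print("region weights:", np.round(w, 2))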

  11. Estimation of static parameters based on dynamical and physical properties in limestone rocks

    NASA Astrophysics Data System (ADS)

    Ghafoori, Mohammad; Rastegarnia, Ahmad; Lashkaripour, Gholam Reza

    2018-01-01

    Due to the importance of uniaxial compressive strength (UCS), static Young's modulus (ES) and shear wave velocity, it is worthwhile to predict these parameters from empirical relations suggested for other formations with the same lithology. This paper studies the physical, mechanical and dynamical properties of limestone rocks using the results of laboratory tests carried out on 60 core specimens of the Jahrum and Asmari formations. The core specimens were obtained from the site of the Bazoft dam, a double-curvature arch dam for hydroelectric supply in Iran. The dynamic Young's modulus (Ed) and dynamic Poisson ratio were calculated using existing relations. Some empirical relations were presented to estimate uniaxial compressive strength, as well as static Young's modulus and shear wave velocity (Vs). Results showed that static parameters such as uniaxial compressive strength and static Young's modulus correlated poorly with water absorption. It was also found that the uniaxial compressive strength and static Young's modulus had high correlations with compressional wave velocity and dynamic Young's modulus, respectively. The dynamic Young's modulus was 5 times larger than the static Young's modulus. Further, the dynamic Poisson ratio was 1.3 times larger than the static Poisson ratio. The relationship between shear wave velocity (Vs) and compressional wave velocity (Vp) was a positive power law with a high correlation coefficient. Prediction of uniaxial compressive strength based on Vp was better than that based on Vs. Generally, both UCS and static Young's modulus (ES) had good correlation with Ed.

  12. Evaluation of mean transit time of aerosols from the area of origin to the Arctic with 210Pb/210Po daily monitoring data.

    PubMed

    Zhang, Weihua; Sadi, Baki; Rinaldo, Christopher; Chen, Jing; Spencer, Norman; Ungar, Kurt

    2018-08-01

    In this study, the activity concentrations of 210Pb and 210Po on the 22 daily air filter samples collected at the CTBT Yellowknife station from September 2015 to April 2016 were analysed. To estimate the time scale of long-range atmospheric transport of 210Pb-bearing aerosol in the Arctic during winter, the mean transit time of 210Pb-bearing aerosol from its origin was determined based on the 210Po/210Pb activity ratios and the parent-progeny decay/ingrowth equation. The 210Po/210Pb activity ratios varied between 0.06 and 0.21 with a median value of 0.11. The aerosol mean transit time based on the 210Po/210Pb activity ratio suggests a longer mean transit time of 210Pb aerosols in winter (12 d) than in autumn (3.7 d) and spring (2.9 d). Four years of 210Pb and 212Pb monitoring results and meteorological conditions at the Yellowknife station indicate that the 212Pb activity is mostly of local origin, and that 210Pb aerosols in wintertime come mainly from outside the Arctic region, in common with other pollutants and sources contributing to the Arctic. The activity concentration ratios of 210Pb and 212Pb have a relatively constant value in summer, with a significant peak observed in winter, centered in the month of February. Comparison of the 210Pb/212Pb activity ratios with the estimated mean 210Pb transit times shows that the mean aerosol transit times reflect the atmospheric transport characteristics and can be used as a radio-chronometer for the transport of air masses to the Arctic region. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  13. Novel methods to estimate the enantiomeric ratio and the kinetic parameters of enantiospecific enzymatic reactions.

    PubMed

    Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.

    2001-03-08

    The Enantiomeric Ratio (E) of an enzyme acting as a specific catalyst in the resolution of enantiomers is an important parameter in the quantitative description of these chiral resolution processes. In the present work, two novel methods, hereafter called Methods I and II, for estimating E and the kinetic parameters Km and Vm of the enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental data" and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers is available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of the estimates. The theoretical predictions were experimentally confirmed using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with precision and were not significantly different from those obtained experimentally by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.

  14. Per tree estimates with n-tree distance sampling: an application to increment core data

    Treesearch

    Thomas B. Lynch; Robert F. Wittwer

    2002-01-01

    Per tree estimates using the n trees nearest a point can be obtained by using a ratio of per unit area estimates from n-tree distance sampling. This ratio was used to estimate average age by d.b.h. classes for cottonwood trees (Populus deltoides Bartr. ex Marsh.) on the Cimarron National Grassland. Increment...

  15. [MRI-Based Ratio of Fetal Lung to Body Volume as New Prognostic Marker for Chronic Lung Disease in Patients with Congenital Diaphragmatic Hernia].

    PubMed

    Winkler, Melissa M; Weis, Meike; Henzler, Claudia; Weiß, Christel; Kehl, Sven; Schoenberg, Stefan O; Neff, Wolfgang; Schaible, Thomas

    2017-03-01

    Background Our aim was to evaluate the prognostic value of magnetic resonance imaging (MRI)-based ratio of fetal lung volume (FLV) to fetal body volume (FBV) as a marker for development of chronic lung disease (CLD) in fetuses with congenital diaphragmatic hernia (CDH). Patients and Methods FLV and FBV were measured and the individual FLV/FBV ratio was calculated in 132 fetuses. Diagnosis of CLD was established following prespecified criteria and graded into mild/moderate/severe if present. Logistic regression analysis was used to calculate the probability of postnatal development of CLD in dependence of the FLV/FBV ratio. Receiver operating characteristic curves were analysed by calculating the area under the curve to evaluate the prognostic accuracy of this marker. Results 61 of 132 fetuses developed CLD (46.21%). The FLV/FBV ratio was significantly lower in fetuses with CLD (p=0.0008; AUC 0.743). Development of CLD was significantly associated with thoracic herniation of liver parenchyma (p<0.0001), requirement of extracorporal membrane oxygenation (ECMO) (p<0.0001) and gestational age at delivery (p=0.0052). Conclusion The MRI-based ratio of FLV to FBV is a highly valuable prenatal parameter for development of CLD. The ratio is helpful for early therapeutic decisions by estimating the probability to develop CLD. Perinatally, gestational age at delivery and ECMO requirement are useful additional parameters to further improve prediction of CLD. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Maternal and perinatal mortality by place of delivery in sub-Saharan Africa: a meta-analysis of population-based cohort studies.

    PubMed

    Chinkhumba, Jobiba; De Allegri, Manuela; Muula, Adamson S; Robberstad, Bjarne

    2014-09-28

    Facility-based delivery has gained traction as a key strategy for reducing maternal and perinatal mortality in developing countries. However, robust evidence of impact of place of delivery on maternal and perinatal mortality is lacking. We aimed to estimate the risk of maternal and perinatal mortality by place of delivery in sub-Saharan Africa. We conducted a systematic review of population-based cohort studies reporting on risk of maternal or perinatal mortality at the individual level by place of delivery in sub-Saharan Africa. Newcastle-Ottawa Scale was used to assess study quality. Outcomes were summarized in pooled analyses using fixed and random effects models. We calculated attributable risk percentage reduction in mortality to estimate exposure effect. We report mortality ratios, crude odds ratios and associated 95% confidence intervals. We found 9 population-based cohort studies: 6 reporting on perinatal and 3 on maternal mortality. The mean study quality score was 10 out of 15 points. Control for confounders varied between the studies. A total of 36,772 pregnancy episodes were included in the analyses. Overall, perinatal mortality is 21% higher for home compared to facility-based deliveries, but the difference is only significant when produced with a fixed effects model (OR 1.21, 95% CI: 1.02-1.46) and not when produced by a random effects model (OR 1.21, 95% CI: 0.79-1.84). Under best settings, up to 14 perinatal deaths might be averted per 1000 births if the women delivered at facilities instead of homes. We found significantly increased risk of maternal mortality for facility-based compared to home deliveries (OR 2.29, 95% CI: 1.58-3.31), precluding estimates of attributable risk fraction. Evaluating the impact of facility-based delivery strategy on maternal and perinatal mortality using population-based studies is complicated by selection bias and poor control of confounders. Studies that pool data at an individual level may overcome some of these problems and provide better estimates of relative effectiveness of place of delivery in the region.

  17. Transfer Entropy as a Log-Likelihood Ratio

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.

  18. Transfer entropy as a log-likelihood ratio.

    PubMed

    Barnett, Lionel; Bossomaier, Terry

    2012-09-28

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.

  19. A Feature Selection Method Based on Fisher's Discriminant Ratio for Text Sentiment Classification

    NASA Astrophysics Data System (ADS)

    Wang, Suge; Li, Deyu; Wei, Yingjie; Li, Hongxia

    With the rapid growth of e-commerce, product reviews on the Web have become an important information source for customers' decision making when they intend to buy a product. Because there are often too many reviews for customers to read through, automatically classifying them into sentiment orientation categories (i.e., positive/negative) has become a research problem. In this paper, an effective feature selection method based on Fisher's discriminant ratio is proposed for product review sentiment classification. To validate the proposed method, we compared it with methods based on information gain and mutual information, with a support vector machine adopted as the classifier. Six subexperiments were conducted by combining the different feature selection methods with two kinds of candidate feature sets. On a corpus of 1006 car review documents, the experimental results indicate that Fisher's discriminant ratio based on word frequency estimation performs best, with an F value of 83.3%, when the candidate features are words that appear in both positive and negative texts.
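    A small sketch of per-feature scoring with Fisher's discriminant ratio, (mu1 - mu2)^2 / (var1 + var2), on toy term-frequency data (the data and the number of selected features are assumptions for illustration):

      import numpy as np

      def fisher_discriminant_ratio(X_pos, X_neg):
          """Per-feature Fisher ratio (mu1-mu2)^2 / (var1+var2) for two classes."""
          mu1, mu2 = X_pos.mean(axis=0), X_neg.mean(axis=0)
          v1, v2 = X_pos.var(axis=0), X_neg.var(axis=0)
          return (mu1 - mu2) ** 2 / (v1 + v2 + 1e-12)   # small epsilon avoids 0/0

      # Toy term-frequency matrices: rows = documents, columns = candidate word features.
      rng = np.random.default_rng(1)
      X_pos = rng.poisson(lam=[3.0, 1.0, 1.0, 0.5], size=(50, 4))
      X_neg = rng.poisson(lam=[1.0, 1.0, 3.0, 0.5], size=(50, 4))

      scores = fisher_discriminant_ratio(X_pos, X_neg)
      top_k = np.argsort(scores)[::-1][:2]              # keep the k highest-scoring features
      print("Fisher scores:", scores, "selected feature indices:", top_k)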

  20. Application of Kalman filter in frequency offset estimation for coherent optical quadrature phase-shift keying communication system

    NASA Astrophysics Data System (ADS)

    Jiang, Wen; Yang, Yanfu; Zhang, Qun; Sun, Yunxu; Zhong, Kangping; Zhou, Xian; Yao, Yong

    2016-09-01

    Frequency offset estimation (FOE) schemes based on the Kalman filter are proposed and investigated in detail via numerical simulation and experiment. The schemes consist of a modulation-removal stage and a Kalman filter estimation stage. In the second stage, the Kalman filters track either the differential angles or the differential data between two successive symbols. Several implementations of the proposed FOE scheme are compared by employing different modulation-removal methods and two Kalman algorithms. The optimal FOE implementation is suggested for different operating conditions, including the optical signal-to-noise ratio and the number of available data symbols.
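    A minimal sketch of the second stage only: a scalar Kalman filter tracking the per-symbol differential phase produced by a carrier frequency offset, assuming the modulation has already been removed. The symbol rate, offset and noise levels are hypothetical, not the paper's settings:

      import numpy as np

      rng = np.random.default_rng(2)
      n_sym, baud, f_off = 2000, 28e9, 100e6             # symbols, symbol rate, true offset
      true_dphi = 2 * np.pi * f_off / baud               # phase increment per symbol
      z = true_dphi + 0.02 * rng.standard_normal(n_sym)  # noisy differential-angle measurements

      x, p = 0.0, 1.0            # state estimate (differential phase) and its variance
      q, r = 1e-8, 0.02 ** 2     # process and measurement noise variances
      for zk in z:
          p += q                 # predict (random-walk state)
          k = p / (p + r)        # Kalman gain
          x += k * (zk - x)      # update with the innovation
          p *= (1 - k)
      print("estimated offset: %.1f MHz (true %.1f MHz)"
            % (x * baud / (2 * np.pi) / 1e6, f_off / 1e6))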

  1. Estimates of late middle Eocene pCO2 based on stomatal density of modern and fossil Nageia leaves

    NASA Astrophysics Data System (ADS)

    Liu, X. Y.; Gao, Q.; Han, M.; Jin, J. H.

    2016-02-01

    Atmospheric pCO2 concentrations have been estimated for intervals of the Eocene using various models and proxy information. Here we reconstruct late middle Eocene (42.0-38.5 Ma) pCO2 based on the fossil leaves of Nageia maomingensis Jin et Liu collected from the Maoming Basin, Guangdong Province, China. We first determine relationships between atmospheric pCO2 concentrations, stomatal density (SD) and stomatal index (SI) using "modern" leaves of N. motleyi (Parl.) De Laub, the nearest living species to the Eocene fossils. This work indicates that the SD inversely responds to pCO2, while SI has almost no relationship with pCO2. Eocene pCO2 concentrations can be reconstructed based on a regression approach and the stomatal ratio method by using the SD. The first approach gives a pCO2 of 351.9 ± 6.6 ppmv, whereas the one based on stomatal ratio gives a pCO2 of 537.5 ± 56.5 ppmv. Here, we explored the potential of N. maomingensis in pCO2 reconstruction and obtained different results according to different methods, providing a new insight for the reconstruction of paleoclimate and paleoenvironment in conifers.

  2. A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.

    PubMed

    Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying

    2018-06-13

    The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.

  3. Design of a fuzzy differential evolution algorithm to predict non-deposition sediment transport

    NASA Astrophysics Data System (ADS)

    Ebtehaj, Isa; Bonakdari, Hossein

    2017-12-01

    Since the flow entering a sewer contains solid matter, deposition at the bottom of the channel is inevitable. It is difficult to understand the complex, three-dimensional mechanism of sediment transport in sewer pipelines. Therefore, a method to estimate the limiting velocity is necessary for optimal designs. Due to the inability of gradient-based algorithms to train Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for non-deposition sediment transport prediction, a new hybrid ANFIS method based on a differential evolutionary algorithm (ANFIS-DE) is developed. The training and testing performance of ANFIS-DE is evaluated using a wide range of dimensionless parameters gathered from the literature. The input combination used to estimate the densimetric Froude number (Fr) includes the volumetric sediment concentration (C_V), the ratio of median particle diameter to hydraulic radius (d/R), the ratio of median particle diameter to pipe diameter (d/D) and the overall friction factor of sediment (λ_s). The testing results are compared with the ANFIS model and regression-based equation results. The ANFIS-DE technique predicted sediment transport at the limit of deposition with lower root mean square error (RMSE = 0.323) and mean absolute percentage error (MAPE = 0.065) and higher accuracy (R² = 0.965) than the ANFIS model and regression-based equations.

  4. Q estimation of seismic data using the generalized S-transform

    NASA Astrophysics Data System (ADS)

    Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming

    2016-12-01

    Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. The reservoir pore is one of the main factors that affect the value of Q. Especially, when pore space is filled with oil or gas, the rock usually exhibits a relative low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as continuous wavelet transform (CWT) and S-transform (ST) contaminate the amplitude spectra because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q-value results estimated from field data acquired in western China show reasonable comparison with oil-producing well location.
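    A minimal sketch of the classical spectral-ratio Q estimate that the paper builds on: the natural logarithm of the spectral ratio between two receiver levels is linear in frequency with slope -pi*dt/Q. The synthetic spectra, travel time and band are assumptions for illustration:

      import numpy as np

      f = np.linspace(10.0, 80.0, 50)             # usable frequency band (Hz)
      dt = 0.4                                    # travel time between the two levels (s)
      Q_true = 60.0
      A1 = np.exp(-0.02 * f)                      # reference amplitude spectrum (arbitrary shape)
      A2 = A1 * 0.8 * np.exp(-np.pi * f * dt / Q_true)   # attenuated + frequency-independent scaling

      slope, intercept = np.polyfit(f, np.log(A2 / A1), 1)  # ln ratio is linear in f
      Q_est = -np.pi * dt / slope
      print("estimated Q =", round(Q_est, 1))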

  5. An audit strategy for progression-free survival

    PubMed Central

    Dodd, Lori E.; Korn, Edward L.; Freidlin, Boris; Gray, Robert; Bhattacharya, Suman

    2010-01-01

    Summary In randomized clinical trials, the use of potentially subjective endpoints has led to frequent use of blinded independent central review (BICR) and event adjudication committees to reduce possible bias in treatment effect estimators based on local evaluations (LE). In oncology trials, progression-free survival (PFS) is one such endpoint. PFS requires image interpretation to determine whether a patient’s cancer has progressed, and BICR has been advocated to reduce the potential for endpoints to be biased by knowledge of treatment assignment. There is current debate, however, about the value of such reviews with time-to-event outcomes like PFS. We propose a BICR audit strategy as an alternative to a complete-case BICR to provide assurance of the presence of a treatment effect. We develop an auxiliary-variable estimator of the log-hazard ratio that is more efficient than simply using the audited (i.e., sampled) BICR data for estimation. Our estimator incorporates information from the LE on all the cases and the audited BICR cases, and is an asymptotically unbiased estimator of the log-hazard ratio from BICR. The estimator offers considerable efficiency gains that improve as the correlation between LE and BICR increases. A two-stage auditing strategy is also proposed and evaluated through simulation studies. The method is applied retrospectively to a large oncology trial that had a complete-case BICR, showing the potential for efficiency improvements. PMID:21210772

  6. Cost effectiveness of a general practice chronic disease management plan for coronary heart disease in Australia.

    PubMed

    Chew, Derek P; Carter, Robert; Rankin, Bree; Boyden, Andrew; Egan, Helen

    2010-05-01

    The cost effectiveness of a general practice-based program for managing coronary heart disease (CHD) patients in Australia remains uncertain. We have explored this through an economic model. A secondary prevention program based on initial clinical assessment and 3 monthly review, optimising of pharmacotherapies and lifestyle modification, supported by a disease registry and financial incentives for quality of care and outcomes achieved was assessed in terms of incremental cost effectiveness ratio (ICER), in Australian dollars per disability adjusted life year (DALY) prevented. Based on 2006 estimates, 263 487 DALYs were attributable to CHD in Australia. The proposed program would add $115 650 000 to the annual national heath expenditure. Using an estimated 15% reduction in death and disability and a 40% estimated program uptake, the program's ICER is $8081 per DALY prevented. With more conservative estimates of effectiveness and uptake, estimates of up to $38 316 per DALY are observed in sensitivity analysis. Although innovation in CHD management promises improved future patient outcomes, many therapies and strategies proven to reduce morbidity and mortality are available today. A general practice-based program for the optimal application of current therapies is likely to be cost-effective and provide substantial and sustainable benefits to the Australian community.
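    A back-of-envelope sketch of the cost per DALY prevented using the figures quoted above. The published ICER of $8081/DALY comes from the full economic model (cost offsets, discounting, etc.), so this simplified arithmetic is not expected to reproduce it exactly:

      # Simplified ICER calculation (cost per DALY prevented); inputs as quoted above.
      program_cost = 115_650_000          # added annual expenditure (AUD)
      daly_burden  = 263_487              # DALYs attributable to CHD (2006 estimate)
      effectiveness, uptake = 0.15, 0.40  # assumed reduction in death/disability, program uptake

      dalys_averted = daly_burden * effectiveness * uptake
      icer = program_cost / dalys_averted
      print(f"DALYs averted ~ {dalys_averted:,.0f}; ICER ~ ${icer:,.0f} per DALY")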

  7. A Taxonomic Reduced-Space Pollen Model for Paleoclimate Reconstruction

    NASA Astrophysics Data System (ADS)

    Wahl, E. R.; Schoelzel, C.

    2010-12-01

    Paleoenvironmental reconstruction from fossil pollen often attempts to take advantage of the rich taxonomic diversity in such data. Here, a taxonomically "reduced-space" reconstruction model is explored that would be parsimonious in introducing parameters needing to be estimated within a Bayesian Hierarchical Modeling context. This work involves a refinement of the traditional pollen ratio method. This method is useful when one (or a few) dominant pollen type(s) in a region have a strong positive correlation with a climate variable of interest and another (or a few) dominant pollen type(s) have a strong negative correlation. When, e.g., counts of pollen taxa a and b (r > 0) are combined with pollen types c and d (r < 0) to form ratios of the form (a + b) / (a + b + c + d), an appropriate estimation form is the binomial logistic generalized linear model (GLM). The GLM can readily model this relationship in the forward form, pollen = g(climate), which is more physically realistic than inverse models often used in paleoclimate reconstruction [climate = f(pollen)]. The specification of the model is: rnum ~ Bin(n, p), where E(r|T) = p = exp(η)/[1+exp(η)], and η = α + βT; r is the pollen ratio formed as above, rnum is the ratio numerator, n is the ratio denominator (i.e., the sum of pollen counts), the denominator-specific count is (n - rnum), and T is the temperature at each site corresponding to a specific value of r. Ecological and empirical screening identified the model (Spruce+Birch) / (Spruce+Birch+Oak+Hickory) for use in temperate eastern N. America. α and β were estimated using both "traditional" and Bayesian GLM algorithms (in R). Although it includes only four pollen types, the ratio model yields more explained variation (~80%) in the pollen-temperature relationship of the study region than a 64-taxon modern analog technique (MAT). Thus, the new pollen ratio method represents an information-rich, reduced-space data model that can be efficiently employed in a BHM framework. The ratio model can directly reconstruct past temperature by solving the GLM equations for T as a function of α, β, and E(r|T): T = {ln[E(r|T)/{1-E(r|T)}]-α}/β. To enable use in paleoreconstruction, the observed r values from fossil pollen data are, by assumption, treated as unbiased estimators of the true r value at each time sampled, which can be substituted for E(r|T). Uncertainty in this reconstruction is systematically evaluated in two parts: 1) the observed r values and their corresponding n values are input as parameters into the binomial distribution, Monte Carlo random pollen count draws are made, and a new ratio value is determined for each iteration; and 2) in the "traditional" GLM the estimated SEs for α and β are used with the α and β EV estimates to yield Monte Carlo random draws for each binomial draw (assuming α and β are Gaussian); in the Bayesian GLM random draws for α and β are taken directly from their estimated posterior distribution. Both methods yield nearly identical reconstructions from varved lakes in Wisconsin where the model has been tested; slightly narrower uncertainty ranges are produced by the Bayesian model. The Little Ice Age is readily identified. Pine:Oak and Fir:Oak versions of the model used in S. California show differences from MAT-based reconstructions.
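    A minimal sketch of the forward binomial logistic GLM and its inversion for temperature, using synthetic calibration data in place of modern pollen counts (the counts, temperatures and coefficients below are assumptions, and statsmodels stands in for the R fits described above):

      import numpy as np
      import statsmodels.api as sm

      # Forward model: r_num ~ Bin(n, p), logit(p) = alpha + beta*T.
      rng = np.random.default_rng(3)
      T = rng.uniform(2.0, 20.0, 80)                        # site mean temperatures (degC)
      n = rng.integers(200, 400, 80)                        # total pollen counts (a+b+c+d)
      alpha_true, beta_true = 2.0, -0.25                    # cooler sites -> higher ratio
      p = 1.0 / (1.0 + np.exp(-(alpha_true + beta_true * T)))
      r_num = rng.binomial(n, p)                            # numerator counts (a+b)

      X = sm.add_constant(T)
      fit = sm.GLM(np.column_stack([r_num, n - r_num]), X,
                   family=sm.families.Binomial()).fit()
      alpha, beta = fit.params

      # Invert the GLM to reconstruct temperature from an observed fossil ratio r:
      r_obs = 0.35
      T_rec = (np.log(r_obs / (1.0 - r_obs)) - alpha) / beta
      print(fit.params, "reconstructed T =", round(T_rec, 1), "degC")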

  8. An international database of radionuclide concentration ratios for wildlife: development and uses.

    PubMed

    Copplestone, D; Beresford, N A; Brown, J E; Yankovich, T

    2013-12-01

    A key element of most systems for assessing the impact of radionuclides on the environment is a means to estimate the transfer of radionuclides to organisms. To facilitate this, an international wildlife transfer database has been developed to provide an online, searchable compilation of transfer parameters in the form of equilibrium-based whole-organism to media concentration ratios. This paper describes the derivation of the wildlife transfer database, the key data sources it contains and highlights the applications for the data. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. The fatigue behavior of composite laminates under various mean stresses

    NASA Technical Reports Server (NTRS)

    Rotem, A.

    1991-01-01

    A method is developed for predicting the S-N curve of a composite laminate which is subjected to an arbitrary stress ratio, R (minimum stress/maximum stress). The method is based on the measuring of the S-N behavior of two distinct cases, tension-tension and compression-compression fatigue loadings. Using these parameters, expressions are formulated that estimate the fatigue behavior under any stress ratio loading. Experimental results from the testing of graphite/epoxy laminates, with various structures, are compared with the predictions and show good agreement.

  10. Two smart spectrophotometric methods for the simultaneous estimation of Simvastatin and Ezetimibe in combined dosage form

    NASA Astrophysics Data System (ADS)

    Magdy, Nancy; Ayad, Miriam F.

    2015-02-01

    Two simple, accurate, precise, sensitive and economic spectrophotometric methods were developed for the simultaneous determination of Simvastatin and Ezetimibe in fixed-dose combination products without prior separation. The first is a chemometrics-assisted ratio spectra derivative method that uses moving-window polynomial least-squares fitting (Savitzky-Golay filters). The second is based on a simple modification of the ratio subtraction method. The suggested methods were validated according to USP guidelines and can be applied for routine quality control testing.
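    A sketch of the general ratio-spectra derivative idea (not the paper's validated procedure): dividing the mixture spectrum by one component's spectrum and differentiating with a Savitzky-Golay filter removes that component's constant contribution. The Gaussian band shapes and concentrations are illustrative assumptions:

      import numpy as np
      from scipy.signal import savgol_filter

      wl = np.linspace(220.0, 320.0, 501)                                 # wavelengths (nm)
      spec_a = np.exp(-0.5 * ((wl - 245.0) / 8.0) ** 2)                   # component A, unit conc.
      spec_b = np.exp(-0.5 * ((wl - 270.0) / 10.0) ** 2)                  # component B, unit conc.
      mixture = 3.0 * spec_a + 5.0 * spec_b                               # hypothetical sample

      ratio = mixture / spec_b                                            # divide by B's spectrum
      d_ratio = savgol_filter(ratio, window_length=21, polyorder=3, deriv=1)
      d_pure  = savgol_filter(3.0 * spec_a / spec_b, window_length=21, polyorder=3, deriv=1)
      # The derivative of the constant term (B's contribution) vanishes, so the derivative
      # ratio spectrum depends only on A's concentration.
      print(np.allclose(d_ratio, d_pure))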

  11. Analysis of nursing home use and bed supply: Wisconsin, 1983.

    PubMed Central

    Nyman, J A

    1989-01-01

    This article presents evidence that in 1983 excess demand was a prevailing characteristic of nursing home care markets in Wisconsin, a state with one of the highest bed to elderly population ratios. It further shows that excess demand is the source of at least three types of error in use-based estimates of the determinants of the need for nursing home care. First, if excess demand is present, estimates of the determinants of Medicaid use may simply represent a crowding out of Medicaid patients, driven by the determinants of private use. As a result, factors associated with greater overall need in an area will be correlated with fewer Medicaid patients in nursing homes, ceteris paribus. Second, estimates of the substitutability of home health care for nursing home care may be misleadingly insignificant if they are based on the bed supply-constrained behavior of Medicaid-eligible subjects. Third, because the determinants of bed supply become the determinants of overall use under excess-demand conditions, the determinants of use will reflect, to some extent, the nursing home's desire for profits. Because profitability considerations are reflected in use based estimates of need, these estimates are likely to be misleading. PMID:2681081

  12. Classifying low flow hydrological regimes at a regional scale

    NASA Astrophysics Data System (ADS)

    Kirkby, M. J.; Gallart, F.; Kjeldsen, T. R.; Irvine, B. J.; Froebrich, J.; Lo Porto, A.; de Girolamo, A.; Mirage Team

    2011-12-01

    The paper uses a simple water balance model that partitions the precipitation between actual evapotranspiration, quick flow and delayed flow, and has sufficient complexity to capture the essence of climate and vegetation controls on this partitioning. Using this model, monthly flow duration curves have been constructed from climate data across Europe to address the relative frequency of ecologically critical low flow stages in semi-arid rivers, when flow commonly persists only in disconnected pools in the river bed. The hydrological model is based on a dynamic partitioning of precipitation to estimate water available for evapotranspiration and plant growth and for residual runoff. The duration curve for monthly flows has then been analysed to give an estimate of bankfull flow based on recurrence interval. Arguing from observed ratios of cross-sectional areas at flood and low flows, hydraulic geometry suggests that disconnected flow under "pool" conditions is approximately 0.1% of bankfull flow. Flow duration curves define a measure of bankfull discharge on the basis of frequency. The corresponding frequency for pools is then read from the duration curve, using this (0.1%) ratio to estimate pool discharge from bank full discharge. The flow duration curve then provides an estimate of the frequency of poorly connected pool conditions, corresponding to this discharge, that constrain survival of river-dwelling arthropods and fish. The methodology has here been applied across Europe at 15 km resolution, and the potential is demonstrated for applying the methodology under alternative climatic scenarios.
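    A minimal sketch of the duration-curve step described above: take bankfull flow from an assumed exceedance frequency, set the pool threshold at 0.1% of bankfull, and read the frequency of pool conditions off the monthly flow duration curve. The synthetic flows and the 5% bankfull exceedance are assumptions:

      import numpy as np

      rng = np.random.default_rng(4)
      monthly_q = rng.lognormal(mean=1.0, sigma=1.5, size=600)     # synthetic monthly flows

      q_sorted = np.sort(monthly_q)[::-1]                          # descending flows
      exceed_prob = np.arange(1, len(q_sorted) + 1) / (len(q_sorted) + 1.0)

      bankfull = np.interp(0.05, exceed_prob, q_sorted)            # flow exceeded 5% of months (assumed)
      pool_q = 0.001 * bankfull                                    # 0.1% of bankfull (ratio quoted above)
      freq_pool = np.mean(monthly_q <= pool_q)                     # fraction of months at/below pool flow
      print(f"bankfull={bankfull:.2f}, pool threshold={pool_q:.4f}, months at/below pool: {freq_pool:.1%}")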

  13. Age estimation by pulp-to-tooth area ratio using cone-beam computed tomography: A preliminary analysis

    PubMed Central

    Rai, Arpita; Acharya, Ashith B.; Naikmasur, Venkatesh G.

    2016-01-01

    Background: Age estimation of living or deceased individuals is an important aspect of forensic sciences. Conventionally, pulp-to-tooth area ratio (PTR) measured from periapical radiographs have been utilized as a nondestructive method of age estimation. Cone-beam computed tomography (CBCT) is a new method to acquire three-dimensional images of the teeth in living individuals. Aims: The present study investigated age estimation based on PTR of the maxillary canines measured in three planes obtained from CBCT image data. Settings and Design: Sixty subjects aged 20–85 years were included in the study. Materials and Methods: For each tooth, mid-sagittal, mid-coronal, and three axial sections—cementoenamel junction (CEJ), one-fourth root level from CEJ, and mid-root—were assessed. PTR was calculated using AutoCAD software after outlining the pulp and tooth. Statistical Analysis Used: All statistical analyses were performed using an SPSS 17.0 software program. Results and Conclusions: Linear regression analysis showed that only PTR in axial plane at CEJ had significant age correlation (r = 0.32; P < 0.05). This is probably because of clearer demarcation of pulp and tooth outline at this level. PMID:28123269

  14. Genetic variants in loci 1p13 and 9p21 and fatal coronary heart disease in a Norwegian case-cohort study.

    PubMed

    Jansen, Mona Dverdal; Knudsen, Gun Peggy; Myhre, Ronny; Høiseth, Gudrun; Mørland, Jørg; Næss, Øyvind; Tambs, Kristian; Magnus, Per

    2014-05-01

    Single nucleotide polymorphisms (SNPs) in loci 1p13 and 9p21 have previously been found to be associated with incident coronary heart disease (CHD). This study aimed to investigate whether these SNPs show associations with fatal CHD in a population-based cohort study after adjustment for socioeconomic- and lifestyle-related CHD risk factors not commonly included in genetic association studies. Using the population-based Cohort of Norway (CONOR), a nested case-cohort study was set up and DNA from 2,953 subjects (829 cases and 2,124 non-cases) were genotyped. The association with fatal CHD was estimated for four SNPs, three from locus 1p13 and one from locus 9p21. Multivariable Cox regression was used to estimate unstratified and gender-stratified hazard ratios while adjusting for major CHD risk factors. The associations between three SNPs from locus 1p13 and non-HDL cholesterol levels were also estimated. Men homozygous for the risk alleles on rs1333049 (9p21) and rs14000 (1p13) were found to have significantly increased hazard ratios in crude and adjusted models, and the hazard ratios remained statistically significant when both genders were analyzed together. Adjustment for additional socioeconomic- and lifestyle-related CHD risk factors influenced the association estimates only slightly. No significant associations were observed between the other two SNPs in loci 1p13 (rs599839 and rs646776) and CHD mortality in either gender. Both rs599839 and rs646776 showed significant, gradual increases in non-HDL cholesterol levels with increasing number of risk alleles. This study confirms the association between 9p21 (rs1333049) and fatal CHD in a Norwegian population-based cohort. The effect was not influenced by several socioeconomic- and lifestyle-related risk factors. Our results show that 1p13 (rs14000) may also be associated with fatal CHD. SNPs at 1p13 (rs599839 and rs646776) were associated with non-HDL cholesterol levels.

  15. Variance in age-specific sex composition of Pacific halibut catches, and comparison of statistical and genetic methods for reconstructing sex ratios

    NASA Astrophysics Data System (ADS)

    Loher, Timothy; Woods, Monica A.; Jimenez-Hidalgo, Isadora; Hauser, Lorenz

    2016-01-01

    Declines in size at age of Pacific halibut Hippoglossus stenolepis, in concert with sexually-dimorphic growth and a constant minimum commercial size limit, have led to the expectation that the sex composition of commercial catches should be increasingly female-biased. Sensitivity analyses suggest that variance in sex composition of landings may be the most influential source of uncertainty affecting current understanding of spawning stock biomass. However, there is no reliable way to determine sex at landing because all halibut are eviscerated at sea. In 2014, a statistical method based on survey data was developed to estimate the probability that fish of any given length at age (LAA) would be female, derived from the fundamental observation that large, young fish are likely female whereas small, old fish have a high probability of being male. Here, we examine variability in age-specific sex composition using at-sea commercial and closed-season survey catches, and compare the accuracy of the survey-based LAA technique to genetic markers for reconstructing the sex composition of catches. Sexing by LAA performed best for summer-collected samples, consistent with the hypothesis that the ability to characterize catches can be influenced by seasonal demographic shifts. Additionally, differences between survey and commercial selectivity that allow fishers to harvest larger fish within cohorts may generate important mismatch between survey and commercial datasets. Length-at-age-based estimates ranged from 4.7% underestimation of female proportion to 12.0% overestimation, with mean error of 5.8 ± 1.5%. Ratios determined by genetics were closer to true sample proportions and displayed less variability; estimation to within < 1% of true ratios was limited to genetics. Genetic estimation of female proportions ranged from 4.9% underestimation to 2.5% overestimation, with a mean absolute error of 1.2 ± 1.2%. Males were generally more difficult to assign than females: 6.7% of males and 3.4% of females were incorrectly assigned. Although nuclear microsatellites proved more consistent at partitioning catches by sex, we recommend that SNP assays be developed to allow for rapid, cost-effective, and accurate sex identification.

  16. Estimated human absorbed dose of a new (153)Sm bone seeking agent based on biodistribution data in mice: Comparison with (153)Sm-EDTMP.

    PubMed

    Yousefnia, Hassan; Zolghadri, Samaneh

    2015-11-01

    The main goal in radiotherapy is to deliver the highest possible absorbed dose to the target organs while keeping the absorbed dose to other organs, especially the critical organs, as low as possible. In this work, the absorbed dose to human organs for a new (153)Sm bone-seeking agent was investigated. The (153)Sm-(4-{[(bis(phosphonomethyl))carbamoyl]methyl}-7,10-bis(carboxymethyl)-1,4,7,10-tetraazacyclododec-1-yl) acetic acid ((153)Sm-BPAMD) complex was successfully prepared. The biodistribution of the complex was investigated in male Syrian mice up to 48 h post injection. The human absorbed dose of the complex was estimated from the mouse biodistribution data using the radiation absorbed dose assessment resource (RADAR) method. The target to non-target absorbed dose ratios for (153)Sm-BPAMD were compared with those for (153)Sm-EDTMP. The highest absorbed dose for (153)Sm-BPAMD was observed in the bone surface, at 5.828 mGy/MBq. The dose ratios of the bone surface to the red marrow and to the total body for (153)Sm-BPAMD were 5.3 and 20.0, respectively, versus 4.4 and 18.3 for (153)Sm-EDTMP. This means that, for a given dose to the bone surface as the target organ, the red marrow (the main critical organ) and the total body would receive a lower absorbed dose with (153)Sm-BPAMD. Overall, the human absorbed dose estimation indicated that all other tissues receive an insignificant absorbed dose in comparison with the bone surface, so (153)Sm-BPAMD can be regarded as a new potential agent for bone pain palliation therapy. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  17. Increased accuracy of batch fecundity estimates using oocyte stage ratios in Plectropomus leopardus.

    PubMed

    Carter, A B; Williams, A J; Russ, G R

    2009-08-01

    Using the ratio of the number of migratory nuclei to hydrated oocytes to estimate batch fecundity of common coral trout Plectropomus leopardus increases the time over which samples can be collected and, therefore, increases the sample size available and reduces biases in batch fecundity estimates.

  18. Tire-road friction coefficient estimation based on the resonance frequency of in-wheel motor drive system

    NASA Astrophysics Data System (ADS)

    Chen, Long; Bian, Mingyuan; Luo, Yugong; Qin, Zhaobo; Li, Keqiang

    2016-01-01

    In this paper, a resonance frequency-based tire-road friction coefficient (TRFC) estimation method is proposed by considering the dynamics performance of the in-wheel motor drive system under small slip ratio conditions. A frequency response function (FRF) is deduced for the drive system that is composed of a dynamic tire model and a simplified motor model. A linear relationship between the squared system resonance frequency and the TFRC is described with the FRF. Furthermore, the resonance frequency is identified by the Auto-Regressive eXogenous model using the information of the motor torque and the wheel speed, and the TRFC is estimated thereafter by a recursive least squares filter with the identified resonance frequency. Finally, the effectiveness of the proposed approach is demonstrated through simulations and experimental tests on different road surfaces.
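    A minimal sketch of the final estimation step: recursive least squares with forgetting, tracking the friction coefficient from identified resonance frequencies under the linear model f_res^2 = k1*mu + k0. The model coefficients, noise level and dry-to-icy transition are hypothetical:

      import numpy as np

      rng = np.random.default_rng(5)
      k1, k0 = 900.0, 400.0                    # assumed linear-model coefficients
      mu_true = np.concatenate([np.full(300, 0.9), np.full(300, 0.3)])   # dry -> icy transition
      f2_meas = k1 * mu_true + k0 + 20.0 * rng.standard_normal(mu_true.size)

      mu_hat, P, lam = 0.5, 10.0, 0.98         # initial estimate, covariance, forgetting factor
      history = []
      for y in f2_meas:
          phi = k1                             # regressor for the scalar parameter mu
          g = P * phi / (lam + phi * P * phi)  # RLS gain
          mu_hat += g * ((y - k0) - phi * mu_hat)
          P = (P - g * phi * P) / lam
          history.append(mu_hat)
      print("final estimate:", round(history[-1], 3), "(true 0.3)")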

  19. Proton magnetic resonance spectroscopy for assessment of human body composition.

    PubMed

    Kamba, M; Kimura, K; Koda, M; Ogawa, T

    2001-02-01

    The usefulness of magnetic resonance spectroscopy (MRS)-based techniques for assessment of human body composition has not been established. We compared a proton MRS-based technique with the total body water (TBW) method to determine the usefulness of the former technique for assessment of human body composition. Proton magnetic resonance spectra of the chest to abdomen, abdomen to pelvis, and pelvis to thigh regions were obtained from 16 volunteers by using single, free induction decay measurement with a clinical magnetic resonance system operating at 1.5 T. The MRS-derived metabolite ratio was determined as the ratio of fat methyl and methylene proton resonance to water proton resonance. The peak areas for the chest to abdomen and the pelvis to thigh regions were normalized to an external reference (approximately 2200 g benzene) and a weighted average of the MRS-derived metabolite ratios for the 2 positions was calculated. TBW for each subject was determined by the deuterium oxide dilution technique. The MRS-derived metabolite ratios were significantly correlated with the ratio of body fat to lean body mass estimated by TBW. The MRS-derived metabolite ratio for the abdomen to pelvis region correlated best with the ratio of body fat to lean body mass on simple regression analyses (r = 0.918). The MRS-derived metabolite ratio for the abdomen to pelvis region and that for the pelvis to thigh region were selected for a multivariate regression model (R = 0.947, adjusted R(2) = 0.881). This MRS-based technique is sufficiently accurate for assessment of human body composition.

  20. Efficient low-bit-rate adaptive mesh-based motion compensation technique

    NASA Astrophysics Data System (ADS)

    Mahmoud, Hanan A.; Bayoumi, Magdy A.

    2001-08-01

    This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed with a new, efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1 and 2-to-1 merging of sibling blocks that have the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved using a novel algorithm, carried out by both the encoder and the decoder, that determines the optimal triangulation of the resulting partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional methods in terms of peak signal-to-noise ratio (PSNR) and compression ratio (CR).

  1. Geochemical Evidence for Calcification from the Drake Passage Time-series

    NASA Astrophysics Data System (ADS)

    Munro, D. R.; Lovenduski, N. S.; Takahashi, T.; Stephens, B. B.; Newberger, T.; Dierssen, H. M.; Randolph, K. L.; Freeman, N. M.; Bushinsky, S. M.; Key, R. M.; Sarmiento, J. L.; Sweeney, C.

    2016-12-01

    Satellite imagery suggests high particulate inorganic carbon within a circumpolar region north of the Antarctic Polar Front (APF), but in situ evidence for calcification in this region is sparse. Given the geochemical relationship between calcification and total alkalinity (TA), seasonal changes in surface concentrations of potential alkalinity (PA), which accounts for changes in TA due to variability in salinity and nitrate, can be used as a means to evaluate satellite-based calcification algorithms. Here, we use surface carbonate system measurements collected from 2002 to 2016 for the Drake Passage Time-series (DPT) to quantify rates of calcification across the Antarctic Circumpolar Current. We also use vertical PA profiles collected during two cruises across the Drake Passage in March 2006 and September 2009 to estimate the calcium carbonate to organic carbon export ratio. We find geochemical evidence for calcification both north and south of the APF with the highest rates observed north of the APF. Calcification estimates from the DPT are compared to satellite-based estimates and estimates based on hydrographic data from other regions around the Southern Ocean.

  2. Comparison of Bowen-ratio, eddy-correlation, and weighing-lysimeter evapotranspiration for two sparse-canopy sites in eastern Washington

    USGS Publications Warehouse

    Tomlinson, S.A.

    1996-01-01

    This report compares evapotranspiration estimated with the Bowen-ratio and eddy-correlation methods with evapotranspiration measured by weighing lysimeters for two sparse-canopy sites in eastern Washington. The sites are located in a grassland area (grass lysimeter site) and a sagebrush-covered area (sage lysimeter site) on the Arid Lands Ecology Reserve in Benton County, Washington. Lysimeter data were collected at the sites from August 1990 to November 1994. Bowen-ratio data were collected for varying periods from May 1993 to November 1994. Additional Bowen-ratio data without interchanging air-temperature and vapor-pressure sensors to remove sensor bias (fixed-sensor system) were collected from October 1993 to June 1994. Eddy-correlation data were collected at the grass lysimeter site from March to April 1994, and at the sage lysimeter site from April to May 1994. The comparisons of evapotranspiration determined by the various methods differed considerably, depending on the periods of record being compared and the sites being analyzed. The year 1993 was very wet, with about 50 percent more precipitation than average; 1994 was a very dry year, with only about half the average precipitation. The study showed that on an annual basis, at least in 1994, Bowen-ratio evapotranspiration closely matched lysimeter evapotranspiration. In 1993, Bowen-ratio and lysimeter evapotranspiration comparisons were variable. Evapotranspiration estimated with the Bowen-ratio method averaged 5 percent more than evapotranspiration measured by lysimeters at the grass lysimeter site from October 1993 to November 1994, and 3 percent less than lysimeters at the sage lysimeter site from November 1993 to October 1994. From March 24 to April 5, 1994, at the grass lysimeter site, the Bowen-ratio method estimated 11 percent less, the Bowen-ratio method utilizing the fixed-sensor system about 7 percent more, and the eddy-correlation method about 28 percent less evapotranspiration than the lysimeters measured. From May 7 to June 18, 1993, however, the Bowen-ratio method estimated only 54 percent of the evapotranspiration measured by lysimeters at the grass lysimeter site. This large difference possibly may be attributed to Bowen-ratio instrument variability or error, to the density of grasses in the lysimeters being greater than in the surrounding area, or to heating effects on the lysimeters. From September 1 to October 31, 1993, the Bowen-ratio method estimated more than 450 percent more evapotranspiration than was measured by lysimeters at the sage lysimeter site. This difference may have been due to conditions in the lysimeters at the sage lysimeter site that were unrepresentative of natural conditions. The Bowen-ratio instruments measured evapotranspiration over sagebrush plants outside the lysimeters, which were blooming very heavily, possibly using supplemental ground water or spring water from nearby upslope areas. The sagebrush plants contained by the lysimeters showed very little evapotranspiration, possibly because they were root-bound and had already used all available water. Also, plants in the lysimeters would not have been able to access any supplemental water available to plants outside the confines of the lysimeters. Earlier in 1993, from June 17 to July 12, the Bowen-ratio method estimated only 1 percent less evapotranspiration than determined for the lysimeters at the sage lysimeter site.
On the basis of lysimeter measurements from August 1990 to November 1994, cumulative evapotranspiration ranged from about 97 to 103 percent of the annual precipitation each year. The evapotranspiration measurements made at the grass and sage lysimeter sites, which were based on weight changes in the lysimeters, showed that storage changes became nearly zero each year some time between August and November as average surface soil moisture decreased to about 2 percent and evapotranspiration rates decreased to less than 0.1 millimeter per day.

  3. Estimators of wheel slip for electric vehicles using torque and encoder measurements

    NASA Astrophysics Data System (ADS)

    Boisvert, M.; Micheau, P.

    2016-08-01

    For the purpose of regenerative braking control in hybrid and electrical vehicles, recent studies have suggested controlling the slip ratio of the electric-powered wheel. A slip tracking controller requires an accurate slip estimation in the overall range of the slip ratio (from 0 to 1), contrary to the conventional slip limiter (ABS) which calls for an accurate slip estimation in the critical slip area, estimated at around 0.15 in several applications. Considering that it is not possible to directly measure the slip ratio of a wheel, the problem is to estimate the latter from available online data. To estimate the slip of a wheel, both wheel speed and vehicle speed must be known. Several studies provide algorithms that allow obtaining a good estimation of vehicle speed. On the other hand, there is no proposed algorithm for the conditioning of the wheel speed measurement. Indeed, the noise included in the wheel speed measurement reduces the accuracy of the slip estimation, a disturbance increasingly significant at low speed and low torque. Herein, two different extended Kalman observers of slip ratio were developed. The first calculates the slip ratio with data provided by an observer of vehicle speed and of propeller wheel speed. The second observer uses an original nonlinear model of the slip ratio as a function of the electric motor. A sinus tracking algorithm is included in the two observers, in order to reject harmonic disturbances of wheel speed measurement. Moreover, mass and road uncertainties can be compensated with a coefficient adapted online by an RLS. The algorithms were implemented and tested with a three-wheel recreational hybrid vehicle. Experimental results show the efficiency of both methods.

  4. GUM Analysis for TIMS and SIMS Isotopic Ratios in Graphite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heasler, Patrick G.; Gerlach, David C.; Cliff, John B.

    2007-04-01

    This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and boron impurities in the graphite samples. The GUM calculation is a propagation-of-error methodology that assigns uncertainties (in the form of standard errors and confidence bounds) to the final estimates.
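    A minimal GUM-style propagation-of-uncertainty sketch for a generic ratio r = A/B, assuming independent inputs; the numerical values are illustrative, not measurement data from the report:

      import math

      A, u_A = 2.40e-3, 3.0e-5      # measured quantity and its standard uncertainty
      B, u_B = 9.80e-1, 5.0e-3

      r = A / B
      u_r = r * math.sqrt((u_A / A) ** 2 + (u_B / B) ** 2)   # first-order propagation
      k = 2.0                                                # coverage factor (~95 % level)
      print(f"r = {r:.4e}  u(r) = {u_r:.1e}  expanded U = {k * u_r:.1e}")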

  5. Simplification of a light-based model for estimating final internode length in greenhouse cucumber canopies.

    PubMed

    Kahlen, Katrin; Stützel, Hartmut

    2011-10-01

    Light quantity and quality affect internode lengths in cucumber (Cucumis sativus), whereby leaf area and the optical properties of the leaves mainly control light quality within a cucumber plant community. This modelling study aimed at providing a simple, non-destructive method to predict final internode lengths (FILs) using light quantity and leaf area data. Several simplifications of a light quantity and quality sensitive model for estimating FILs in cucumber have been tested. The direct simplifications substitute the term for the red : far-red (R : FR) ratios, by a term for (a) the leaf area index (LAI, m(2) m(-2)) or (b) partial LAI, the cumulative leaf area per m(2) ground, where leaf area per m(2) ground is accumulated from the top of each plant until a number, n, of leaves per plant is reached. The indirect simplifications estimate the input R : FR ratio based on partial leaf area and plant density. In all models, simulated FILs were in line with the measured FILs over various canopy architectures and light conditions, but the prediction quality varied. The indirect simplification based on leaf area of ten leaves revealed the best fit with measured data. Its prediction quality was even higher than of the original model. This study showed that for vertically trained cucumber plants, leaf area data can substitute local light quality data for estimating FIL data. In unstressed canopies, leaf area over the upper ten ranks seems to represent the feedback of the growing architecture on internode elongation with respect to light quality. This highlights the role of this domain of leaves as the primary source for the specific R : FR signal controlling the final length of an internode and could therefore guide future research on up-scaling local processes to the crop level.

  6. Uncertainty analysis for effluent trading planning using a Bayesian estimation-based simulation-optimization modeling approach.

    PubMed

    Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J

    2017-06-01

    In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation; stochastic characteristic of nutrient loading can be investigated which provides the inputs for the decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries and the associated system risk through incorporating the concept of possibility and necessity measures. The possibility and necessity measures are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results can not only facilitate identification of optimal effluent-trading schemes, but also gain insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that decision maker's preference towards risk would affect decision alternatives on trading scheme as well as system benefit. Compared with the conventional optimization methods, it is proved that BESMA is advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties existing in nutrient transport behaviors to improve the accuracy in water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision alternatives. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. High-Resolution Time-Frequency Spectrum-Based Lung Function Test from a Smartphone Microphone

    PubMed Central

    Thap, Tharoeun; Chung, Heewon; Jeong, Changwon; Hwang, Ki-Eun; Kim, Hak-Ryul; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

    In this paper, a smartphone-based lung function test, developed to estimate lung function parameters using a high-resolution time-frequency spectrum from a smartphone built-in microphone is presented. A method of estimation of the forced expiratory volume in 1 s divided by forced vital capacity (FEV1/FVC) based on the variable frequency complex demodulation method (VFCDM) is first proposed. We evaluated our proposed method on 26 subjects, including 13 healthy subjects and 13 chronic obstructive pulmonary disease (COPD) patients, by comparing with the parameters clinically obtained from pulmonary function tests (PFTs). For the healthy subjects, we found that an absolute error (AE) and a root mean squared error (RMSE) of the FEV1/FVC ratio were 4.49% ± 3.38% and 5.54%, respectively. For the COPD patients, we found that AE and RMSE from COPD patients were 10.30% ± 10.59% and 14.48%, respectively. For both groups, we compared the results using the continuous wavelet transform (CWT) and short-time Fourier transform (STFT), and found that VFCDM was superior to CWT and STFT. Further, to estimate other parameters, including forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), and peak expiratory flow (PEF), regression analysis was conducted to establish a linear transformation. However, the parameters FVC, FEV1, and PEF had correlation factor r values of 0.323, 0.275, and −0.257, respectively, while FEV1/FVC had an r value of 0.814. The results obtained suggest that only the FEV1/FVC ratio can be accurately estimated from a smartphone built-in microphone. The other parameters, including FVC, FEV1, and PEF, were subjective and dependent on the subject’s familiarization with the test and performance of forced exhalation toward the microphone. PMID:27548164

  8. Likelihood Ratios for Glaucoma Diagnosis Using Spectral Domain Optical Coherence Tomography

    PubMed Central

    Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M.; Weinreb, Robert N.; Medeiros, Felipe A.

    2014-01-01

    Purpose To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral domain optical coherence tomography (spectral-domain OCT). Design Observational cohort study. Methods 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the Receiver Operating Characteristic (ROC) curve. Results Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86μm were associated with positive LRs, i.e., LRs greater than 1; whereas RNFL thickness values higher than 86μm were associated with negative LRs, i.e., LRs smaller than 1. A modified Fagan nomogram was provided to assist calculation of post-test probability of disease from the calculated likelihood ratios and pretest probability of disease. Conclusion The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision-making. PMID:23972303
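    A sketch of the underlying idea: the likelihood ratio at a specific RNFL thickness is the ratio of the diseased and healthy densities at that value (the slope of the ROC tangent), combined with pre-test probability via Bayes' rule. The Gaussian fits and all parameter values below are assumptions chosen so the crossover sits near the 86 µm value quoted above:

      import numpy as np
      from scipy.stats import norm

      mu_g, sd_g = 72.0, 10.0        # glaucoma eyes: assumed mean / SD of average RNFL (um)
      mu_h, sd_h = 100.0, 10.0       # healthy eyes (assumed)

      def likelihood_ratio(thickness_um):
          return norm.pdf(thickness_um, mu_g, sd_g) / norm.pdf(thickness_um, mu_h, sd_h)

      for t in (70.0, 86.0, 100.0):
          lr = likelihood_ratio(t)
          pre = 0.20                                      # assumed pre-test probability
          post_odds = (pre / (1 - pre)) * lr              # post-test odds = pre-test odds x LR
          print(f"RNFL {t:5.1f} um  LR = {lr:7.2f}  post-test p = {post_odds / (1 + post_odds):.2f}")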

  9. Mortality Rates Among Substance Use Disorder Participants in Clinical Trials: Pooled Analysis of Twenty-Two Clinical Trials Within the National Drug Abuse Treatment Clinical Trials Network.

    PubMed

    Lindblad, Robert; Hu, Lian; Oden, Neal; Wakim, Paul; Rosa, Carmen; VanVeldhuisen, Paul

    2016-11-01

    Most substance use disorders (SUD) treatment clinical trials are too short and small to reliably estimate the incidence of rare events like death. The aim of this study is to estimate the overall mortality rates among a SUD treatment-seeking population by pooling participants from multiple clinical trials conducted through the National Institute on Drug Abuse (NIDA)-sponsored National Drug Abuse Treatment Clinical Trials Network (CTN). Drug and or alcohol users (N=9866) who sought treatment and participated in one of the twenty-two CTN trials. Data were collected through randomized clinical trials in national community treatment programs for SUD. Pooled analysis was performed to assess age- and gender-standardized mortality rate(s) (SM rate(s)), and mortality ratio(s) (SM ratio(s)) of CTN trial participants compared to the U.S. general population. The age- and gender-SM rate among CTN trials participants was 1403 (95% CI: 862-2074) per 100,000 person years (PY) compared to 542 (95% CI: 541-543) per 100,000 PY among the U.S. general population in 2005. By gender, age-adjusted SM ratio for female CTN trial participants was over five times (SM ratio=5.35, 95% CI: 3.31-8.19)), and for male CTN trial participants, it was over three times (SM ratio=3.39, 95% CI: 2.25-4.90) higher than their gender comparable peers in the U.S. general population. Age and gender-standardized mortality rates and ratios among NIDA CTN SUD treatment-seeking clinical trial participants are higher than the age and gender comparable U.S. general population. The overall mortality rates of CTN trial participants are similar to in-treatment mortality reported in large U.S. and non-U.S. cohorts of opioid users. Future analysis with additional CTN trial participants and risk times will improve the stability of estimates, especially within subgroups based on primary substance of abuse. These SUD mortality rates can be used to facilitate safety monitoring within SUD clinical trials. Copyright © 2016 Elsevier Inc. All rights reserved.
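    A minimal sketch of an indirectly standardized mortality ratio: observed deaths in the cohort divided by the deaths expected if reference (general-population) rates applied to the cohort's age/gender person-time. All numbers below are illustrative, not CTN data:

      import numpy as np

      strata          = ["F 18-34", "F 35-54", "M 18-34", "M 35-54"]
      person_years    = np.array([3200.0, 2100.0, 5400.0, 3600.0])        # cohort person-time
      observed_deaths = np.array([   12,     18,     31,     46  ])
      ref_rate        = np.array([ 60.0,  150.0,  140.0,  330.0]) / 1e5   # reference deaths per PY

      expected = person_years * ref_rate
      smr = observed_deaths.sum() / expected.sum()
      ci_lo = smr * np.exp(-1.96 / np.sqrt(observed_deaths.sum()))        # rough Poisson-based CI
      ci_hi = smr * np.exp( 1.96 / np.sqrt(observed_deaths.sum()))
      print(f"SMR = {smr:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")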

  10. [Soil moisture estimation method based on both ground-based remote sensing data and air temperature in a summer maize ecosystem].

    PubMed

    Wang, Min Zheng; Zhou, Guang Sheng

    2016-06-01

    Soil moisture is an important component of the soil-vegetation-atmosphere continuum (SPAC). It is a key factor determining the water status of terrestrial ecosystems, and is also the main source of water supply for crops. In order to estimate soil moisture at different soil depths at the station scale, a soil moisture estimation model was established, based on the energy balance equation and the water deficit index (WDI), in terms of remote sensing data (the normalized difference vegetation index and surface temperature) and air temperature. The model was validated against data from a drought process experiment on summer maize (Zea mays) under different irrigation treatments carried out during 2014 at the Gucheng eco-agrometeorological experimental station of the China Meteorological Administration. The results indicated that the soil moisture estimation model developed in this paper was able to evaluate soil relative humidity at different soil depths in the summer maize field, and that the hypothesis that the evapotranspiration deficit ratio (i.e., WDI) depends linearly on soil relative humidity was reasonable. Estimation accuracy was highest for 0-10 cm surface soil moisture (R² = 0.90). The RMAEs of the estimated versus measured soil relative humidity in deeper soil layers (up to 50 cm) were less than 15% and the RMSEs were less than 20%. The research could provide a reference for drought monitoring and irrigation management.

  11. Estimation of seismic attenuation in carbonate rocks using three different methods: Application on VSP data from Abu Dhabi oilfield

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.; Matsushima, J.

    2016-06-01

    In this study, a relationship between the seismic wavelength and the scale of heterogeneity in the propagating medium has been examined. The relationship estimates the size of heterogeneity that significantly affects wave propagation at a specific frequency, and reduces the calculation time of wave scattering estimation. The relationship was applied in analyzing synthetic and Vertical Seismic Profiling (VSP) data obtained from an onshore oilfield in the Emirate of Abu Dhabi, United Arab Emirates. Prior to estimation of the attenuation, a robust processing workflow was applied to both synthetic and recorded data to increase the signal-to-noise ratio (SNR). Two conventional methods, the spectral ratio and centroid frequency shift methods, were applied to estimate the attenuation from the extracted seismic waveforms, in addition to a new method based on seismic interferometry. The attenuation profiles derived from the three approaches demonstrated similar variation; however, the interferometry method resulted in greater depth resolution, with differences in attenuation magnitude. Furthermore, the attenuation profiles revealed a significant contribution of scattering to seismic wave attenuation. The results obtained from the seismic interferometry method indicated that estimated scattering attenuation ranges from 0 to 0.1 and estimated intrinsic attenuation can reach 0.2. The subsurface of the studied zones is known to be highly porous and permeable, which suggests that the mechanism of the intrinsic attenuation is probably the interaction between pore fluids and solids.

  12. A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks.

    PubMed

    Cui, Xuerong; Li, Juan; Wu, Chunlei; Liu, Jian-Hang

    2015-11-13

    Vehicle positioning technology has drawn more and more attention in vehicular wireless networks as a way to reduce travel time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used for land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short-range wireless communication are another effective option for vehicle positioning and vehicle ranging. IEEE 802.11p is a new real-time short-range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay, or range, between vehicles based on the IEEE 802.11p standard. It includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlation results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the range to each neighboring vehicle or road-side infrastructure, the positions of neighboring vehicles can be estimated correctly. Simulation results for the International Telecommunications Union (ITU) vehicular multipath channel show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.
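
    A rough sketch of the three steps is given below; the group length, the exact form of the skewness-based threshold, and the toy preamble are assumptions for illustration rather than the parameters of the proposed method.

```python
# A minimal sketch, not the paper's exact algorithm: (1) cross-correlate the
# received samples with the known short preamble, (2) sum the correlation
# magnitude in groups (the 16-sample group length is an assumption), and
# (3) pick the first peak using a dynamic threshold scaled by the skewness of
# the grouped output (one plausible form of such a threshold).

import numpy as np
from scipy.stats import skew

def estimate_timing(rx, preamble, group_len=16):
    corr = np.abs(np.correlate(rx, preamble, mode="valid"))
    n_groups = len(corr) // group_len
    grouped = corr[:n_groups * group_len].reshape(n_groups, group_len).sum(axis=1)
    # Larger skewness -> a more dominant correlation peak -> raise the threshold.
    thr = grouped.mean() + skew(grouped) * grouped.std()
    candidates = np.where(grouped > thr)[0]
    first_group = candidates[0] if len(candidates) else int(np.argmax(grouped))
    return first_group * group_len  # coarse sample index of the frame start

# Toy usage: a random "preamble" embedded in noise at sample 400.
rng = np.random.default_rng(0)
preamble = rng.standard_normal(64)
rx = 0.3 * rng.standard_normal(2048)
rx[400:464] += preamble
print("estimated start index:", estimate_timing(rx, preamble))
```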

  13. Emg Amplitude Estimators Based on Probability Distribution for Muscle-Computer Interface

    NASA Astrophysics Data System (ADS)

    Phinyomark, Angkoon; Quaine, Franck; Laurillau, Yann; Thongpanja, Sirinee; Limsakul, Chusak; Phukpattaranont, Pornchai

    To develop an advanced muscle-computer interface (MCI) based on the surface electromyography (EMG) signal, amplitude estimates of muscle activity, i.e., the root mean square (RMS) and the mean absolute value (MAV), are widely used as convenient and accurate inputs for a recognition system. Their classification performance is comparable to that of advanced, computationally expensive time-scale methods, i.e., the wavelet transform. However, the signal-to-noise ratio (SNR) performance of RMS and MAV depends on the probability density function (PDF) of the EMG signal, i.e., Gaussian or Laplacian. The PDF of EMG signals associated with upper-limb motions is still not clear, especially for dynamic muscle contraction. In this paper, the EMG PDF is investigated based on surface EMG recorded during finger, hand, wrist and forearm motions. The results show that, on average, the experimental EMG PDF is closer to a Laplacian density, particularly for male subjects and flexor muscles. For amplitude estimation, MAV has a higher SNR, defined as the mean feature value divided by its fluctuation, than RMS. Because RMS and MAV provide the same discrimination in feature space, MAV is recommended as a suitable EMG amplitude estimator for EMG-based MCIs.
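
    The two estimators and the SNR measure described above reduce to simple windowed statistics; the sketch below (window and step sizes are arbitrary choices, not the paper's settings) computes RMS, MAV and their feature SNR on a toy Laplacian signal.

```python
# A minimal sketch (not the authors' code): features over sliding windows, with
# SNR taken as the mean of the feature series divided by its standard deviation.

import numpy as np

def windowed_features(emg, win=256, step=64):
    rms, mav = [], []
    for start in range(0, len(emg) - win + 1, step):
        seg = emg[start:start + win]
        rms.append(np.sqrt(np.mean(seg ** 2)))  # root mean square
        mav.append(np.mean(np.abs(seg)))        # mean absolute value
    return np.array(rms), np.array(mav)

def feature_snr(feature_series):
    return feature_series.mean() / feature_series.std()

# Toy EMG-like signal: Laplacian samples (the density the paper found closer to
# real EMG) with a slowly varying amplitude envelope.
rng = np.random.default_rng(1)
emg = rng.laplace(scale=0.5, size=20000) * (1.0 + 0.2 * np.sin(np.linspace(0, 10, 20000)))
rms, mav = windowed_features(emg)
print(f"SNR(RMS) = {feature_snr(rms):.2f}, SNR(MAV) = {feature_snr(mav):.2f}")
```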

  14. Expression of Proteins Involved in Epithelial-Mesenchymal Transition as Predictors of Metastasis and Survival in Breast Cancer Patients

    DTIC Science & Technology

    2013-11-01

    (Table excerpts) Unconditional logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals (CI) for risk of node..., for risk of high-grade tumors, and for the associations between each of the seven SNPs and ... (Ptrend values: 0.78, 0.62, 0.75; 0.71, 0.67).

  15. Comparison of estimation techniques for a forest inventory in which double sampling for stratification is used

    Treesearch

    Michael S. Williams

    2001-01-01

    A number of different estimators can be used when forest inventory plots cover two or more distinctly different condition classes. In this article, the properties of two approximate Horvitz-Thompson (HT) estimators, a ratio of means (RM) estimator, and a mean of ratios (MR) estimator are explored in the framework of double sampling for stratification. Relevant theoretical...
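
    For readers unfamiliar with the two ratio-type estimators named here, the sketch below contrasts them in their simplest form, outside the double-sampling-for-stratification framework of the article; the plot values and population total are invented.

```python
# A minimal sketch: given plot-level pairs of an auxiliary variable x (e.g., a
# photo-based value) and the variable of interest y (field-measured value), each
# ratio is expanded by a known population total X to estimate the total of y.

def ratio_of_means(x, y):
    return sum(y) / sum(x)

def mean_of_ratios(x, y):
    return sum(yi / xi for xi, yi in zip(x, y)) / len(x)

x = [12.0, 30.0, 7.5, 22.0]   # auxiliary values on sample plots (illustrative)
y = [10.5, 33.0, 6.0, 20.0]   # values of interest on the same plots
X_total = 5000.0              # known population total of x (illustrative)

print(f"RM estimate of Y: {ratio_of_means(x, y) * X_total:.0f}")
print(f"MR estimate of Y: {mean_of_ratios(x, y) * X_total:.0f}")
```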

  16. Ratios of the molecular species of triacylglycerols in lesquerella (Physaria fendleri) oil estimated by mass spectrometry

    USDA-ARS?s Scientific Manuscript database

    The ratios of regioisomers of 72 molecular species of triacylglycerols (TAG) in lesquerella oil were estimated using the electrospray ionization mass spectrometry of the lithium adducts of TAG in the HPLC fractions of lesquerella oil. The ratios of ion signal intensities (or relative abundances) of ...

  17. Wildfire smoke in the Siberian Arctic in summer: source characterization and plume evolution from airborne measurements

    NASA Astrophysics Data System (ADS)

    Paris, J.-D.; Stohl, A.; Nédélec, P.; Arshinov, M. Yu.; Panchenko, M. V.; Shmargunov, V. P.; Law, K. S.; Belan, B. D.; Ciais, P.

    2009-12-01

    We present airborne measurements of carbon dioxide (CO2), carbon monoxide (CO), ozone (O3), equivalent black carbon (EBC) and ultrafine particles over North-Eastern Siberia in July 2008, performed during the YAK-AEROSIB/POLARCAT experiment. During a "golden day" (11 July 2008) a number of biomass burning plumes were encountered, with CO mixing ratio enhancements of up to 500 ppb relative to a background of 90 ppb. Number concentrations of aerosols in the size range 3.5-200 nm peaked at 4000 cm-3 and the EBC content reached 1.4 μg m-3. These high concentrations were caused by forest fires in the vicinity of the landing airport in Yakutsk, where measurements in fresh smoke could be made during the descent. We estimate a combustion efficiency of 90 ± 3% based on CO and CO2 measurements and a CO emission factor of 65.5 ± 10.8 g CO per kilogram of dry matter burned. This suggests a potential increase in the average northern hemispheric CO mixing ratio of 3.0-7.2 ppb per million hectares of Siberian forest burned. For BC, we estimate an emission factor of 0.52 ± 0.07 g BC kg-1, comparable to values reported in the literature. The emission ratio of ultrafine particles (3.5-200 nm) was 26 cm-3 (ppb CO)-1, consistent with other airborne studies. The transport of identified biomass burning plumes was investigated using the FLEXPART Lagrangian model. Based on sampling of wildfire plumes from the same source but with different atmospheric ages derived from FLEXPART, we estimate that the e-folding lifetimes of EBC and ultrafine particles (between 3.5 and 200 nm in size) against removal and growth processes are 5.1 and 5.5 days, respectively, supporting lifetime estimates used in various modelling studies.
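
    Combustion efficiencies and emission factors of this kind are typically derived from a carbon mass balance on excess mixing ratios in the plume; the sketch below shows that textbook calculation with only CO2 and CO counted as emitted carbon and an assumed fuel carbon fraction, so it is not the study's exact formulation and its inputs are illustrative.

```python
# A minimal carbon-mass-balance sketch. The fuel carbon fraction (f_c = 0.5) and
# the excess mixing ratios below are illustrative assumptions, not the study's
# values; carbon emitted as species other than CO2 and CO is ignored here.

MW_C, MW_CO = 12.011, 28.010  # g/mol

def combustion_efficiency(d_co2_ppb, d_co_ppb):
    """Fraction of emitted carbon released as CO2 (CO2 and CO only)."""
    return d_co2_ppb / (d_co2_ppb + d_co_ppb)

def ef_co_g_per_kg(d_co2_ppb, d_co_ppb, f_c=0.5):
    """Grams of CO emitted per kilogram of dry matter burned."""
    carbon_to_co = d_co_ppb / (d_co2_ppb + d_co_ppb)
    return f_c * 1000.0 * (MW_CO / MW_C) * carbon_to_co

d_co2, d_co = 4500.0, 500.0   # excess mixing ratios above background (ppb), illustrative
print(f"combustion efficiency = {100 * combustion_efficiency(d_co2, d_co):.0f}%")
print(f"EF_CO ~ {ef_co_g_per_kg(d_co2, d_co):.0f} g per kg dry matter")
```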

  18. Urinary 24-h creatinine excretion in adults and its use as a simple tool for the estimation of daily urinary analyte excretion from analyte/creatinine ratios in populations.

    PubMed

    Johner, S A; Boeing, H; Thamm, M; Remer, T

    2015-12-01

    The assessment of urinary excretion of specific nutrients (e.g. iodine, sodium) is frequently used to monitor a population's nutrient status. However, when only spot urines are available, there is always a risk of hydration-status-dependent dilution effects and related misinterpretations. The aim of the present study was to establish mean values of 24-h creatinine excretion that are widely applicable for an appropriate estimation of 24-h excretion rates of analytes from spot urines in adults. Twenty-four-hour creatinine excretion from the formerly representative cross-sectional German VERA Study (n=1463, 20-79 years old) was analysed. Linear regression analysis was performed to identify the most important factors influencing creatinine excretion. In a subsample of the German DONALD Study (n=176, 20-29 years old), the applicability of the 24-h creatinine excretion values of VERA for the estimation of 24-h sodium and iodine excretion from urinary concentration measurements was tested. In the VERA Study, mean 24-h creatinine excretion was 15.4 mmol per day in men and 11.1 mmol per day in women, and was significantly dependent on sex, age, body weight and body mass index. Based on the established 24-h creatinine excretion values, mean 24-h iodine and sodium excretions could be estimated from the respective analyte/creatinine concentrations, with average deviations <10% compared with the actual 24-h means. The present mean values of 24-h creatinine excretion are suggested as a useful tool to derive realistic, hydration-status-independent average 24-h excretion rates from urinary analyte/creatinine ratios. We propose applying these creatinine reference means routinely in biomarker-based studies aiming to characterize the nutrient or metabolite status of adult populations by simply measuring metabolite/creatinine ratios in spot urines.
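
    The estimation principle is simple arithmetic: a spot-urine analyte/creatinine ratio scaled by a reference 24-h creatinine excretion. The sketch below uses only the sex-specific means reported above (the study additionally accounts for age and body measures), and the spot-urine concentrations are illustrative.

```python
# A minimal sketch of the scaling step: estimated 24-h excretion =
# (analyte / creatinine in a spot urine) * reference 24-h creatinine excretion.
# Only the sex-specific means from the abstract are used; real application would
# pick reference values also by age and body measures.

REF_CREATININE_MMOL_PER_DAY = {"male": 15.4, "female": 11.1}

def estimate_24h_excretion(analyte_conc, creatinine_conc_mmol_l, sex):
    """Estimated 24-h excretion, in the analyte's concentration units per day.
    Both concentrations must come from the same spot urine."""
    ratio = analyte_conc / creatinine_conc_mmol_l  # analyte per mmol creatinine
    return ratio * REF_CREATININE_MMOL_PER_DAY[sex]

# Illustrative spot urine: sodium 120 mmol/L, creatinine 9.5 mmol/L, male subject.
print(f"estimated sodium excretion ~ {estimate_24h_excretion(120.0, 9.5, 'male'):.0f} mmol/day")
```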

  19. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
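
    As a generic illustration of the kind of asymptotic calculation involved (not the article's DIF-specific formulas), the sketch below computes Wald-test power and the required sample size for a single logistic regression coefficient; the effect size and per-observation information are hypothetical.

```python
# A minimal sketch of asymptotic Wald-test power for one coefficient beta:
# power ~= Phi(sqrt(n * i_beta) * |beta| - z_{1-alpha/2}), where i_beta is the
# per-observation Fisher information for that coefficient (assumed known here).

from scipy.stats import norm

def wald_power(beta, info_per_obs, n, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf((n * info_per_obs) ** 0.5 * abs(beta) - z_crit)

def wald_sample_size(beta, info_per_obs, power=0.8, alpha=0.05):
    z_crit, z_pow = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ((z_crit + z_pow) / abs(beta)) ** 2 / info_per_obs

beta, info = 0.4, 0.15  # hypothetical DIF effect (log-odds) and information per examinee
print(f"power at n=500: {wald_power(beta, info, 500):.2f}")
print(f"n for 80% power: {wald_sample_size(beta, info):.0f}")
```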

  20. Do factors related to combustion-based sources explain heterogeneity in PM-mortality associations across the United States?

    EPA Science Inventory

    Introduction: Spatial heterogeneity of effect estimates in associations between PM2.5 and total non-accidental mortality (TNA) in the United States (US) is an issue in epidemiology. This study uses rate ratios generated from the Multi-City/Multi-Pollutant study (1999-2005) for 3...
