Sample records for derived sampling rates

  1. Effect of sampling rate and record length on the determination of stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Brenner, M. J.; Iliff, K. W.; Whitman, R. K.

    1978-01-01

Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even with considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a way of lessening the total computation time required without greatly degrading the quality of the estimates.

  2. A robust variable sampling time BLDC motor control design based upon μ-synthesis.

    PubMed

    Hung, Chung-Wen; Yen, Jia-Yush

    2013-01-01

The variable sampling rate system is encountered in many applications. When speed information is derived from position marks along the trajectory, the result is a speed-dependent sampling rate system. Conventional fixed or multi-sampling-rate system theory may not apply in these cases because the system dynamics include uncertainties resulting from the variable sampling rate. This paper derives a convenient expression for the speed-dependent sampling rate system. The varying sampling rate effect is then translated into multiplicative uncertainties on the system. The design then uses the popular μ-synthesis process to achieve a robust-performance controller design. An implementation on a BLDC motor demonstrates the effectiveness of the design approach.

  3. A Robust Variable Sampling Time BLDC Motor Control Design Based upon μ-Synthesis

    PubMed Central

Hung, Chung-Wen; Yen, Jia-Yush

    2013-01-01

The variable sampling rate system is encountered in many applications. When speed information is derived from position marks along the trajectory, the result is a speed-dependent sampling rate system. Conventional fixed or multi-sampling-rate system theory may not apply in these cases because the system dynamics include uncertainties resulting from the variable sampling rate. This paper derives a convenient expression for the speed-dependent sampling rate system. The varying sampling rate effect is then translated into multiplicative uncertainties on the system. The design then uses the popular μ-synthesis process to achieve a robust-performance controller design. An implementation on a BLDC motor demonstrates the effectiveness of the design approach. PMID:24327804

  4. Intra prediction using face continuity in 360-degree video coding

    NASA Astrophysics Data System (ADS)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.

  5. Reconfigurable radio receiver with fractional sample rate converter and multi-rate ADC based on LO-derived sampling clock

    NASA Astrophysics Data System (ADS)

    Park, Sungkyung; Park, Chester Sungchung

    2018-03-01

    A composite radio receiver back-end and digital front-end, made up of a delta-sigma analogue-to-digital converter (ADC) with a high-speed low-noise sampling clock generator, and a fractional sample rate converter (FSRC), is proposed and designed for a multi-mode reconfigurable radio. The proposed radio receiver architecture contributes to saving the chip area and thus lowering the design cost. To enable inter-radio access technology handover and ultimately software-defined radio reception, a reconfigurable radio receiver consisting of a multi-rate ADC with its sampling clock derived from a local oscillator, followed by a rate-adjustable FSRC for decimation, is designed. Clock phase noise and timing jitter are examined to support the effectiveness of the proposed radio receiver. A FSRC is modelled and simulated with a cubic polynomial interpolator based on Lagrange method, and its spectral-domain view is examined in order to verify its effect on aliasing, nonlinearity and signal-to-noise ratio, giving insight into the design of the decimation chain. The sampling clock path and the radio receiver back-end data path are designed in a 90-nm CMOS process technology with 1.2V supply.
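The cubic-polynomial Lagrange interpolator mentioned above is the core of the FSRC. A sketch of a generic 4-point Lagrange fractional-delay interpolator and a decimation loop built on it (function names and the conversion ratio are illustrative choices, not taken from the paper):

```python
def cubic_lagrange(s, t):
    """Evaluate the signal s at fractional index t (1 <= t < len(s) - 2)
    using 4-point Lagrange interpolation over the neighbouring samples."""
    n = int(t)
    mu = t - n  # fractional offset between s[n] and s[n + 1]
    p = s[n - 1:n + 3]
    # Lagrange basis polynomials for nodes -1, 0, 1, 2 evaluated at mu
    w = (-mu * (mu - 1) * (mu - 2) / 6,
         (mu + 1) * (mu - 1) * (mu - 2) / 2,
         -(mu + 1) * mu * (mu - 2) / 2,
         (mu + 1) * mu * (mu - 1) / 6)
    return sum(wi * pi for wi, pi in zip(w, p))

def resample(s, ratio):
    """Fractional sample rate conversion: read the input at steps of `ratio`
    (ratio > 1 decimates, as in the receiver's decimation chain)."""
    out, t = [], 1.0
    while t < len(s) - 2:
        out.append(cubic_lagrange(s, t))
        t += ratio
    return out
```

Because a 4-point Lagrange interpolator reproduces polynomials up to degree 3 exactly, smooth (well-oversampled) inputs are converted with little error; the residual spectral images are what the paper's aliasing and SNR analysis quantifies.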

  6. AFRRI (Armed Forces Radiobiology Research Institute) Annual Research Report 1 October 1980-30 September 1981.

    DTIC Science & Technology

    1981-09-30

weight of either petroleum-derived jet propulsion fuel number 5 (JP5) or one of three samples of shale-derived JP5 (1). The surviving rats were...sacrificed at 14 days after dosing. In another study, rats were gavaged with one of the four fuel samples at the rate of 24 ml/kg body weight and sacrificed...at 1, 2, or 3 days postdosing. A significant difference was seen in the lethality of the three shale-derived samples, even though all originated from

  7. The use of interest rate swaps by nonprofit organizations: evidence from nonprofit health care providers.

    PubMed

    Stewart, Louis J; Trussel, John

    2006-01-01

Although the use of derivatives, particularly interest rate swaps, has grown explosively over the past decade, derivative financial instrument use by nonprofits has received only limited attention in the research literature. Because little is known about the risk management activities of nonprofits, the impact of these instruments on the ability of nonprofits to raise capital may have significant public policy implications. The primary motivation of this study is to determine the types of derivatives used by nonprofits and estimate the frequency of their use among these organizations. Our study also extends contemporary finance theory with an empirical examination of the motivation for interest rate swap usage among nonprofits. Our empirical data came from 193 large nonprofit health care providers that issued debt to the public between 2000 and 2003. We used a univariate analysis and a multivariate analysis relying on logistic regression models to test alternative explanations of interest rate swap usage by nonprofits, finding that more than 45 percent of our sample (88 organizations) used interest rate swaps with an aggregate notional value in excess of $8.3 billion. Our empirical tests indicate the primary motive for nonprofits to use interest rate derivatives is to hedge their exposure to interest rate risk. Although these derivatives are a useful risk management tool, under conditions of falling bond market interest rates they may also expose a nonprofit swap user to the risk of a material unscheduled termination payment. Finally, we found considerable diversity in the informativeness of footnote disclosure among sample organizations that used interest rate swaps. Many nonprofits did not disclose these risks in their financial statements.
In conclusion, we find financial managers in large nonprofits commonly use derivative financial instruments as risk management tools, but the use of interest rate swaps by nonprofits may expose them to other risks that are not adequately disclosed in their financial statements.

  8. Estimating the dim light melatonin onset of adolescents within a 6-h sampling window: the impact of sampling rate and threshold method

    PubMed Central

    Crowley, Stephanie J.; Suh, Christina; Molina, Thomas A.; Fogg, Louis F.; Sharkey, Katherine M.; Carskadon, Mary A.

    2016-01-01

    Objective/Background Circadian rhythm sleep-wake disorders often manifest during the adolescent years. Measurement of circadian phase such as the Dim Light Melatonin Onset (DLMO) improves diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and total sampling duration. Patients/Methods A total of 66 healthy adolescents (26 males) aged 14.8 to 17.8 years participated in a study in which sleep was fixed for one week before they came to the laboratory for saliva collection in dim light (<20 lux). Two partial 6-h salivary melatonin profiles were derived for each participant. Both profiles began 5 h before bedtime and ended 1 h after bedtime, but one profile was derived from samples taken every 30 mins (13 samples) and the other from samples taken every 60 mins (7 samples). Three standard thresholds (first 3 melatonin values mean + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from 30-min and 60-min sampling rates was determined using a Bland-Altman analysis; agreement between sampling rate DLMOs was defined as ± 1 h. Results and Conclusions Within a 6-h sampling window, 60-min sampling provided DLMO estimates that were within ± 1 h of DLMO from 30-min sampling, but only when an absolute threshold (3 pg/mL or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with circadian rhythm sleep-wake disorders. PMID:27318227
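The threshold arithmetic described in the record above is simple to make concrete. A minimal sketch, with hypothetical melatonin values and linear interpolation between samples to locate the crossing time (the interpolation step is an illustrative choice, not taken from the paper):

```python
from statistics import mean, stdev

def dlmo(times_h, melatonin, threshold):
    """Clock time (hours) at which melatonin first rises through `threshold`,
    linearly interpolated between the bracketing samples; None if no crossing."""
    for i in range(1, len(melatonin)):
        lo, hi = melatonin[i - 1], melatonin[i]
        if lo < threshold <= hi:
            frac = (threshold - lo) / (hi - lo)
            return times_h[i - 1] + frac * (times_h[i] - times_h[i - 1])
    return None

def threshold_3k(melatonin):
    """Variable threshold: mean + 2 SDs of the first three (low daytime) samples."""
    return mean(melatonin[:3]) + 2 * stdev(melatonin[:3])
```

An absolute threshold (3 or 4 pg/mL) simply replaces `threshold_3k(...)` with a constant, which is why agreement between the 30-min and 60-min profiles depends only on how the rising limb is sampled, not on the daytime baseline.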

  9. Derivative financial instruments and nonprofit health care providers.

    PubMed

    Stewart, Louis J; Owhoso, Vincent

    2004-01-01

This article examines the extent of derivative financial instrument use among US nonprofit health systems and the impact of these financial instruments on their cash flows, reported operating results, and financial risks. Our examination is conducted through a case study of New Jersey hospitals and health systems. We review the existing literature on interest rate derivative instruments and US hospitals and health systems. This literature describes the design of these derivative financial instruments and the theoretical benefits of their use by large health care provider organizations. Our contribution to the literature is to provide an empirical evaluation of derivative financial instrument usage among a geographically limited sample of US nonprofit health systems. We reviewed the audited financial statements of the 49 community hospitals and multi-hospital health systems operating in the state of New Jersey. We found that 8 percent of New Jersey's nonprofit health providers utilized interest rate derivatives with an aggregate principal value of $229 million. These derivative users combine interest rate swaps and caps to lower the effective interest costs of their long-term debt while limiting their exposure to future interest rate increases. In addition, while derivative assets and liabilities have an immaterial balance sheet impact, derivative-related gains and losses are a material component of their reported operating results. We also found that derivative usage among these four health systems was responsible for generating positive cash flows in the range of 1 percent to 2 percent of their total 2001 cash flows from operations. As a result of our admittedly limited sample, we conclude that interest rate swaps and caps are effective risk management tools.
However, we also found that while these derivative financial instruments are useful hedges against the risks of issuing long-term financing instruments, they also expose derivative users to credit, contract termination and interest rate volatility risks. In conclusion, we find that these financial instruments can also generate negative as well as positive cash flows and have both a positive and negative impact on reported operating results.

  10. Comparison of GPS and Quaternary slip rates: Insights from a new Quaternary fault database for Central Asia

    NASA Astrophysics Data System (ADS)

    Mohadjer, Solmaz; Ehlers, Todd; Bendick, Rebecca; Mutz, Sebastian

    2016-04-01

Previous studies related to the kinematics of deformation within the India-Asia collision zone have relied on slip rate data for major active faults to test kinematic models that explain the deformation of the region. The slip rate data, however, are generally disputed for many of the first-order faults in the region (e.g., Altyn Tagh and Karakorum faults). Several studies have also challenged the common assumption that geodetic slip rates are representative of Quaternary slip rates. What has received little attention is the degree to which geodetic slip rates relate to Quaternary slip rates for active faults in the India-Asia collision zone. In this study, we utilize slip rate data from a new Quaternary fault database for Central Asia to determine the overall relationship between Quaternary and GPS-derived slip rates for 18 faults. The preliminary analysis investigating this relationship uses weighted least squares and a re-sampling analysis to test the sensitivity of this relationship to different data point attributes (e.g., faults associated with data points and dating methods used for estimating Quaternary slip rates). The resulting sample subsets of data points yield a maximum possible Pearson correlation coefficient of ~0.6, suggesting moderate correlation between Quaternary and GPS-derived slip rates for some faults (e.g., Kunlun and Longmen Shan faults). Faults with poorly correlated Quaternary and GPS-derived slip rates were identified and the dating methods used for their Quaternary slip rates were examined. Results indicate that a poor correlation between Quaternary and GPS-derived slip rates exists for the Karakorum and Chaman faults.
Large differences between Quaternary and GPS slip rates for these faults appear to be connected to qualitative dating of landforms used in the estimation of the Quaternary slip rates and errors in the geomorphic and structural reconstruction of offset landforms (e.g., offset terrace riser reconstructions for Altyn Tagh fault). Other factors such as a low density in the GPS network (e.g., GPS rate based on data from a single station for the Karakorum fault) appear to also contribute to the mismatch observed between the slip rates. Taken together, these results suggest that GPS-derived slip rates are often (but not always) representative of Quaternary slip rates and that the dating methods and sampling approaches used to identify transients in a fault slip rate history should be heavily scrutinized before interpreting the seismic hazards for a region.
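The re-sampling analysis described in the record above centers on recomputing a Pearson correlation over many resampled subsets of the slip-rate data points. A stdlib-only sketch of that bootstrap idea (the data values and the resampling scheme here are illustrative, not the study's):

```python
import random
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def bootstrap_correlations(xs, ys, n_resamples=1000, seed=1):
    """Recompute r over subsets drawn with replacement from the data points;
    degenerate zero-variance draws are skipped."""
    rng = random.Random(seed)
    m, out = len(xs), []
    for _ in range(n_resamples):
        idx = [rng.randrange(m) for _ in range(m)]
        bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
        if len(set(bx)) > 1 and len(set(by)) > 1:
            out.append(pearson(bx, by))
    return out
```

The spread of the bootstrap distribution shows how sensitive the reported correlation (~0.6 at best) is to which faults and dating methods happen to be included.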

  11. Estimation in a discrete tail rate family of recapture sampling models

    NASA Technical Reports Server (NTRS)

    Gupta, Rajan; Lee, Larry D.

    1990-01-01

    In the context of recapture sampling design for debugging experiments the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.

  12. Calibration of polyurethane foam (PUF) disk passive air samplers for quantitative measurement of polychlorinated biphenyls (PCBs) and polybrominated diphenyl ethers (PBDEs): factors influencing sampling rates.

    PubMed

    Hazrati, Sadegh; Harrad, Stuart

    2007-03-01

PUF disk passive air samplers are increasingly employed for monitoring of POPs in ambient air. In order to utilize them as quantitative sampling devices, a calibration experiment was conducted. Time-integrated indoor air concentrations of PCBs and PBDEs were obtained from a low volume air sampler operated over a 50 d period alongside the PUF disk samplers in the same office microenvironment. Passive sampling rates for the fully-sheltered sampler design employed in our research were determined for the 51 PCB and 7 PBDE congeners detected in all calibration samples. These values varied from 0.57 to 1.55 m3 d(-1) for individual PCBs and from 1.1 to 1.9 m3 d(-1) for PBDEs. These values are appreciably lower than those reported elsewhere for different PUF disk sampler designs (e.g. partially sheltered) employed under different conditions (e.g. in outdoor air), and derived using different calibration experiment configurations. This suggests that sampling rates derived for a specific sampler configuration deployed under specific environmental conditions should not be extrapolated to different sampler configurations. Furthermore, our observation of variable congener-specific sampling rates (consistent with other studies) implies that more research is required to fully understand the factors that influence sampling rates. Analysis of wipe samples taken from the inside of the sampler housing revealed evidence that the housing surface scavenges particle-bound PBDEs.
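The calibration arithmetic implied in the record above reduces to dividing the mass sequestered by the PUF disk by the product of the reference (active-sampler) air concentration and the deployment time. A sketch under that assumption (function names, units, and example numbers are illustrative):

```python
def sampling_rate_m3_per_day(mass_ng, air_conc_ng_m3, days):
    """Passive sampling rate R (m3/day): mass collected on the PUF disk
    divided by the time-integrated air concentration from the co-located
    active sampler multiplied by the deployment duration."""
    return mass_ng / (air_conc_ng_m3 * days)

def equivalent_air_volume(rate_m3_day, days):
    """Equivalent volume of air sampled over a deployment, used to convert
    a field PUF disk mass back into an air concentration."""
    return rate_m3_day * days
```

A congener sampled at 1.0 m3/day over 50 days behaves like a 50 m3 active sample; the record's point is that R itself depends on sampler sheltering and deployment conditions, so it must be calibrated per configuration.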

  13. Effects of Sampling and Spatio/Temporal Granularity in Traffic Monitoring on Anomaly Detectability

    NASA Astrophysics Data System (ADS)

    Ishibashi, Keisuke; Kawahara, Ryoichi; Mori, Tatsuya; Kondoh, Tsuyoshi; Asano, Shoichiro

We quantitatively evaluate how sampling and spatio/temporal granularity in traffic monitoring affect the detectability of anomalous traffic. Those parameters also affect the monitoring burden, so network operators face a trade-off between the monitoring burden and detectability and need to know the optimal parameter values. We derive equations to calculate the false positive ratio and false negative ratio for given values of the sampling rate, granularity, statistics of normal traffic, and volume of anomalies to be detected. Specifically, assuming that the normal traffic has a Gaussian distribution, which is parameterized by its mean and standard deviation, we analyze how sampling and monitoring granularity change these distribution parameters. This analysis is based on observation of backbone traffic, which exhibits spatially uncorrelated and temporally long-range dependence. Then we derive the equations for detectability. With those equations, we can answer practical questions that arise in actual network operations: what sampling rate to set to detect a given volume of anomaly, or, if that sampling rate is too high for actual operation, what granularity is optimal for detecting the anomaly given a lower limit on the sampling rate.
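Under the Gaussian assumption stated in the record above, the false positive and false negative ratios reduce to Gaussian tail probabilities around a detection threshold. A stdlib-only sketch (the threshold rule, the thinning approximation, and the numbers are illustrative; the paper's own equations model how sampling and granularity reshape the mean and standard deviation):

```python
import math

def gauss_cdf(x, mu, sigma):
    """P(X <= x) for X ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def detection_errors(mu, sigma, anomaly_volume, threshold):
    """False positive ratio: normal traffic alone exceeds the threshold.
    False negative ratio: normal traffic plus the anomaly stays at or below it."""
    fpr = 1.0 - gauss_cdf(threshold, mu, sigma)
    fnr = gauss_cdf(threshold, mu + anomaly_volume, sigma)
    return fpr, fnr

def sampled_moments(mu, sigma, p):
    """Illustrative effect of packet sampling at rate p on traffic-count
    moments (binomial thinning approximation: the mean scales with p and
    the variance picks up a p*(1-p)*mu term)."""
    return p * mu, math.sqrt(p * p * sigma * sigma + p * (1.0 - p) * mu)
```

Sweeping `p` through candidate sampling rates and recomputing `detection_errors` with the thinned moments is exactly the kind of trade-off calculation the abstract describes.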

  14. Free radical kinetics on irradiated fennel

    NASA Astrophysics Data System (ADS)

    Yamaoki, Rumi; Kimura, Shojiro; Ohta, Masatoshi

    2008-09-01

Herein, an electron spin resonance study on the behavior of organic radicals in fennel before and after irradiation is reported. The spectrum of irradiated fennel was composed of the component also present in the un-irradiated sample (near g=2.005) and spectral components derived from carbohydrates. The time decay of the intensities of these spectral components was well explained by first-order kinetics with a variety of rate constants. In particular, the signal near g=2.02, ascribed to stable cellulose-derived radical components, is expected to be a good indicator for the identification of irradiated plant samples.

  15. Global Kinetic Constants for Thermal Oxidative Degradation of a Cellulosic Paper

    NASA Technical Reports Server (NTRS)

    Kashiwagi, Takashi; Nambu, Hidesaburo

    1992-01-01

    Values of global kinetic constants for pyrolysis, thermal oxidative degradation, and char oxidation of a cellulosic paper were determined by a derivative thermal gravimetric study. The study was conducted at heating rates of 0.5, 1, 1.5, 3, and 5 C/min in ambient atmospheres of nitrogen, 0.28, 1.08, 5.2 percent oxygen concentrations, and air. Sample weight loss rate, concentrations of CO, CO2, and H2O in the degradation products, and oxygen consumption were continuously measured during the experiment. Values of activation energy, preexponential factor, orders of reaction, and yields of CO, CO2, H2O, total hydrocarbons, and char for each degradation reaction were derived from the results. Heat of reaction for each reaction was determined by differential scanning calorimetry. A comparison of the calculated CO, CO2, H2O, total hydrocarbons, sample weight loss rate, and oxygen consumption was made with the measured results using the derived kinetic constants, and the accuracy of the values of kinetic constants was discussed.

  16. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    PubMed

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

    The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.

  17. Estimating the dim light melatonin onset of adolescents within a 6-h sampling window: the impact of sampling rate and threshold method.

    PubMed

    Crowley, Stephanie J; Suh, Christina; Molina, Thomas A; Fogg, Louis F; Sharkey, Katherine M; Carskadon, Mary A

    2016-04-01

Circadian rhythm sleep-wake disorders (CRSWDs) often manifest during the adolescent years. Measurement of circadian phase such as the dim light melatonin onset (DLMO) improves diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and total sampling duration. A total of 66 healthy adolescents (26 males) aged 14.8-17.8 years participated in a study; they slept on a fixed baseline schedule for one week before they visited the laboratory for saliva collection in dim light (<20 lux). Two partial 6-h salivary melatonin profiles were derived for each participant. Both profiles began 5 h before bedtime and ended 1 h after bedtime, but one profile was derived from samples taken every 30 min (13 samples) and the other from samples taken every 60 min (seven samples). Three standard thresholds (first three melatonin values mean + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from 30-min and 60-min sampling rates was determined using Bland-Altman analysis; agreement between the sampling rate DLMOs was defined as ± 1 h. Within a 6-h sampling window, 60-min sampling provided DLMO estimates within ± 1 h of DLMO from 30-min sampling, but only when an absolute threshold (3 or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with CRSWDs. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. A decision tree to assess short-term mortality after an emergency department visit for an exacerbation of COPD: a cohort study.

    PubMed

    Esteban, Cristóbal; Arostegui, Inmaculada; Garcia-Gutierrez, Susana; Gonzalez, Nerea; Lafuente, Iratxe; Bare, Marisa; Fernandez de Larrea, Nerea; Rivas, Francisco; Quintana, José M

    2015-12-22

    Creating an easy-to-use instrument to identify predictors of short-term (30/60-day) mortality after an exacerbation of chronic obstructive pulmonary disease (eCOPD) could help clinicians choose specific measures of medical care to decrease mortality in these patients. The objective of this study was to develop and validate a classification and regression tree (CART) to predict short term mortality among patients evaluated in an emergency department (ED) for an eCOPD. We conducted a prospective cohort study including participants from 16 hospitals in Spain. COPD patients with an exacerbation attending the emergency department (ED) of any of the hospitals between June 2008 and September 2010 were recruited. Patients were randomly divided into derivation (50%) and validation samples (50%). A CART based on a recursive partitioning algorithm was created in the derivation sample and applied to the validation sample. Two thousand four hundred eighty-seven patients, 1252 patients in the derivation sample and 1235 in the validation sample, were enrolled in the study. Based on the results of the univariate analysis, five variables (baseline dyspnea, cardiac disease, the presence of paradoxical breathing or use of accessory inspiratory muscles, age, and Glasgow Coma Scale score) were used to build the CART. Mortality rates 30 days after discharge ranged from 0% to 55% in the five CART classes. The lowest mortality rate was for the branch composed of low baseline dyspnea and lack of cardiac disease. The highest mortality rate was in the branch with the highest baseline dyspnea level, use of accessory inspiratory muscles or paradoxical breathing upon ED arrival, and Glasgow score <15. The area under the receiver-operating curve (AUC) in the derivation sample was 0.835 (95% CI: 0.783, 0.888) and 0.794 (95% CI: 0.723, 0.865) in the validation sample. 
The CART was improved to predict 60-day mortality risk by adding the Charlson Comorbidity Index, reaching an AUC of 0.817 (95% CI: 0.776, 0.859) in the derivation sample and 0.770 (95% CI: 0.716, 0.823) in the validation sample. We identified several easy-to-determine variables that allow clinicians to classify eCOPD patients by short-term mortality risk, which can provide useful information for establishing appropriate clinical care. NCT02434536.

  19. Syndromes of Self-Reported Psychopathology for Ages 18-59 in 29 Societies.

    PubMed

    Ivanova, Masha Y; Achenbach, Thomas M; Rescorla, Leslie A; Tumer, Lori V; Ahmeti-Pronaj, Adelina; Au, Alma; Maese, Carmen Avila; Bellina, Monica; Caldas, J Carlos; Chen, Yi-Chuen; Csemy, Ladislav; da Rocha, Marina M; Decoster, Jeroen; Dobrean, Anca; Ezpeleta, Lourdes; Fontaine, Johnny R J; Funabiki, Yasuko; Guðmundsson, Halldór S; Harder, Valerie S; de la Cabada, Marie Leiner; Leung, Patrick; Liu, Jianghong; Mahr, Safia; Malykh, Sergey; Maras, Jelena Srdanovic; Markovic, Jasminka; Ndetei, David M; Oh, Kyung Ja; Petot, Jean-Michel; Riad, Geylan; Sakarya, Direnc; Samaniego, Virginia C; Sebre, Sandra; Shahini, Mimoza; Silvares, Edwiges; Simulioniene, Roma; Sokoli, Elvisa; Talcott, Joel B; Vazquez, Natalia; Zasepa, Ewa

    2015-06-01

    This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults' self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18-59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space. Confidence intervals indicated that sampling fluctuations could account for the deviant parameters. Results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½-18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies.

  20. The instantaneous radial growth rate of stellar discs

    NASA Astrophysics Data System (ADS)

    Pezzulli, G.; Fraternali, F.; Boissier, S.; Muñoz-Mateos, J. C.

    2015-08-01

We present a new and simple method to measure the instantaneous mass and radial growth rates of the stellar discs of spiral galaxies, based on their star formation rate surface density (SFRD) profiles. Under the hypothesis that discs are exponential with time-varying scalelengths, we derive a universal theoretical profile for the SFRD, with a linear dependence on two parameters: the specific mass growth rate ν_M ≡ Ṁ_⋆/M_⋆ and the specific radial growth rate ν_R ≡ Ṙ_⋆/R_⋆ of the disc. We test our theory on a sample of 35 nearby spiral galaxies, for which we derive a measurement of νM and νR. 32/35 galaxies show the signature of ongoing inside-out growth (νR > 0). The typical derived e-folding time-scales for mass and radial growth in our sample are ~10 and ~30 Gyr, respectively, with some systematic uncertainties. More massive discs have a larger scatter in νM and νR, biased towards a slower growth, both in mass and size. We find a linear relation between the two growth rates, indicating that our galaxy discs grow in size at ~0.35 times the rate at which they grow in mass; this ratio is largely unaffected by systematics. Our results are in very good agreement with theoretical expectations if known scaling relations of disc galaxies are not evolving with time.
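The "universal theoretical profile" can be reconstructed from the stated hypothesis of an exponential disc with a time-varying scalelength; the following is a sketch consistent with the definitions quoted above (it neglects any mass-return or recycling correction the paper may include):

```latex
% Exponential stellar disc with time-varying mass and scalelength:
\Sigma_\star(R,t) = \frac{M_\star(t)}{2\pi R_\star(t)^2}
                    \exp\!\left(-\frac{R}{R_\star(t)}\right)

% Differentiating in time, with \nu_M \equiv \dot{M}_\star/M_\star and
% \nu_R \equiv \dot{R}_\star/R_\star:
\frac{\partial \Sigma_\star}{\partial t}
  = \Sigma_\star(R,t)\,\Bigl[\nu_M + \Bigl(\frac{R}{R_\star} - 2\Bigr)\nu_R\Bigr]
```

Identifying the time derivative of the stellar surface density with the SFRD yields a profile that is linear in the two parameters ν_M and ν_R, as the abstract states, so both can be fitted from an observed SFRD profile.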

  1. Radon exhalation rates from building materials using electret ion chamber radon monitors in accumulators.

    PubMed

    Kotrappa, Payasada; Stieff, Frederick

    2009-08-01

    An electret ion chamber (EIC) radon monitor in a sealed accumulator measures the integrated average radon concentration at the end of the accumulation duration. Theoretical equations have been derived to relate such radon concentrations (Bq m(-3)) to the radon emanation rate (Bq d(-1)) from building materials enclosed in the accumulator. As an illustration, a 4-L sealable glass jar has been used as an accumulator to determine the radon emanation rate from different granite samples. The radon emanation rate was converted into radon flux (Bq m(-2) d(-1)) by dividing the emanation rate by the surface area of the sample. Fluxes measured on typical, commercially available granites ranged from 20 to 30 Bq m(-2) d(-1). These results are similar to those reported in the literature. The lower limit of detection for a 2-d measurement is 7 Bq m(-2) d(-1). The equations derived can also be used for other sealable accumulators and other integrating detectors, such as alpha track detectors.
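
    A textbook form of such an accumulator equation, assuming a sealed chamber with no leakage and negligible initial radon (the paper's own derivation may include further terms), can be inverted as follows; units and sample values are illustrative.

```python
import math

RN222_LAMBDA = math.log(2) / 3.8235  # radon-222 decay constant, per day

def emanation_rate(c_avg, volume_m3, days):
    """Invert the average-concentration relation for a sealed accumulator:
    C_avg = (E / (lambda*V)) * (1 - (1 - exp(-lambda*T)) / (lambda*T)).
    c_avg in Bq/m3; returns the emanation rate E in Bq/day."""
    lt = RN222_LAMBDA * days
    growth = 1.0 - (1.0 - math.exp(-lt)) / lt
    return c_avg * RN222_LAMBDA * volume_m3 / growth

def flux(emanation_bq_per_day, area_m2):
    # Radon flux (Bq m-2 d-1) = emanation rate / exposed surface area
    return emanation_bq_per_day / area_m2
```

    For short exposures the relation reduces to E ≈ 2 C_avg V / T, a convenient sanity check on the inversion.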

  2. Understanding erosion rates in the Himalayan orogen: A case study from the Arun Valley

    NASA Astrophysics Data System (ADS)

    Olen, Stephanie M.; Bookhagen, Bodo; Hoffmann, Bernd; Sachse, Dirk; Adhikari, D. P.; Strecker, Manfred R.

    2015-10-01

    Understanding the rates and pattern of erosion is a key aspect of deciphering the impacts of climate and tectonics on landscape evolution. Denudation rates derived from terrestrial cosmogenic nuclides (TCNs) are commonly used to quantify erosion and bridge tectonic (Myr) and climatic (up to several kiloyears) time scales. However, how the processes of erosion in active orogens are ultimately reflected in 10Be TCN samples remains a topic of discussion. We investigate this problem in the Arun Valley of eastern Nepal with 34 new 10Be-derived catchment-mean denudation rates. The Arun Valley is characterized by steep north-south gradients in topography and climate. Locally, denudation rates increase northward, from <0.2 mm yr-1 to ~1.5 mm yr-1 in tributary samples, while main stem samples appear to increase downstream from ~0.2 mm yr-1 at the border with Tibet to 0.91 mm yr-1 in the foreland. Denudation rates most strongly correlate with normalized channel steepness (R2 = 0.67), which has been commonly interpreted to indicate tectonic activity. Significant downstream decrease of 10Be concentration in the main stem Arun suggests that upstream sediment grains are fining to the point that they are operationally excluded from the processed sample. This results in 10Be concentrations and denudation rates that do not uniformly represent the upstream catchment area. We observe strong impacts on 10Be concentrations from local, nonfluvial geomorphic processes, such as glaciation and landsliding coinciding with areas of peak rainfall rates, pointing toward climatic modulation of predominantly tectonically driven denudation rates.

  3. Rating long-term care facilities on pressure ulcer development: importance of case-mix adjustment.

    PubMed

    Berlowitz, D R; Ash, A S; Brandeis, G H; Brand, H K; Halpern, J L; Moskowitz, M A

    1996-03-15

    To determine the importance of case-mix adjustment in interpreting differences in rates of pressure ulcer development in Department of Veterans Affairs long-term care facilities. A sample assembled from the Patient Assessment File, a Veterans Affairs administrative database, was used to derive predictors of pressure ulcer development; the resulting model was validated in a separate sample. Facility-level rates of pressure ulcer development, both unadjusted and adjusted for case mix using the predictive model, were compared. Department of Veterans Affairs long-term care facilities. The derivation sample consisted of 31,150 intermediate medicine and nursing home residents who were initially free of pressure ulcers and were institutionalized between October 1991 and April 1993. The validation sample consisted of 17,946 residents institutionalized from April 1993 to October 1993. Development of a stage 2 or greater pressure ulcer. Eleven factors predicted pressure ulcer development. Validated performance properties of the resulting model were good. Model-predicted rates of pressure ulcer development at individual long-term care facilities varied from 1.9% to 6.3%, and observed rates ranged from 0% to 10.9%. Case-mix-adjusted rates and ranks of facilities differed considerably from unadjusted ratings. For example, among five facilities that were identified as high outliers on the basis of unadjusted rates, two remained as outliers after adjustment for case mix. Long-term care facilities differ in case mix. Adjustments for case mix result in different judgments about facility performance and should be used when facility incidence rates are compared.
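
    Indirect standardization is one standard way to make such a case-mix adjustment (the study used its own regression-model-based variant): the facility's observed-to-expected ratio from the predictive model rescales the overall incidence.

```python
def case_mix_adjusted_rate(observed_events, expected_events, overall_rate):
    """Indirect standardization sketch (illustrative, not necessarily the
    paper's exact procedure): a facility whose residents are sicker than
    average has a larger expected count, so the same observed count yields
    a lower adjusted rate."""
    return (observed_events / expected_events) * overall_rate
```

    A facility with 12 observed ulcers against 8 expected, in a system with 4% overall incidence, gets an adjusted rate of 6%, flagging it as worse than its raw count alone would suggest.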

  4. Professor Gender, Age, and "Hotness" in Influencing College Students' Generation and Interpretation of Professor Ratings

    ERIC Educational Resources Information Center

    Sohr-Preston, Sara L.; Boswell, Stefanie S.; McCaleb, Kayla; Robertson, Deanna

    2016-01-01

    A sample of 230 undergraduate psychology students rated their expectations of a bogus professor (who was randomly designated a man or woman and "hot" versus "not hot") based on ratings and comments found on RateMyProfessors.com. Five professor qualities were derived using principal components analysis: dedication,…

  5. 0-6760 : improved trip generation data for Texas using workplace and special generator surveys.

    DOT National Transportation Integrated Search

    2014-08-01

    Trip generation rates play an important role in transportation planning, which can help in making informed decisions about future transportation investment and design. However, sometimes the rates are derived from small sample sizes or may ...

  6. New approach to calibrating bed load samplers

    USGS Publications Warehouse

    Hubbell, D.W.; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.

    1985-01-01

    Cyclic variations in bed load discharge at a point, which are an inherent part of the process of bed load movement, complicate calibration of bed load samplers and preclude the use of average rates to define sampling efficiencies. Calibration curves, rather than efficiencies, are derived by two independent methods using data collected with prototype versions of the Helley‐Smith sampler in a large calibration facility capable of continuously measuring transport rates across a 9 ft (2.7 m) width. Results from both methods agree. Composite calibration curves, based on matching probability distribution functions of samples and measured rates from different hydraulic conditions (runs), are obtained for six different versions of the sampler. Sampled rates corrected by the calibration curves agree with measured rates for individual runs.
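
    Deriving a calibration curve by matching probability distribution functions can be sketched as a quantile-pairing step followed by interpolation; this is a simplification of the paper's procedure, with our own function names and toy data.

```python
def calibration_curve(sampled, measured):
    """Pair equal quantiles of the sampled and measured bed load rates
    (sorting both series matches their empirical distribution functions),
    yielding points on a calibration curve rather than a single efficiency."""
    assert len(sampled) == len(measured)
    return list(zip(sorted(sampled), sorted(measured)))

def correct(rate, curve):
    # Correct a sampled rate by piecewise-linear interpolation on the curve.
    xs, ys = zip(*curve)
    if rate <= xs[0]:
        return ys[0]
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if rate <= x1:
            return y0 + (y1 - y0) * (rate - x0) / (x1 - x0)
    return ys[-1]
```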

  7. Using the Sampling Margin of Error to Assess the Interpretative Validity of Student Evaluations of Teaching

    ERIC Educational Resources Information Center

    James, David E.; Schraw, Gregory; Kuch, Fred

    2015-01-01

    We present an equation, derived from standard statistical theory, that can be used to estimate sampling margin of error for student evaluations of teaching (SETs). We use the equation to examine the effect of sample size, response rates and sample variability on the estimated sampling margin of error, and present results in four tables that allow…
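
    The equation itself is not reproduced in the abstract, but a standard survey-sampling margin of error for a class-mean rating, with a finite-population correction, involves the same ingredients the authors name (sample size, response rate via n out of N, and sample variability). This is an illustrative form, not necessarily the paper's exact equation.

```python
import math

def set_margin_of_error(s, n, N, z=1.96):
    """Margin of error for a mean SET rating from n respondents out of a
    class of N, with sample standard deviation s, at ~95% confidence.
    The sqrt((N-n)/(N-1)) factor is the finite-population correction."""
    fpc = math.sqrt((N - n) / (N - 1)) if N > 1 else 0.0
    return z * (s / math.sqrt(n)) * fpc
```

    Note that when every student responds (n = N) the margin of error collapses to zero, which is why low response rates dominate the interpretative validity of SET means.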

  8. Intercoalescence time distribution of incomplete gene genealogies in temporally varying populations, and applications in population genetic inference.

    PubMed

    Chen, Hua

    2013-03-01

    Tracing back to a specific time T in the past, the genealogy of a sample of haplotypes may not have reached its common ancestor and may leave m lineages extant. For such an incomplete genealogy truncated at a specific time T in the past, the distribution and expectation of the intercoalescence times conditional on T are derived in an exact form in this paper for populations of deterministically time-varying sizes, specifically, for populations growing exponentially. The derived intercoalescence time distribution can be integrated into the coalescent-based joint allele frequency spectrum (JAFS) theory, and is useful for population genetic inference from large-scale genomic data, without relying on computationally intensive approaches, such as importance sampling and Markov Chain Monte Carlo (MCMC) methods. The inference of several important parameters relying on this derived conditional distribution is demonstrated: quantifying population growth rate and onset time, and estimating the number of ancestral lineages at a specific ancient time. Simulation studies confirm the validity of the derivation and the statistical efficiency of the methods using the derived intercoalescence time distribution. Two examples of real data are given to show the inference of the population growth rate of a European sample from the NIEHS Environmental Genome Project, and the number of ancient lineages of 31 mitochondrial genomes from Tibetan populations. © 2013 Blackwell Publishing Ltd/University College London.
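
    The exact conditional distributions are the paper's contribution; as a point of comparison, the same intercoalescence times can be generated by Monte Carlo using the standard time-rescaling argument. The sketch below (our notation and parameter values) simulates a genealogy truncated at time T for a population that grows exponentially forward in time, so that backward in time N(t) = N0 * exp(-beta * t).

```python
import math
import random

def sim_intercoalescence_times(n, N0, beta, T, seed=0):
    """Monte Carlo sketch (not the paper's closed-form result). At backward
    time t, k lineages coalesce at rate C(k,2)/N(t); drawing an Exp(1)
    deviate u and inverting the cumulative hazard gives the waiting time w.
    Returns (intercoalescence times, number of lineages extant at T)."""
    rng = random.Random(seed)
    t, k, times = 0.0, n, []
    while k > 1:
        u = rng.expovariate(1.0)
        pairs = k * (k - 1) / 2.0
        # Cumulative hazard t..t+w equals u when
        # exp(beta*(t+w)) = exp(beta*t) + u*beta*N0/pairs
        w = math.log(math.exp(beta * t) + u * beta * N0 / pairs) / beta - t
        if t + w > T:
            break  # genealogy truncated at T; k lineages left extant
        t += w
        times.append(w)
        k -= 1
    return times, k
```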

  9. Adsorption of Pb(II) and Cu(II) by Ginkgo-Leaf-Derived Biochar Produced under Various Carbonization Temperatures and Times.

    PubMed

    Lee, Myoung-Eun; Park, Jin Hee; Chung, Jae Woo

    2017-12-07

    Ginkgo trees are common street trees in Korea, and the large amounts of leaves that fall onto the streets annually need to be cleaned and treated. Fallen ginkgo leaves have therefore been used as a raw material to produce biochar for the removal of heavy metals from solutions. Ginkgo-leaf-derived biochar was produced under various carbonization temperatures and times. This study evaluated the physicochemical properties and the Pb(II) and Cu(II) adsorption characteristics of ginkgo-leaf-derived biochar samples produced under different carbonization conditions. The biochar samples produced at 800 °C for 90 and 120 min contained the highest proportions of oxygen- and nitrogen-substituted carbons, which might contribute to a high metal-adsorption rate. The intensity of the phosphate bond increased with carbonization temperature up to 800 °C and with carbonization time up to 90 min. The Pb(II) and Cu(II) adsorption capacities were highest when the ginkgo-leaf-derived biochar was produced at 800 °C, with removal rates of 99.2% and 34.2%, respectively. The highest removal rate was achieved when the intensity of the phosphate functional group in the biochar was highest. Therefore, ginkgo-leaf-derived biochar produced at 800 °C for 90 min can be used as an effective bio-adsorbent for the removal of metals from solutions.

  10. Analysis of ultrasonic effect on powder and application to radioactive sample compaction

    NASA Astrophysics Data System (ADS)

    Kim, Jungsoon; Sim, Minseop; Kim, Jihyang; Kim, Moojoon

    2018-07-01

    The effect of ultrasound on powder compaction was analyzed. The decrease in the friction coefficient of the powder sample was derived theoretically, and ultrasound was found to improve the compaction rate. We applied this effect to the compaction of environmental radioactive soil samples. γ-ray spectroscopy analysis showed that more radionuclides could be detected in the sample compacted with ultrasound.

  11. Syndromes of Self-Reported Psychopathology for Ages 18–59 in 29 Societies

    PubMed Central

    Achenbach, Thomas M.; Rescorla, Leslie A.; Tumer, Lori V.; Ahmeti-Pronaj, Adelina; Au, Alma; Maese, Carmen Avila; Bellina, Monica; Caldas, J. Carlos; Chen, Yi-Chuen; Csemy, Ladislav; da Rocha, Marina M.; Decoster, Jeroen; Dobrean, Anca; Ezpeleta, Lourdes; Fontaine, Johnny R. J.; Funabiki, Yasuko; Guðmundsson, Halldór S.; Harder, Valerie S.; de la Cabada, Marie Leiner; Leung, Patrick; Liu, Jianghong; Mahr, Safia; Malykh, Sergey; Maras, Jelena Srdanovic; Markovic, Jasminka; Ndetei, David M.; Oh, Kyung Ja; Petot, Jean-Michel; Riad, Geylan; Sakarya, Direnc; Samaniego, Virginia C.; Sebre, Sandra; Shahini, Mimoza; Silvares, Edwiges; Simulioniene, Roma; Sokoli, Elvisa; Talcott, Joel B.; Vazquez, Natalia; Zasepa, Ewa

    2017-01-01

    This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults’ self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18–59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space. Confidence intervals indicated that sampling fluctuations could account for the deviant parameters. Results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½–18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies. PMID:29805197
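
    The primary fit index reported here can be computed from the usual chi-square-based formula (the standard definition, not specific to this study), with conventional cutoffs of roughly .05 for good and .08 for acceptable fit.

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation for a fitted model:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))), where n is the sample size.
    Values at or below ~.05 are conventionally read as good fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```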

  12. A statistical analysis of seat belt effectiveness in 1973-1975 model cars involved in towaway crashes. Volume 1

    DOT National Transportation Integrated Search

    1976-09-01

    Standardized injury rates and seat belt effectiveness measures are derived from a probability sample of towaway accidents involving 1973-1975 model cars. The data were collected in five different geographic regions. Weighted sample size available for...

  13. Factor validity and norms for the aberrant behavior checklist in a community sample of children with mental retardation.

    PubMed

    Marshburn, E C; Aman, M G

    1992-09-01

    The Aberrant Behavior Checklist (ABC) is a 58-item rating scale that was developed primarily to measure the effects of pharmacological intervention in individuals living in residential facilities. This study investigated the use of the ABC in a sample of community children with mental retardation. Teacher ratings on the ABC were collected on 666 students attending special classes. The data were factor analyzed and compared with other studies using the ABC. In addition, subscales were analyzed as a function of age, sex, and classroom placement, and preliminary norms were derived. A four-factor solution of the ABC was obtained. Congruence between the four derived factors and corresponding factors from the original ABC was high (congruence coefficients ranged between .87 and .96). Classroom placement and age had significant effects on subscale scores, whereas sex failed to affect ratings. The current results are sufficiently close to the original factor solution that the original scoring method can be used with community samples, although further studies are needed to look at this in more detail.
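
    The congruence coefficients reported (.87 to .96) are Tucker's phi between corresponding factor-loading vectors; the standard formula is the normalized inner product of the two loading vectors.

```python
import math

def tucker_phi(a, b):
    """Tucker's congruence coefficient between two factor-loading vectors:
    sum(x*y) / sqrt(sum(x^2) * sum(y^2)). Equals 1.0 for proportional
    loadings; values above ~.85 are conventionally read as fair similarity."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den
```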

  14. On the Berry-Esséen bound of frequency polygons for ϕ-mixing samples.

    PubMed

    Huang, Gan-Ji; Xing, Guodong

    2017-01-01

    Under some mild assumptions, the Berry-Esséen bound of frequency polygons for ϕ-mixing samples is presented. From the derived bound, we obtain the corresponding convergence rate of uniformly asymptotic normality, which is nearly [Formula: see text] under the given conditions.

  15. Unravelling the Mysteries of Slip Histories, Validating Cosmogenic 36Cl Derived Slip Rates on Normal Faults

    NASA Astrophysics Data System (ADS)

    Goodall, H.; Gregory, L. C.; Wedmore, L.; Roberts, G.; Shanks, R. P.; McCaffrey, K. J. W.; Amey, R.; Hooper, A. J.

    2017-12-01

    The cosmogenic isotope chlorine-36 (36Cl) is increasingly used as a tool to investigate normal fault slip rates over the last 10-20 thousand years. These slip histories are being used to address complex questions, including investigating slip clustering and understanding local and large scale fault interaction. Measurements are time consuming and expensive, and as a result there has been little work done validating these 36Cl derived slip histories. This study aims to investigate if the results are repeatable and therefore reliable estimates of how normal faults have been moving in the past. Our approach is to test if slip histories derived from 36Cl are the same when measured at different points along the same fault. As normal fault planes are progressively exhumed from the surface they accumulate 36Cl. Modelling these 36Cl concentrations allows estimation of a slip history. In a previous study, samples were collected from four sites on the Magnola fault in the Italian Apennines. Remodelling of the 36Cl data using a Bayesian approach shows that the sites produced disparate slip histories, which we interpret as being due to variable site geomorphology. In this study, multiple sites have been sampled along the Campo Felice fault in the central Italian Apennines. Initial results show strong agreement between the sites we have processed so far and a previous study. This indicates that if sample sites are selected taking the geomorphology into account, then 36Cl derived slip histories will be highly similar when sampled at any point along the fault. Therefore our study suggests that 36Cl derived slip histories are a consistent record of fault activity in the past.

  16. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
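
    The gap the authors stress between resubstitution and cross-validation accuracy is easy to reproduce: a 1-nearest-neighbour classifier scores perfectly on the data it was "trained" on, while leave-one-out cross-validation reveals the real error. A minimal illustration (not the paper's classification-tree models):

```python
def nn_predict(train, query):
    # 1-nearest-neighbour over (features, label) pairs, squared distance
    return min(train, key=lambda p: sum((a - b) ** 2
                                        for a, b in zip(p[0], query)))[1]

def resubstitution_acc(data):
    # Each point is its own nearest neighbour, so this is optimistically 1.0
    return sum(nn_predict(data, x) == y for x, y in data) / len(data)

def loo_cv_acc(data):
    # Leave-one-out: predict each point from the remaining points only
    hits = 0
    for i, (x, y) in enumerate(data):
        hits += nn_predict(data[:i] + data[i + 1:], x) == y
    return hits / len(data)
```

    On an XOR-style toy set, resubstitution reports perfect accuracy while leave-one-out reports zero, the same optimism the study documents for resubstitution rates.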

  17. Reference natural radionuclide concentrations in Australian soils and derived terrestrial air kerma rate.

    PubMed

    Kleinschmidt, R

    2017-06-01

    Sediment from drainage catchment outlets has been shown to be a useful means of sampling large land masses for soil composition. Naturally occurring radioactive material concentrations (uranium, thorium and potassium-40) in soil have been collated and converted to activity concentrations using data collected from the National Geochemistry Survey of Australia. Average terrestrial air kerma rate data are derived using the elemental concentration data, and is tabulated for Australia and states for use as baseline reference information. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
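
    Terrestrial air kerma (or absorbed dose) rates are conventionally derived from soil activity concentrations with UNSCEAR-style conversion coefficients; the coefficients below are the widely quoted values and may differ slightly from those used in the paper.

```python
def air_kerma_rate_nGy_h(c_u238, c_th232, c_k40):
    """Terrestrial dose rate in air 1 m above ground (nGy/h) from soil
    activity concentrations (Bq/kg) of the U-238 series, Th-232 series,
    and K-40, using commonly quoted UNSCEAR-style coefficients."""
    return 0.462 * c_u238 + 0.604 * c_th232 + 0.0417 * c_k40
```

    For example, soil at 30, 40, and 400 Bq/kg of U, Th, and K-40 respectively gives roughly 55 nGy/h, close to the world-average terrestrial dose rate.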

  18. Compressive power spectrum sensing for vibration-based output-only system identification of structural systems in the presence of noise

    NASA Astrophysics Data System (ADS)

    Tau Siesakul, Bamrung; Gkoktsi, Kyriaki; Giaralis, Agathoklis

    2015-05-01

    Motivated by the need to reduce monetary and energy consumption costs of wireless sensor networks in undertaking output-only/operational modal analysis of engineering structures, this paper considers a multi-coset analog-to-information converter for structural system identification from acceleration response signals of white-noise-excited linear damped structures sampled at sub-Nyquist rates. The underlying natural frequencies, peak gains in the frequency domain, and critical damping ratios of the vibrating structures are estimated directly from the sub-Nyquist measurements and, therefore, the computationally demanding signal reconstruction step is bypassed. This is accomplished by first employing a power spectrum blind sampling (PSBS) technique for multi-band wide sense stationary stochastic processes in conjunction with deterministic non-uniform multi-coset sampling patterns derived from solving a weighted least square optimization problem. Next, modal properties are derived by the standard frequency domain peak picking algorithm. Special attention is focused on assessing the potential of the adopted PSBS technique, which poses no sparsity requirements on the sensed signals, to derive accurate estimates of modal structural system properties from noisy sub-Nyquist measurements. To this aim, sub-Nyquist sampled acceleration response signals corrupted by various levels of additive white noise pertaining to a benchmark space truss structure with closely spaced natural frequencies are obtained within an efficient Monte Carlo simulation-based framework. Accurate estimates of natural frequencies and reasonable estimates of local peak spectral ordinates and critical damping ratios are derived from measurements sampled at about 70% below the Nyquist rate and for SNR as low as 0 dB, demonstrating that the adopted approach enjoys noise immunity.
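
    Once the power spectrum has been recovered from the sub-Nyquist measurements, the modal properties follow from standard frequency-domain peak picking; a minimal sketch of that final step (our simplification of the pipeline, with toy data):

```python
def pick_peaks(freqs, psd):
    """Frequency-domain peak picking: local maxima of the estimated power
    spectrum are read off as natural frequencies."""
    return [freqs[i] for i in range(1, len(psd) - 1)
            if psd[i] > psd[i - 1] and psd[i] > psd[i + 1]]

def half_power_damping(f_lo, f_hi, f_n):
    """Half-power (bandwidth) estimate of the critical damping ratio from
    the two frequencies where the peak drops to half its power."""
    return (f_hi - f_lo) / (2.0 * f_n)
```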

  19. A rapid chemiluminescent slot blot immunoassay for the detection and quantification of Clostridium botulinum neurotoxin type E, in cultures.

    PubMed

    Cadieux, Brigitte; Blanchfield, Burke; Smith, James P; Austin, John W

    2005-05-01

    A simple, rapid, cost-effective in vitro slot blot immunoassay was developed for the detection and quantification of botulinum neurotoxin type E (BoNT/E) in cultures. Culture supernatants of 36 strains of clostridia, including 12 strains of Clostridium botulinum type E, 12 strains of other C. botulinum neurotoxin serotypes, and 12 strains of other clostridial species were tested. Samples containing BoNT/E were detected using affinity-purified polyclonal rabbit antisera prepared against BoNT/E with subsequent detection of secondary antibodies using chemiluminescence. All strains of C. botulinum type E tested positive, while all non-C. botulinum type E strains tested negative. The sensitivity of the slot blot immunoassay for detection of BoNT/E was approximately four mouse lethal doses (MLD). The intensity of chemiluminescence was directly correlated with the concentration of BoNT/E up to 128 MLD, allowing quantification of BoNT/E between 4 and 128 MLD. The slot blot immunoassay was compared to the mouse bioassay for detection of BoNT/E using cultures derived from fish samples inoculated with C. botulinum type E, and cultures derived from naturally contaminated environmental samples. A total of 120 primary enrichment cultures derived from fish samples, of which 103 were inoculated with C. botulinum type E, and 17 were uninoculated controls, were assayed. Of the 103 primary enrichment cultures derived from inoculated fish samples, all were positive by mouse bioassay, while 94 were also positive by slot blot immunoassay, resulting in a 7.5% false-negative rate. All 17 primary enrichment cultures derived from the uninoculated fish samples were negative by both mouse bioassay and slot blot immunoassay. A total of 26 primary enrichment cultures derived from environmental samples were tested by mouse bioassay and slot blot immunoassay. Of 13 primary enrichment cultures positive by mouse bioassay, 12 were also positive by slot blot immunoassay, resulting in a 3.8% false-negative rate. All 13 primary enrichment cultures that tested negative by mouse bioassay also tested negative by slot blot immunoassay. The slot blot immunoassay could be used routinely as a positive screen for BoNT/E in primary enrichment cultures, and could be used as a replacement for the mouse bioassay for pure cultures.

  20. Estimating pesticide sampling rates by the polar organic chemical integrative sampler (POCIS) in the presence of natural organic matter and varying hydrodynamic conditions

    USGS Publications Warehouse

    Charlestra, Lucner; Amirbahman, Aria; Courtemanch, David L.; Alvarez, David A.; Patterson, Howard

    2012-01-01

    The polar organic chemical integrative sampler (POCIS) was calibrated to monitor pesticides in water under controlled laboratory conditions. The effect of natural organic matter (NOM) on the sampling rates (Rs) was evaluated in microcosms containing dissolved NOM quantified as total organic carbon (TOC). The effect of hydrodynamics was studied by comparing Rs values measured in stirred (SBE) and quiescent (QBE) batch experiments and a flow-through system (FTS). The level of NOM in the water used in these experiments had no effect on the magnitude of the pesticide sampling rates (p > 0.05). However, flow velocity and turbulence significantly increased the sampling rates of the pesticides in the FTS and SBE compared to the QBE (p < 0.001). The calibration data generated can be used to derive pesticide concentrations in water from POCIS deployed in stagnant and turbulent environmental systems without correction for NOM.
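
    POCIS data are interpreted with the standard linear-uptake model, in which the time-weighted average water concentration is the accumulated analyte mass divided by the product of sampling rate and deployment time; the values below are illustrative.

```python
def water_concentration(n_sorbed_ng, rs_L_per_day, days):
    """Time-weighted average water concentration (ng/L) from a POCIS
    extract, using the linear-uptake model C_w = N / (Rs * t):
    N is the analyte mass on the sorbent (ng), Rs the calibrated
    sampling rate (L/day), t the deployment time (days)."""
    return n_sorbed_ng / (rs_L_per_day * days)
```

    For example, 100 ng of pesticide accumulated over a 25-day deployment at Rs = 0.2 L/day corresponds to a 20 ng/L average concentration; the study's point is that Rs need not be corrected for NOM, but should match the flow regime.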

  1. Passage of scrapie to deer results in a new phenotype upon return passage to sheep

    USDA-ARS?s Scientific Manuscript database

    Aims: We previously demonstrated that scrapie has a 100% attack rate in white-tailed deer after either intracranial or oral inoculation. Samples from deer that developed scrapie had two different western blot patterns: samples derived from cerebrum had a banding pattern similar to the scrapie inocu...

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. L. Lewicki; G. E. Hilley; L. Dobeck

    A set of CO2 flux, geochemical, and hydrologic measurement techniques was used to characterize the source of and quantify gaseous and dissolved CO2 discharges from the area of Soda Springs, southeastern Idaho. An eddy covariance system was deployed for approximately one month near a bubbling spring and measured net CO2 fluxes from -74 to 1147 g m-2 d-1. An inversion of measured eddy covariance CO2 fluxes and corresponding modeled source weight functions mapped the surface CO2 flux distribution within and quantified the CO2 emission rate (24.9 t d-1) from a 0.05 km2 area surrounding the spring. Soil CO2 fluxes (<1 to 52,178 g m-2 d-1) were measured within a 0.05 km2 area of diffuse degassing using the accumulation chamber method. The estimated CO2 emission rate from this area was 49 t d-1. A carbon mass balance approach was used to estimate dissolved CO2 discharges from contributing sources at nine springs and the Soda Springs geyser. Total dissolved inorganic carbon (as CO2) discharge for all sampled groundwater features was 57.1 t d-1. Of this quantity, approximately 3% was derived from biogenic carbon dissolved in infiltrating groundwater, 35% was derived from carbonate mineral dissolution within the aquifer(s), and 62% was derived from deep source(s). Isotopic compositions of helium (1.74–2.37 Ra) and deeply derived carbon (δ13C approximately 3‰) suggested contribution of volatiles from mantle and carbonate sources. Assuming that the deeply derived CO2 discharge estimated for sampled groundwater features (approximately 35 t d-1) is representative of springs throughout the study area, the total rate of deeply derived CO2 input into the groundwater system within this area could be ~350 t d-1, similar to CO2 emission rates from a number of quiescent volcanoes.
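
    The source apportionment rests on carbon mass balance; its simplest building block is two-endmember isotope mixing (a simplified version of the multi-source balance used in the study, with illustrative delta values):

```python
def two_endmember_fraction(delta_sample, delta_a, delta_b):
    """Two-endmember mixing: the fraction of endmember b in a sample,
    from delta-13C (or other isotope ratio) values of the sample and
    the two endmembers a and b."""
    return (delta_sample - delta_a) / (delta_b - delta_a)
```

    A sample falling midway between the biogenic and deep endmember compositions would be apportioned 50/50; the study extends the same bookkeeping to three sources per spring.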

  3. Oxidation of urea-derived nitrogen by thaumarchaeota-dominated marine nitrifying communities.

    PubMed

    Tolar, Bradley B; Wallsgrove, Natalie J; Popp, Brian N; Hollibaugh, James T

    2017-12-01

    Urea nitrogen has been proposed to contribute significantly to nitrification by marine thaumarchaeotes. These inferences are based on distributions of thaumarchaeote urease genes rather than activity measurements. We found that ammonia oxidation rates were always higher than oxidation rates of urea-derived N in samples from coastal Georgia, USA (means ± SEM: 382 ± 35 versus 73 ± 24 nmol L-1 d-1, Mann-Whitney U-test p < 0.0001), and the South Atlantic Bight (20 ± 8.8 versus 2.2 ± 1.7 nmol L-1 d-1, p = 0.026) but not the Gulf of Alaska (8.8 ± 4.0 versus 1.5 ± 0.6, p > 0.05). Urea-derived N was relatively more important in samples from Antarctic continental shelf waters, though the difference was not statistically significant (19.4 ± 4.8 versus 12.0 ± 2.7 nmol L-1 d-1, p > 0.05). We found only weak correlations between oxidation rates of urea-derived N and the abundance or transcription of putative Thaumarchaeota ureC genes. Dependence on urea-derived N does not appear to be directly related to pH or ammonium concentrations. Competition experiments and release of 15NH3 suggest that urea is hydrolyzed to ammonia intracellularly, then a portion is lost to the dissolved pool. The contribution of urea-derived N to nitrification appears to be minor in temperate coastal waters, but may represent a significant portion of the nitrification flux in Antarctic coastal waters. © 2016 The Authors. Environmental Microbiology published by Society for Applied Microbiology and John Wiley & Sons Ltd.

  4. Evaluating Understanding of Popular Press Reports of Health Research.

    ERIC Educational Resources Information Center

    Yeaton, William H.; And Others

    1990-01-01

    A sample of 144 college students responded to content- and application-based questions derived from popular newspaper and magazine articles on contemporary health topics. Overall rate of reader misunderstanding was nearly 40 percent, with a uniform error rate for each of 16 articles, leading to a conclusion that consumer misunderstanding of…

  5. System and method for measuring permeability of materials

    DOEpatents

    Hallman, Jr., Russell Louis; Renner, Michael John

    2013-07-09

    Systems and methods are provided for measuring the permeance of a material. The permeability of the material may also be derived. Systems typically provide a liquid or high concentration fluid bath on one side of a material test sample, and a gas flow across the opposing side of the material test sample. The mass flow rate of permeated fluid as a fraction of the combined mass flow rate of gas and permeated fluid is used to calculate the permeance of the material. The material test sample may be a sheet, a tube, or a solid shape. Operational test conditions may be varied, including concentration of the fluid, temperature of the fluid, strain profile of the material test sample, and differential pressure across the material test sample.
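
    The measurement principle can be sketched as follows: the permeated-fluid mass flow is recovered from its measured fraction of the combined sweep-gas stream, then normalized by sample area and the pressure difference driving transport. The function and units below are illustrative, not prescribed by the patent.

```python
def permeance(total_flow_kg_s, fluid_mass_fraction, area_m2, dp_Pa):
    """Illustrative permeance calculation: the permeated-fluid flow is the
    measured fluid fraction of the combined gas + fluid stream; dividing
    by the sample area and the driving pressure difference gives the
    permeance (here kg s-1 m-2 Pa-1). Permeability would follow by
    multiplying by the sample thickness."""
    permeated_flow = total_flow_kg_s * fluid_mass_fraction
    return permeated_flow / (area_m2 * dp_Pa)
```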

  6. Fluid permeability measurement system and method

    DOEpatents

    Hallman, Jr., Russell Louis; Renner, Michael John [Oak Ridge, TN

    2008-02-05

    A system for measuring the permeance of a material. The permeability of the material may also be derived. The system provides a liquid or high concentration fluid bath on one side of a material test sample, and a gas flow across the opposing side of the material test sample. The mass flow rate of permeated fluid as a fraction of the combined mass flow rate of gas and permeated fluid is used to calculate the permeance of the material. The material test sample may be a sheet, a tube, or a solid shape. Operational test conditions may be varied, including concentration of the fluid, temperature of the fluid, strain profile of the material test sample, and differential pressure across the material test sample.

  7. Material permeance measurement system and method

    DOEpatents

    Hallman, Jr., Russell Louis; Renner, Michael John [Oak Ridge, TN

    2012-05-08

    A system for measuring the permeance of a material. The permeability of the material may also be derived. The system provides a liquid or high concentration fluid bath on one side of a material test sample, and a gas flow across the opposing side of the material test sample. The mass flow rate of permeated fluid as a fraction of the combined mass flow rate of gas and permeated fluid is used to calculate the permeance of the material. The material test sample may be a sheet, a tube, or a solid shape. Operational test conditions may be varied, including concentration of the fluid, temperature of the fluid, strain profile of the material test sample, and differential pressure across the material test sample.

  8. Representativeness and response rates from the Domestic/International Gastroenterology Surveillance Study (DIGEST).

    PubMed

    Tijssen, J G

    1999-01-01

    The Domestic/International Gastroenterology Surveillance Study (DIGEST) examined the prevalence of upper gastrointestinal symptoms among the general population in 10 countries, and the impact of these symptoms on healthcare usage and quality of life. This report discusses the validation of the DIGEST sample and reviews the response rates from the survey. External validation of the DIGEST sample was conducted by comparing the age, age by gender and annual household incomes of the sample with census-derived data. A comparison was also made between Psychological General Well-Being Index (PGWBI) scores from study subjects in the Scandinavian countries and the USA and the total sample population norms. Under- and oversampling, defined as ≥5% difference from the population norms, was evident in eight out of 10 countries, but no systematic bias was evident. The final distribution of the sample by gender was 51% female and 49% male. Although differences in PGWBI scores were noted between DIGEST subjects and population norms, these differences were <0.30 standard deviations--markedly below the difference considered as relevant for the PGWBI. Response rates for the survey in individual countries ranged from 17% in the USA to 61% in Norway, with a survey-wide rate of 27%. The overall response rate, including primary non-respondents, was 13.4%. The majority of nonresponse (51.4%) was attributed to failure to establish contact with the subjects, with 41.7% of subjects declining to be interviewed and the remaining 6.9% of subjects not meeting the age and sex criteria used for the survey. The DIGEST sample exhibited good external validity, providing a foundation for comparison between data derived from individual countries in the survey.

  9. Digital-computer normal shock position and restart control of a Mach 2.5 axisymmetric mixed-compression inlet

    NASA Technical Reports Server (NTRS)

    Neiner, G. H.; Cole, G. L.; Arpasi, D. J.

    1972-01-01

    Digital computer control of a mixed-compression inlet is discussed. The inlet was terminated with a choked orifice at the compressor face station to dynamically simulate a turbojet engine. Inlet diffuser exit airflow disturbances were used. A digital version of a previously tested analog control system was used for both normal shock and restart control. Digital computer algorithms were derived using z-transform and finite difference methods. Using a sample rate of 1000 samples per second, the digital normal shock and restart controls essentially duplicated the inlet analog computer control results. At a sample rate of 100 samples per second, the control system performed adequately but was less stable.
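
    The sample-rate trade-off reported above can be illustrated with a toy finite-difference controller (a sketch only; the actual NASA algorithms were derived with z-transform methods and are not reproduced here):

```python
def discretize_pi(kp, ki, fs):
    """Backward-difference (finite-difference) discretization of a PI
    controller: u[k] = u[k-1] + kp*(e[k]-e[k-1]) + (ki/fs)*e[k].
    Returns a stateful step function for one sample of error input."""
    state = {"u": 0.0, "e_prev": 0.0}
    dt = 1.0 / fs  # sample interval scales the integral term
    def step(e):
        state["u"] += kp * (e - state["e_prev"]) + ki * dt * e
        state["e_prev"] = e
        return state["u"]
    return step

# the same controller run for one second at 1000 and 100 samples per second
fast = discretize_pi(0.5, 20.0, 1000.0)
slow = discretize_pi(0.5, 20.0, 100.0)
u_fast = [fast(1.0) for _ in range(1000)]  # constant unit error
u_slow = [slow(1.0) for _ in range(100)]
```

    Scaling the integral term by the sample interval keeps the tuned response comparable across rates; what changes at the lower rate is the delay margin, consistent with the reduced stability reported at 100 samples per second.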

  10. Variations in the source, metal content and bioreactivity of technogenic aerosols: a case study from Port Talbot, Wales, UK.

    PubMed

    Moreno, Teresa; Merolla, Luciano; Gibbons, Wes; Greenwell, Leona; Jones, Tim; Richards, Roy

    2004-10-15

    Atmospheric aerosol samples were collected during different prevailing wind directions from a site located close to a busy motorway, a major steelworks, and the town of Port Talbot (Wales, UK). A high-volume collector was used (1100 l/min), enabling relatively large amounts of particulate matter (PM(10-2.5) and PM(2.5)) samples to be obtained on a polyurethane foam [PUF, H(2)N-C(O)O-CH(2)CH(3)] substrate over periods of 2-7 days. Four samples were chosen to exemplify different particle mixtures: SE- and NE-derived samples for particles moving along and across the motorway, a NW-derived sample from the town, and a mixed SW/SE-derived sample containing a mixture of particles from both steelworks and motorway. The latter sample showed the highest average collection rate (0.9 mg/h, 13 microg/m(3)) and included a prominent pollution episode when rainy winds were blowing from the direction of the steelworks. Both NW and SE samples were collected under dry conditions and show the same collection rate (0.7 mg/h, 10 microg/m(3)), whereas the NE sample was collected during wetter weather and shows the lowest rate (0.3 mg/h, 5 microg/m(3)). Scanning electron microscopy (SEM) and energy-dispersive X-ray microanalysis system (EDX) analyses show all samples are dominated by elemental and organic carbon compounds (EOCC) and nitrates, with lesser amounts of sulphates, felsic silicates, chlorides and metals. ICP-MS analyses show the SW/SE sample to be richest in metals, especially Fe, Zn, Ni, and Mn, these being attributed to an origin from the steelworks. The SE sample, blown along the motorway corridor, shows enhanced levels of Pb, V, Ti, As, and Ce, these metals being interpreted as defining a traffic-related chemical fingerprint. The NW sample shows a very low metal content. DNA plasmid assay data on the samples show TM(50) values varying from 66 to 175 microg/ml for the adjusted whole sample and 89 to 203 microg/ml for the soluble fraction. 
The SW/SE-mixed metalliferous sample is the most bioreactive (both whole and soluble) and the soluble fraction of the metal-depleted NW sample is the least bioreactive. The metal content of the aerosol samples, especially soluble metals such as Zn, is suggested to be the primary component responsible for oxidative damage of the DNA, and therefore most implicated in any health effects arising from the inhalation of these particulate cocktails.

  11. On the Accretion Rates of SW Sextantis Nova-like Variables

    NASA Astrophysics Data System (ADS)

    Ballouz, Ronald-Louis; Sion, Edward M.

    2009-06-01

    We present accretion rates for selected samples of nova-like variables having IUE archival spectra and distances uniformly determined using an infrared method by Knigge. A comparison with accretion rates derived independently with a multiparametric optimization modeling approach by Puebla et al. is carried out. The accretion rates of SW Sextantis nova-like systems are compared with the accretion rates of non-SW Sextantis systems in the Puebla et al. sample and in our sample, which was selected in the orbital period range of three to four and a half hours, with all systems having distances using the method of Knigge. Based upon the two independent modeling approaches, we find no significant difference between the accretion rates of SW Sextantis systems and non-SW Sextantis nova-like systems insofar as optically thick disk models are appropriate. We find little evidence to suggest that the SW Sex stars have higher accretion rates than other nova-like cataclysmic variables (CVs) above the period gap within the same range of orbital periods.

  12. Using (137)Cs measurements to estimate soil erosion rates in the Pčinja and South Morava River Basins, southeastern Serbia.

    PubMed

    Petrović, Jelena; Dragović, Snežana; Dragović, Ranko; Đorđević, Milan; Đokić, Mrđan; Zlatković, Bojan; Walling, Desmond

    2016-07-01

    The need for reliable assessments of soil erosion rates in Serbia has directed attention to the potential for using (137)Cs measurements to derive estimates of soil redistribution rates. Since, to date, this approach has not been applied in southeastern Serbia, a reconnaissance study was undertaken to confirm its viability. The need to take account of the occurrence of substantial Chernobyl fallout was seen as a potential problem. Samples for (137)Cs measurement were collected from a zone of uncultivated soils in the watersheds of Pčinja and South Morava Rivers, an area with known high soil erosion rates. Two theoretical conversion models, the profile distribution (PD) model and diffusion and migration (D&M) model were used to derive estimates of soil erosion and deposition rates from the (137)Cs measurements. The estimates of soil redistribution rates derived by using the PD and D&M models were found to differ substantially and this difference was ascribed to the assumptions of the simpler PD model that cause it to overestimate rates of soil loss. The results provided by the D&M model were judged to be more reliable. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Dependability of Data Derived from Time Sampling Methods with Multiple Observation Targets

    ERIC Educational Resources Information Center

    Johnson, Austin H.; Chafouleas, Sandra M.; Briesch, Amy M.

    2017-01-01

    In this study, generalizability theory was used to examine the extent to which (a) time-sampling methodology, (b) number of simultaneous behavior targets, and (c) individual raters influenced variance in ratings of academic engagement for an elementary-aged student. Ten graduate-student raters, with an average of 7.20 hr of previous training in…

  14. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can arise when sample size and allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  15. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    PubMed

    Stadler, Tanja

    2009-11-07

    The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a hepatitis C virus dataset from Egypt. We show that the transmission time estimates are significantly different: the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.
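
    The non-identifiability result stated above can be made concrete (this is the commonly cited form of the rate transformation, paraphrased rather than quoted from the paper): a birth-death process with speciation rate λ, extinction rate μ and sampling probability ρ yields the same reconstructed trees as a completely sampled process with reduced rates

        λ' = ρλ,        μ' = μ − λ(1 − ρ),

    so (λ, μ, ρ) cannot be told apart from (λ', μ', 1). Note that λ' − μ' = λ − μ: the net diversification rate survives the transformation, which is why it remains jointly inferable even when the three parameters separately are not.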

  16. The effect of rate denominator source on US fatal occupational injury rate estimates.

    PubMed

    Richardson, David; Loomis, Dana; Bailer, A John; Bena, James

    2004-09-01

    The Current Population Survey (CPS) is often used as a source of denominator information for analyses of US fatal occupational injury rates. However, given the relatively small sample size of the CPS, analyses that examine the cross-classification of occupation or industry with demographic or geographic characteristics will often produce highly imprecise rate estimates. The Decennial Census of Population provides an alternative source for rate denominator information. We investigate the comparability of fatal injury rates derived using these two sources of rate denominator information. Information on fatal occupational injuries that occurred between January 1, 1983 and December 31, 1994 was obtained from the National Traumatic Occupational Fatality surveillance system. Annual estimates of employment by occupation, industry, age, and sex were derived from the CPS, and by linear interpolation and extrapolation from the 1980 and 1990 Census of Population. Fatal injury rates derived using these denominator data were compared. Fatal injury rates calculated using Census-based denominator data were within 10% of rates calculated using CPS data for all major occupation groups except farming/forestry/fishing, for which the fatal injury rate calculated using Census-based denominator data was 24.69/100,000 worker-years and the rate calculated using CPS data was 19.97/100,000 worker-years. The choice of denominator data source had minimal influence on estimates of trends over calendar time in the fatal injury rates for most major occupation and industry groups. The Census offers a reasonable source for deriving fatal injury rate denominator data in situations where the CPS does not provide sufficiently precise data, although the Census may underestimate the population-at-risk in some industries as a consequence of seasonal variation in employment.

  17. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
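
    As a back-of-envelope companion to the program described above (which uses band-recovery models; the one-line binomial simplification below is mine, not the program's method):

```python
import math

def sample_size_binomial(s, cv):
    """Banded-bird sample size for a target CV of an annual survival
    estimate, under a simple binomial approximation:
    CV(S_hat) = sqrt((1-s)/(n*s))  =>  n = (1-s)/(s*cv**2)."""
    return math.ceil((1.0 - s) / (s * cv * cv))

# e.g. expected survival 0.6, desired 10% CV on the annual estimate
n = sample_size_binomial(0.6, 0.10)
```

    The actual program additionally accounts for study length and the CV of the mean annual survival estimate, which a single binomial formula cannot.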

  18. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    PubMed

    Höhna, Sebastian

    2014-01-01

    Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered if the extinction rate is large and compared to a pure-birth model). Finally, I applied six different diversification rate models--ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. 
The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be violated.

  19. Calibration and field application of passive sampling for episodic exposure to polar organic pesticides in streams.

    PubMed

    Fernández, Diego; Vermeirssen, Etiënne L M; Bandow, Nicole; Muñoz, Katherine; Schäfer, Ralf B

    2014-11-01

    Rainfall-triggered runoff is a major driver of pesticide input in streams. Only few studies have examined the suitability of passive sampling to quantify such episodic exposures. In this study, we used Empore™ styrene-divinylbenzene reverse phase sulfonated disks (SDB disks) and event-driven water samples (EDS) to assess exposure to 15 fungicides and 4 insecticides in 17 streams in a German vineyard area during 4 rainfall events. We also conducted a microcosm experiment to determine the SDB-disk sampling rates and provide a free-software solution to derive sampling rates under time-variable exposure. Sampling rates ranged from 0.26 to 0.77 L d(-1) and time-weighted average (TWA) concentrations from 0.05 to 2.11 μg/L. The 2 sampling systems were in good agreement and EDS exceeded TWA concentrations on average by a factor of 3. Our study demonstrates that passive sampling is suitable to quantify episodic exposures from polar organic pesticides. Copyright © 2014 Elsevier Ltd. All rights reserved.
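
    The time-weighted average concentrations reported above follow from the standard linear-uptake relation for passive samplers (a minimal sketch; variable names are illustrative):

```python
def twa_concentration(n_absorbed_ug, rs_l_per_day, days):
    """Time-weighted average water concentration from a passive sampler:
    C_TWA = N / (Rs * t), where N is the mass accumulated on the disk,
    Rs the sampling rate, and t the deployment time."""
    return n_absorbed_ug / (rs_l_per_day * days)

# e.g. 2.1 ug accumulated over a 14-day deployment at Rs = 0.5 L/d
c = twa_concentration(2.1, 0.5, 14)
```

    This relation holds only while the disk remains in the linear (integrative) uptake regime; the study's free-software solution generalizes Rs estimation to time-variable exposure.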

  20. Immersion freezing of internally and externally mixed mineral dust species analyzed by stochastic and deterministic models

    NASA Astrophysics Data System (ADS)

    Wong, B.; Kilthau, W.; Knopf, D. A.

    2017-12-01

    Immersion freezing is recognized as the most important ice crystal formation process in mixed-phase cloud environments. It is well established that mineral dust species can act as efficient ice nucleating particles. Previous research has focused on determination of the ice nucleation propensity of individual mineral dust species. In this study, the focus is placed on how different mineral dust species such as illite, kaolinite and feldspar, initiate freezing of water droplets when present in internal and external mixtures. The frozen fraction data for single and multicomponent mineral dust droplet mixtures are recorded under identical cooling rates. Additionally, the time dependence of freezing is explored. Externally and internally mixed mineral dust droplet samples are exposed to constant temperatures (isothermal freezing experiments) and frozen fraction data are recorded at set time intervals. Analyses of single and multicomponent mineral dust droplet samples include different stochastic and deterministic models such as the derivation of the heterogeneous ice nucleation rate coefficient (Jhet), the single contact angle (α) description, the α-PDF model, active sites representation, and the deterministic model. Parameter sets derived from freezing data of single component mineral dust samples are evaluated for prediction of cooling rate dependent and isothermal freezing of multicomponent externally or internally mixed mineral dust samples. The atmospheric implications of our findings are discussed.
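
    The stochastic side of the analysis above treats freezing as a rate process; a minimal single-coefficient sketch for the isothermal experiments (parameter values are hypothetical):

```python
import math

def frozen_fraction(j_het_cm2_s, area_cm2, t_s):
    """Stochastic description of isothermal immersion freezing:
    f(t) = 1 - exp(-J_het * A * t), for droplets that each carry
    surface area A of ice-nucleating material held at one temperature."""
    return 1.0 - math.exp(-j_het_cm2_s * area_cm2 * t_s)

# J_het = 100 cm^-2 s^-1, A = 1e-5 cm^2 of dust per droplet, 10 minutes
f = frozen_fraction(1e2, 1e-5, 600.0)
```

    The α-PDF, active-site and deterministic descriptions listed in the abstract replace this single rate coefficient with distributions over contact angles or purely temperature-dependent site densities.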

  1. The early-type strong emission-line supergiants of the Magellanic Clouds - A spectroscopic zoology

    NASA Technical Reports Server (NTRS)

    Shore, S. N.; Sanduleak, N.

    1984-01-01

    The results of a spectroscopic survey of 21 early-type extreme emission line supergiants of the Large and Small Magellanic Clouds using IUE and optical spectra are presented. The combined observations are discussed and the literature on each star in the sample is summarized. The classification procedures and the methods by which effective temperatures, bolometric magnitudes, and reddenings were assigned are discussed. The derived reddening values are given along with some results concerning anomalous reddening among the sample stars. The derived mass, luminosity, and radius for each star are presented, and the ultraviolet emission lines are described. Mass-loss rates are derived and discussed, and the implications of these observations for the evolution of the most massive stars in the Local Group are addressed.

  2. Porous carbons prepared by direct carbonization of MOFs for supercapacitors

    NASA Astrophysics Data System (ADS)

    Yan, Xinlong; Li, Xuejin; Yan, Zifeng; Komarneni, Sridhar

    2014-07-01

    Three porous carbons were prepared by direct carbonization of HKUST-1, MOF-5 and Al-PCP without additional carbon precursors. The carbon samples obtained by carbonization at 1073 K were characterized by XRD, TEM and N2 physisorption techniques followed by testing for electrochemical performance. The BET surface areas of the three carbons were in the range of 50-1103 m2/g. As electrode materials for supercapacitors, the MOF-5 and Al-PCP derived carbons displayed ideal capacitor behavior, whereas the HKUST-1 derived carbon showed poor capacitive behavior at various sweep rates and current densities. Among those carbon samples, the Al-PCP derived carbon exhibited the highest specific capacitance (232.8 F/g) in 30% KOH solution at the current density of 100 mA/g.

  3. Linear discriminant analysis with misallocation in training samples

    NASA Technical Reports Server (NTRS)

    Chhikara, R. (Principal Investigator); Mckeon, J.

    1982-01-01

    Linear discriminant analysis for a two-class case is studied in the presence of misallocation in training samples. A general approach to modeling misallocation is formulated, and the mean vectors and covariance matrices of the mixture distributions are derived. The asymptotic distribution of the discriminant boundary is obtained and the asymptotic first two moments of the two types of error rate are given. Certain numerical results for the error rates are presented by considering the random and two non-random misallocation models. It is shown that when the allocation procedure for training samples is objectively formulated, the effect of misallocation on the error rates of the Bayes linear discriminant rule can almost be eliminated. If, however, this is not possible, the Fisher rule may be preferred over the Bayes rule.

  4. SPECTROSCOPY OF HIGH-REDSHIFT SUPERNOVAE FROM THE ESSENCE PROJECT: THE FIRST FOUR YEARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, R. J.; Chornock, R.; Silverman, J. M.

    We present the results of spectroscopic observations from the ESSENCE high-redshift supernova (SN) survey during its first four years of operation. This sample includes spectra of all SNe Ia whose light curves were presented by Miknaitis et al. and used in the cosmological analyses of Davis et al. and Wood-Vasey et al. The sample represents 273 hr of spectroscopic observations with 6.5-10 m class telescopes of objects detected and selected for spectroscopy by the ESSENCE team. We present 184 spectra of 156 objects. Combining this sample with that of Matheson et al., we have a total sample of 329 spectra of 274 objects. From this, we are able to spectroscopically classify 118 Type Ia SNe. As the survey has matured, the efficiency of classifying SNe Ia has remained constant while we have observed both higher-redshift SNe Ia and SNe Ia farther from maximum brightness. Examining the subsample of SNe Ia with host-galaxy redshifts shows that redshifts derived from only the SN Ia spectra are consistent with redshifts found from host-galaxy spectra. Moreover, the phases derived from only the SN Ia spectra are consistent with those derived from light-curve fits. By comparing our spectra to local templates, we find that the rate of objects similar to the overluminous SN 1991T and the underluminous SN 1991bg in our sample is consistent with that of the local sample. We do note, however, that we detect no object spectroscopically or photometrically similar to SN 1991bg. Although systematic effects could reduce the high-redshift rate we expect based on the low-redshift surveys, it is possible that SN 1991bg-like SNe Ia are less prevalent at high redshift.

  5. Estimating pesticide sampling rates by the polar organic chemical integrative sampler (POCIS) in the presence of natural organic matter and varying hydrodynamic conditions.

    PubMed

    Charlestra, Lucner; Amirbahman, Aria; Courtemanch, David L; Alvarez, David A; Patterson, Howard

    2012-10-01

    The polar organic chemical integrative sampler (POCIS) was calibrated to monitor pesticides in water under controlled laboratory conditions. The effect of natural organic matter (NOM) on the sampling rates (R(s)) was evaluated in microcosms containing <0.1-5 mg L(-1) of total organic carbon (TOC). The effect of hydrodynamics was studied by comparing R(s) values measured in stirred (SBE) and quiescent (QBE) batch experiments and a flow-through system (FTS). The level of NOM in the water used in these experiments had no effect on the magnitude of the pesticide sampling rates (p > 0.05). However, flow velocity and turbulence significantly increased the sampling rates of the pesticides in the FTS and SBE compared to the QBE (p < 0.001). The calibration data generated can be used to derive pesticide concentrations in water from POCIS deployed in stagnant and turbulent environmental systems without correction for NOM. Copyright © 2012 Elsevier Ltd. All rights reserved.
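
    Under constant exposure, the sampling rates calibrated above come from the linear-uptake relation N(t) = Rs·Cw·t; a sketch of the fit (synthetic data, variable names illustrative; the study's own software handles the harder time-variable case):

```python
def sampling_rate(times_d, masses_ug, c_water_ug_per_l):
    """Estimate a sampler's Rs (L/d) from a constant-concentration
    calibration: N(t) = Rs * Cw * t in the linear-uptake regime,
    fit by least squares through the origin."""
    num = sum(t * n for t, n in zip(times_d, masses_ug))
    den = sum(t * t for t in times_d)
    return (num / den) / c_water_ug_per_l

# synthetic calibration series generated with Rs = 0.5 L/d at Cw = 1 ug/L
rs = sampling_rate([1, 2, 4, 7, 14], [0.5, 1.0, 2.0, 3.5, 7.0], 1.0)
```

    Forcing the fit through the origin reflects the physical constraint that an unexposed sampler carries no analyte.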

  6. Activation Energy of Tantalum-Tungsten Oxide Thermite Reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cervantes, O; Kuntz, J; Gash, A

    2010-02-25

    The activation energy of a high melting temperature sol-gel (SG) derived tantalum-tungsten oxide thermite composite was determined using the Kissinger isoconversion method. The SG derived powder was consolidated using the High Pressure Spark Plasma Sintering (HPSPS) technique to 300 and 400 C to produce pellets with dimensions of 5 mm diameter by 1.5 mm height. A custom built ignition setup was developed to measure ignition temperatures at high heating rates (500-2000 C·min⁻¹). Such heating rates were required in order to ignite the thermite composite. Unlike the 400 C samples, results show that the samples consolidated to 300 C undergo an abrupt change in temperature response prior to ignition. This change in temperature response has been attributed to the crystallization of the amorphous WO₃ in the SG derived Ta-WO₃ thermite composite and not to a pre-ignition reaction between the constituents. Ignition temperatures for the Ta-WO₃ thermite ranged from approximately 465-670 C. The activation energies of the SG derived Ta-WO₃ thermite composite consolidated to 300 and 400 C were determined to be 37.787 ± 1.58 kJ·mol⁻¹ and 57.381 ± 2.26 kJ·mol⁻¹, respectively.
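
    The Kissinger isoconversion step above amounts to a linear fit of ln(β/Tp²) against 1/Tp, whose slope is −Ea/R; a sketch with synthetic data (the numbers below are generated, not the study's measurements):

```python
import math

R_GAS = 8.314  # J mol^-1 K^-1

def kissinger_ea(betas, t_peak):
    """Kissinger isoconversion: ln(beta/Tp^2) vs 1/Tp is a straight line
    with slope -Ea/R; recover the activation energy from an ordinary
    least-squares slope over (heating rate, peak/ignition temperature) pairs."""
    xs = [1.0 / t for t in t_peak]
    ys = [math.log(b / (t * t)) for b, t in zip(betas, t_peak)]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R_GAS  # activation energy in J/mol

# synthetic check: data generated with Ea = 50 kJ/mol should be recovered
temps = [750.0, 800.0, 850.0, 900.0]          # ignition temperatures (K)
rates = [t * t * math.exp(1.0 - 50000.0 / (R_GAS * t)) for t in temps]
ea = kissinger_ea(rates, temps)
```
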

  7. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    PubMed

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
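
    The stratified estimator at the core of the comparison above can be sketched as follows (the strata weights and rates are hypothetical, not data from the study):

```python
def stratified_rate(strata):
    """Stratified estimate of an adventitious-presence rate:
    p_hat = sum_h W_h * p_hat_h, with W_h the share of the field's grain
    in stratum h and p_hat_h the rate observed in that stratum. Strata
    would be defined from a gene-flow model's predicted cross-pollination
    (e.g. distance to the nearest GM field)."""
    assert abs(sum(w for w, _ in strata) - 1.0) < 1e-9  # weights must sum to 1
    return sum(w * p for w, p in strata)

# hypothetical field: 70% low-exposure area, 25% mid, 5% near the GM edge
p = stratified_rate([(0.70, 0.001), (0.25, 0.006), (0.05, 0.03)])
```

    Because model-based strata separate high- and low-exposure areas, the weighted estimate reaches a given precision with fewer sampled grains than simple random sampling, which is the gain the study quantifies.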

  8. In vitro physicochemical, phytochemical and functional properties of fiber rich fractions derived from by-products of six fruits.

    PubMed

    Saikia, Sangeeta; Mahanta, Charu Lata

    2016-03-01

    A comparative study was done on the health promoting and functional properties of the fibers obtained as by-products from six fruits viz., pomace of carambola (Averrhoa carambola L.) and pineapple (Ananas comosus L. Merr), peels of watermelon (Citrullus lanatus), Burmese grape (Baccurea sapida Muell. Arg) and Khasi mandarin orange (Citrus reticulata Blanco), and blossom of seeded banana (Musa balbisiana, ABB). Highest yield of fiber was obtained from Burmese grape peel (BGPL, 79.94 ± 0.41 g/100 g) and seeded banana blossom (BB 77.18 ± 0.20 g/100 g). The total dietary fiber content (TDF) was highest in fiber fraction derived from pineapple pomace (PNPM, 79.76 ± 0.42 g/100 g) and BGPL (67.27 ± 0.39 g/100 g). All the samples contained insoluble dietary fiber as the major fiber fraction. The fiber samples showed good water holding, oil holding and swelling capacities. The fiber samples exhibited antioxidant activity. All the samples showed good results for glucose adsorption, amylase activity inhibition, glucose diffusion rate and glucose diffusion reduction rate index.

  9. VizieR Online Data Catalog: CCD Hα and R photometry of 334 galaxies (James+, 2004)

    NASA Astrophysics Data System (ADS)

    James, P. A.; Shane, N. S.; Beckman, J. E.; Cardwell, A.; Collins, C. A.; Etherton, J.; de Jong, R. S.; Fathi, K.; Knapen, J. H.; Peletier, R. F.; Percival, S. M.; Pollacco, D. L.; Seigar, M. S.; Stedman, S.; Steele, I. A.

    2004-01-01

    Hα plus [NII] and R-band CCD photometry and derived parameters are presented for the full sample of 334 spiral and irregular galaxies. Galaxy distances are derived using a Virgo-infall corrected model which is described in the paper, and star formation rates are derived from Hα plus [NII] fluxes using the conversion defined by Kennicutt et al. (1994ApJ...435...22K). The entries are arranged within five bins in recession velocity, and in order of increasing Right Ascension within these bins. (1 data file).

  10. A Report on the Moffitt Undergraduate Library Book Theft Study.

    ERIC Educational Resources Information Center

    Kaske, Neal K.; Thompson, Donald D.

    A study was conducted at the Moffitt Undergraduate Library of the University of California at Berkeley to determine the extent and the cost of book losses due to theft and to determine the cost-effectiveness of book security systems. A sample inventory was taken and the theft rate (13.7%) was statistically derived. The rate of loss was translated…

  11. Quantization noise in digital speech. M.S. Thesis- Houston Univ.

    NASA Technical Reports Server (NTRS)

    Schmidt, O. L.

    1972-01-01

    The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 dB cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 dB cutoff of 2000 Hz.
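    The dependence of quantization noise on the number of levels can be demonstrated with a minimal numpy sketch: a uniform quantizer applied to a test tone (not speech; the signal and parameters are illustrative, not the thesis's voice channel), with signal-to-noise ratio improving by roughly 6 dB per added bit.

    ```python
    import numpy as np

    def quantize(signal, n_levels):
        """Uniform mid-rise quantizer over [-1, 1): each sample maps to the
        centre of one of n_levels equal-width cells."""
        step = 2.0 / n_levels
        idx = np.clip(np.floor((signal + 1.0) / step), 0, n_levels - 1)
        return -1.0 + (idx + 0.5) * step

    fs = 5000                                 # samples per second, as in the abstract
    t = np.arange(0, 1, 1 / fs)
    x = 0.9 * np.sin(2 * np.pi * 440 * t)     # illustrative test tone
    snrs = {}
    for levels in (8, 256):                   # 3-bit (as in the thesis) vs 8-bit
        err = x - quantize(x, levels)
        snrs[levels] = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
    # each doubling of the level count adds roughly 6 dB of SNR
    ```

    At eight levels the error is audible but, per the study, still compatible with full sentence intelligibility once companding emphasizes the low-amplitude consonant sounds.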

  12. An Empirically Derived Taxonomy for Personality Diagnosis: Bridging Science and Practice in Conceptualizing Personality

    PubMed Central

    Westen, Drew; Shedler, Jonathan; Bradley, Bekh; DeFife, Jared A.

    2013-01-01

    Objective The authors describe a system for diagnosing personality pathology that is empirically derived, clinically relevant, and practical for day-to-day use. Method A random national sample of psychiatrists and clinical psychologists (N=1,201) described a randomly selected current patient with any degree of personality dysfunction (from minimal to severe) using the descriptors in the Shedler-Westen Assessment Procedure–II and completed additional research forms. Results The authors applied factor analysis to identify naturally occurring diagnostic groupings within the patient sample. The analysis yielded 10 clinically coherent personality diagnoses organized into three higher-order clusters: internalizing, externalizing, and borderline-dysregulated. The authors selected the most highly rated descriptors to construct a diagnostic prototype for each personality syndrome. In a second, independent sample, research interviewers and patients’ treating clinicians were able to diagnose the personality syndromes with high agreement and minimal comorbidity among diagnoses. Conclusions The empirically derived personality prototypes described here provide a framework for personality diagnosis that is both empirically based and clinically relevant. PMID:22193534

  13. Hydration rate of obsidian.

    PubMed

    Friedman, I; Long, W

    1976-01-30

    The hydration rates of 12 obsidian samples of different chemical compositions were measured at temperatures from 95 degrees to 245 degrees C. An expression relating hydration rate to temperature was derived for each sample. The SiO(2) content and refractive index are related to the hydration rate, as are the CaO, MgO, and original water contents. With this information it is possible to calculate the hydration rate of a sample from its silica content, refractive index, or chemical index and a knowledge of the effective temperature at which the hydration occurred. The effective hydration temperature can be either measured or approximated from weather records. Rates have been calculated by both methods, and the results show that weather records can give a good approximation to the true EHT, particularly in tropical and subtropical climates. If one determines the EHT by any of the methods suggested, and also measures or knows the rate of hydration of the particular obsidian used, it should be possible to carry out absolute dating to +/- 10 percent of the true age over periods as short as several years and as long as millions of years.
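    The dating logic described here (hydration depth growing as the square root of time, with an Arrhenius-form temperature dependence) can be sketched as follows. The constants A and E below are hypothetical placeholders, not values fitted in the paper.

    ```python
    import math

    def hydration_rate(A, E, temp_c):
        """Arrhenius-form hydration rate k = A * exp(-E / (R*T)).
        A (pre-exponential, um^2/ka) and E (activation energy, J/mol)
        are per-sample constants; temp_c is the effective hydration
        temperature (EHT) in degrees Celsius."""
        R = 8.314  # gas constant, J/(mol*K)
        return A * math.exp(-E / (R * (temp_c + 273.15)))

    def age_ka(rim_um, k):
        """Hydration obeys x^2 = k*t, so the age is t = x^2 / k (ka)."""
        return rim_um**2 / k

    # illustration only: a 2 um rim at an assumed EHT of 20 C
    k = hydration_rate(A=2.0e8, E=8.0e4, temp_c=20.0)
    t = age_ka(2.0, k)
    ```

    The square-root growth law is why a modest error in the measured rim translates into roughly twice that relative error in age, consistent with the ±10 percent accuracy the authors target.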

  14. Thermogravimetric study and kinetic analysis of fungal pretreated corn stover using the distributed activation energy model.

    PubMed

    Ma, Fuying; Zeng, Yelin; Wang, Jinjin; Yang, Yang; Yang, Xuewei; Zhang, Xiaoyu

    2013-01-01

    Non-isothermal thermogravimetry/derivative thermogravimetry (TG/DTG) measurements are used to determine pyrolytic characteristics and kinetics of lignocellulose. TG/DTG experiments at different heating rates with corn stover pretreated with monocultures of Irpex lacteus CD2 and Auricularia polytricha AP and their cocultures were conducted. Heating rates had little effect on the pyrolysis process, but the peak of weight loss rate in the DTG curves shifted towards higher temperature with heating rate. The maximum weight loss of biopretreated samples was 1.25-fold higher than that of the control at the three heating rates, and the maximum weight loss rate of the co-culture pretreated samples was intermediate between that of the two mono-cultures. The activation energies of the co-culture pretreated samples were 16-72 kJ mol(-1) lower than that of the mono-culture at the conversion rate range from 10% to 60%. This suggests that co-culture pretreatment can decrease activation energy and accelerate pyrolysis reaction thus reducing energy consumption. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Sediment redistribution and grainsize effects on 230Th-normalized mass accumulation rates and focusing factors in the Panama Basin

    NASA Astrophysics Data System (ADS)

    Loveley, Matthew R.; Marcantonio, Franco; Lyle, Mitchell; Ibrahim, Rami; Hertzberg, Jennifer E.; Schmidt, Matthew W.

    2017-12-01

    Here, we examine how redistribution of differing grain sizes by sediment focusing processes in Panama Basin sediments affects the use of 230Th as a constant-flux proxy. We study representative sediments of Holocene and Last Glacial Maximum (LGM) time slices from four sediment cores from two different localities close to the ridges that bound the Panama Basin. Each locality contains paired sites that are seismically interpreted to have undergone extremes in sediment redistribution, i.e., focused versus winnowed sites. Both Holocene and LGM samples from sites where winnowing has occurred contain significant amounts (up to 50%) of the 230Th within the >63 μm grain size fraction, which makes up 40-70% of the bulk sediment analyzed. For sites where focusing has occurred, Holocene and LGM samples contain the greatest amounts of 230Th (up to 49%) in the finest grain-sized fraction (<4 μm), which makes up 26-40% of the bulk sediment analyzed. There are slight underestimations of 230Th-derived mass accumulation rates (MARs) and overestimations of 230Th-derived focusing factors at focused sites, while the opposite is true for winnowed sites. Corrections made using a model by Kretschmer et al. (2010) suggest a maximum change of about 30% in 230Th-derived MARs and focusing factors at focused sites, except for our most focused site which requires an approximate 70% correction in one sample. Our 230Th-corrected 232Th flux results suggest that the boundary between hemipelagically- and pelagically-derived sediments falls between 350 and 600 km from the continental margin.
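    The ²³⁰Th normalization behind these MAR and focusing-factor estimates can be sketched under the standard approximation that the flux of scavenged ²³⁰Th to the sea floor equals its production in the overlying water column (β·z). The production constant and inputs below are approximate and illustrative, not the paper's corrected values.

    ```python
    BETA = 0.0267  # dpm m^-3 yr^-1, approximate 230Th production from 234U in seawater

    def th230_mar(xs_th230_dpm_g, depth_m):
        """230Th-normalized mass accumulation rate (g m^-2 yr^-1), assuming
        all 230Th produced in the overlying water column (BETA * depth)
        is scavenged locally: MAR = BETA * z / xs230Th."""
        return BETA * depth_m / xs_th230_dpm_g

    def focusing_factor(measured_mar, th_mar):
        """Psi > 1 indicates lateral sediment focusing; Psi < 1, winnowing."""
        return measured_mar / th_mar

    # illustration: excess 230Th of 3 dpm/g under 3000 m of water
    mar = th230_mar(3.0, 3000.0)
    ```

    Grain-size-dependent redistribution, the subject of the study, violates the local-scavenging assumption, which is why the authors apply corrections of up to ~30% (70% at the most focused site).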

  16. The window of visibility: A psychological theory of fidelity in time-sampled visual motion displays

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Ahumada, A. J., Jr.; Farrell, J. E.

    1983-01-01

    Many visual displays, such as movies and television, rely upon sampling in the time domain. The spatiotemporal frequency spectra for some simple moving images are derived and illustrations of how these spectra are altered by sampling in the time domain are provided. A simple model of the human perceiver which predicts the critical sample rate required to render sampled and continuous moving images indistinguishable is constructed. The rate is shown to depend upon the spatial and temporal acuity of the observer, and upon the velocity and spatial frequency content of the image. Several predictions of this model are tested and confirmed. The model is offered as an explanation of many of the phenomena known as apparent motion. Finally, the implications of the model for computer-generated imagery are discussed.
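    A commonly quoted form of the model's prediction is that the first spectral replica introduced by time sampling must fall outside the "window of visibility," giving a critical sample rate that grows with image velocity and spatial frequency content. The sketch below uses hypothetical acuity limits, not the paper's fitted values.

    ```python
    def critical_sample_rate(v, f_t=30.0, f_s=30.0):
        """Critical temporal sampling rate (Hz) above which a time-sampled
        moving image is predicted to look identical to continuous motion.
        v:   image velocity (deg/s)
        f_t: temporal acuity limit (Hz), f_s: spatial acuity limit (cyc/deg),
             both illustrative placeholder values.
        Requiring the first spectral replica (offset by the sample rate)
        to lie outside the window gives r_c = f_t + v * f_s."""
        return f_t + v * f_s

    # faster motion demands a higher display sample rate for apparent smoothness
    rates = [critical_sample_rate(v) for v in (0.0, 1.0, 4.0)]
    ```

    This is the quantity the study tests experimentally and offers as an account of apparent-motion phenomena.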

  17. WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING

    PubMed Central

    Saegusa, Takumi; Wellner, Jon A.

    2013-01-01

    We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools are developed including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
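    The inverse-probability weighting at the heart of a WLE can be illustrated with a toy estimator of a population mean under a two-phase design: phase-2 responses are observed only for sampled units, each weighted by the inverse of its known inclusion probability. This is a minimal sketch, not the paper's semiparametric machinery.

    ```python
    import numpy as np

    def ipw_mean(y, sampled, p):
        """Inverse-probability-weighted (Horvitz-Thompson type) estimate of
        a population mean. y holds responses (only the entries where
        sampled == 1 are actually used); p holds the known phase-2
        inclusion probabilities, e.g. stratum-specific sampling fractions."""
        y = np.asarray(y, float)
        sampled = np.asarray(sampled, float)
        p = np.asarray(p, float)
        w = sampled / p                     # weight 1/p for sampled units, 0 otherwise
        return np.sum(w * y) / np.sum(w)
    ```

    Estimating the weights from the data (calibration), as the paper's WLE variants do, typically reduces the asymptotic variance relative to using the known inclusion probabilities.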

  18. Comparing basal area growth models, consistency of parameters, and accuracy of prediction

    Treesearch

    J.J. Colbert; Michael Schuckers; Desta Fekedulegn

    2002-01-01

    We fit alternative sigmoid growth models to sample tree basal area historical data derived from increment cores and disks taken at breast height. We examine and compare the estimated parameters for these models across a range of sample sites. Models are rated on consistency of parameters and on their ability to fit growth data from four sites that are located across a...

  19. The allele-frequency spectrum in a decoupled Moran model with mutation, drift, and directional selection, assuming small mutation rates.

    PubMed

    Vogl, Claus; Clemente, Florian

    2012-05-01

    We analyze a decoupled Moran model with haploid population size N, a biallelic locus under mutation and drift with scaled forward and backward mutation rates θ(1)=μ(1)N and θ(0)=μ(0)N, and directional selection with scaled strength γ=sN. With small scaled mutation rates θ(0) and θ(1), which is appropriate for single nucleotide polymorphism data in highly recombining regions, we derive a simple approximate equilibrium distribution for polymorphic alleles with a constant of proportionality. We also put forth an even simpler model, where all mutations originate from monomorphic states. Using this model we derive the sojourn times, conditional on the ancestral and fixed allele, and under equilibrium the distributions of fixed and polymorphic alleles and fixation rates. Furthermore, we also derive the distribution of small samples in the diffusion limit and provide convenient recurrence relations for calculating this distribution. This enables us to give formulas analogous to the Ewens-Watterson estimator of θ for biased mutation rates and selection. We apply this theory to a polymorphism dataset of fourfold degenerate sites in Drosophila melanogaster. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. [Study on Different Parts of Wild and Cultivated Gentiana Rigescens with Fourier Transform Infrared Spectroscopy].

    PubMed

    Shen, Yun-xia; Zhao, Yan-li; Zhang, Ji; Zuo, Zhi-tian; Wang, Yuan-zhong; Zhang, Qing-zhi

    2016-03-01

    The application of traditional Chinese medicine (TCM) and its preparations has a long history, and market demand is increasing as research deepens. However, wild resources are too limited to meet this demand, so comparing wild and cultivated material and studying the accumulation dynamics of chemical components are of great significance. To compare the compositional differences among different parts (root, stem, and leaf) of wild and cultivated G. rigescens, Fourier transform infrared (FTIR) spectroscopy and second-derivative spectra were used for analysis and evaluation. The second-derivative spectra of 60 samples and their affinity ratings (match values) were measured automatically with the appropriate software (Omnic 8.0). The results showed that the various parts of wild and cultivated G. rigescens were highly similar: the peaks at 1732, 1643, 1613, 1510, 1417, 1366, 1322, and 1070 cm(-1) were the characteristic peaks of esters, terpenoids, and saccharides, respectively. Moreover, peak shape and intensity were more distinct in the second-derivative spectra of the samples. In the second-derivative range of 1800-600 cm(-1), the fingerprint peaks shared by the samples and the gentiopicroside standards were at 1679, 1613, 1466, 1272, 1204, 1103, 1074, 985, and 935 cm(-1). The characteristic gentiopicroside peak at 1613 cm(-1) (C-C) was more intense in the roots of both wild and cultivated samples than in stems and leaves, indicating a higher gentiopicroside content in the root than in the stem and leaves. In stems of wild samples, the peaks at 1521, 1462, and 1452 cm(-1) correspond to skeletal vibrations of the benzene ring of lignin, and cultivated stems showed stronger peaks than the other samples, indicating lignin-rich stems.
    The infrared spectra of the samples were similar to the average spectrum of wild-sample roots, whereas clear differences appeared in the correlations between the samples' second-derivative spectra and the average second-derivative spectrum of wild roots, with similarity decreasing in the order root > stem > leaf. Therefore, FTIR combined with second-derivative spectra is a rapid and comprehensive approach for analyzing and evaluating the subtle differences among different parts of wild and cultivated G. rigescens.
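    A second-derivative spectrum of the kind used in this record can be approximated without dedicated software by smoothing and taking a central second difference; the "match value" is sketched here as a cosine similarity, a stand-in for (not a reimplementation of) Omnic's affinity rating. Both function names are illustrative.

    ```python
    import numpy as np

    def second_derivative_spectrum(absorbance, smooth=5):
        """Approximate second-derivative spectrum: moving-average smoothing
        followed by a central second difference. Second derivatives sharpen
        overlapping absorption bands, as exploited in the FTIR analysis."""
        kernel = np.ones(smooth) / smooth
        sm = np.convolve(np.asarray(absorbance, float), kernel, mode="same")
        d2 = np.zeros_like(sm)
        d2[1:-1] = sm[2:] - 2 * sm[1:-1] + sm[:-2]
        return d2

    def match_value(a, b):
        """Cosine similarity between two spectra (near 1 for highly similar
        spectra), used here as a simple stand-in for a match rating."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ```

    Comparing each part's second-derivative spectrum against the average wild-root spectrum with such a similarity measure reproduces the root > stem > leaf ordering described above.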

  1. A Theory-Based Model for Understanding Faculty Intention to Use Students Ratings to Improve Teaching in a Health Sciences Institution in Puerto Rico

    ERIC Educational Resources Information Center

    Collazo, Andrés A.

    2018-01-01

    A model derived from the theory of planned behavior was empirically assessed for understanding faculty intention to use student ratings for teaching improvement. A sample of 175 professors participated in the study. The model was statistically significant and had a very large explanatory power. Instrumental attitude, affective attitude, perceived…

  2. Deep neural networks for texture classification-A theoretical analysis.

    PubMed

    Basu, Saikat; Mukhopadhyay, Supratik; Karki, Manohar; DiBiano, Robert; Ganguly, Sangram; Nemani, Ramakrishna; Gayaka, Shreekant

    2018-01-01

    We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and Dropconnect networks, and the relation between the excess error rates of Dropout and Dropconnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional than handwritten digits or other object recognition datasets and hence more difficult for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. The influence of taxon sampling on Bayesian divergence time inference under scenarios of rate heterogeneity among lineages.

    PubMed

    Soares, André E R; Schrago, Carlos G

    2015-01-07

    Although taxon sampling is commonly considered an important issue in phylogenetic inference, it is rarely considered in the Bayesian estimation of divergence times. In fact, the studies conducted to date have presented ambiguous results, and the relevance of taxon sampling for molecular dating remains unclear. In this study, we developed a series of simulations that, after six hundred Bayesian molecular dating analyses, allowed us to evaluate the impact of taxon sampling on chronological estimates under three scenarios of among-lineage rate heterogeneity. The first scenario allowed us to examine the influence of the number of terminals on the age estimates based on a strict molecular clock. The second scenario imposed an extreme example of lineage-specific rate variation, and the third scenario permitted extensive rate variation distributed along the branches. We also analyzed empirical data on selected mitochondrial genomes of mammals. Our results showed that in the strict molecular-clock scenario (Case I), taxon sampling had a minor impact on the accuracy of the time estimates, although the precision of the estimates was greater with an increased number of terminals. The effect was similar in the scenario (Case III) based on rate variation distributed among the branches. Only under intensive rate variation among lineages (Case II) did taxon sampling result in biased estimates. The results of an empirical analysis corroborated the simulation findings. We demonstrate that taxonomic sampling affected divergence time inference but that its impact was significant only if the rates deviated from those derived for the strict molecular clock. Increased taxon sampling improved the precision and accuracy of the divergence time estimates, but the impact on precision is more relevant. On average, biased estimates were obtained only if lineage rate variation was pronounced. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Near-identical star formation rate densities from Hα and FUV at redshift zero

    NASA Astrophysics Data System (ADS)

    Audcent-Ross, Fiona M.; Meurer, Gerhardt R.; Wong, O. I.; Zheng, Z.; Hanish, D.; Zwaan, M. A.; Bland-Hawthorn, J.; Elagali, A.; Meyer, M.; Putman, M. E.; Ryan-Weber, E. V.; Sweet, S. M.; Thilker, D. A.; Seibert, M.; Allen, R.; Dopita, M. A.; Doyle-Pegg, M. T.; Drinkwater, M.; Ferguson, H. C.; Freeman, K. C.; Heckman, T. M.; Kennicutt, R. C.; Kilborn, V. A.; Kim, J. H.; Knezek, P. M.; Koribalski, B.; Smith, R. C.; Staveley-Smith, L.; Webster, R. L.; Werk, J. K.

    2018-06-01

    For the first time both Hα and far-ultraviolet (FUV) observations from an H I-selected sample are used to determine the dust-corrected star formation rate density (SFRD, ρ̇) in the local Universe. Applying the two star formation rate indicators to 294 local galaxies we determine log(ρ̇_Hα) = -1.68 (+0.13/-0.05) [M⊙ yr-1 Mpc-3] and log(ρ̇_FUV) = -1.71 (+0.12/-0.13) [M⊙ yr-1 Mpc-3]. These values are derived by scaling the Hα and FUV observations to the H I mass function. Galaxies were selected to uniformly sample the full H I mass (M_HI) range of the H I Parkes All-Sky Survey (M_HI ~ 10^7 to ~10^10.7 M⊙). The approach leads to relatively larger sampling of dwarf galaxies compared to optically-selected surveys. The low H I mass, low luminosity and low surface brightness galaxy populations have, on average, lower Hα/FUV flux ratios than the remaining galaxy populations, consistent with the earlier results of Meurer. The near-identical Hα- and FUV-derived SFRD values arise with the low Hα/FUV flux ratios of some galaxies being offset by enhanced Hα from the brightest and high-mass galaxy populations. Our findings confirm the necessity of fully sampling the H I mass range for a complete census of local star formation, to include the lower stellar mass galaxies which dominate the local Universe.
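    The Hα-to-SFR conversion used in records like this one (Kennicutt et al.) is commonly quoted as SFR ≈ 7.9 × 10⁻⁴² L(Hα) with L in erg/s. The sketch below applies that conversion and a simplified volume scaling; the real analysis weights galaxies by the H I mass function rather than dividing by a single survey volume.

    ```python
    def sfr_from_halpha(L_halpha_erg_s):
        """Star formation rate (M_sun/yr) from dust-corrected H-alpha
        luminosity, via the widely quoted Kennicutt calibration
        SFR = 7.9e-42 * L(H-alpha)."""
        return 7.9e-42 * L_halpha_erg_s

    def sfr_density(sfrs, volume_mpc3):
        """Naive SFRD (M_sun/yr/Mpc^3): summed SFRs over a survey volume.
        Illustrative only; the paper instead scales to the H I mass function."""
        return sum(sfrs) / volume_mpc3
    ```

    With this calibration, a Milky-Way-like Hα luminosity of ~1.3 × 10⁴¹ erg/s corresponds to roughly one solar mass of stars formed per year.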

  5. A comparison of methods for deriving solute flux rates using long-term data from streams in the Mirror Lake watershed

    USGS Publications Warehouse

    Bukaveckas, P.A.; Likens, G.E.; Winter, T.C.; Buso, D.C.

    1998-01-01

    Calculation of chemical flux rates for streams requires integration of continuous measurements of discharge with discrete measurements of solute concentrations. We compared two commonly used methods for interpolating chemistry data (time-averaging and flow-weighting) to determine whether discrepancies between the two methods were large relative to other sources of error in estimating flux rates. Flux rates of dissolved Si and SO42- were calculated from 10 years of data (1981-1990) for the NW inlet and Outlet of Mirror Lake and for a 40-day period (March 22 to April 30, 1993) during which we augmented our routine (weekly) chemical monitoring with collection of daily samples. The time-averaging method yielded higher estimates of solute flux during high-flow periods if no chemistry samples were collected corresponding to peak discharge. Concentration-discharge relationships should be used to interpolate stream chemistry during changing flow conditions if chemical changes are large. Caution should be used in choosing the appropriate time-scale over which data are pooled to derive the concentration-discharge regressions because the model parameters (slope and intercept) were found to be sensitive to seasonal and inter-annual variation. Both methods approximated solute flux to within 2-10% for a range of solutes that were monitored during the intensive sampling period. Our results suggest that errors arising from interpolation of stream chemistry data are small compared with other sources of error in developing watershed mass balances.
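    The two interpolation methods compared in this record can be sketched directly: time-averaging interpolates the discrete chemistry samples in time before summing Q·C, while flow-weighting predicts concentration from a concentration-discharge regression. The function names and the regression form (C = a + b·ln Q) are illustrative assumptions, not the authors' exact model.

    ```python
    import numpy as np

    def flux_time_averaged(q_daily, days, c_vals, c_days):
        """Time-averaging: linearly interpolate the discrete chemistry
        samples (c_vals at c_days) to every day, then sum Q*C."""
        c_daily = np.interp(days, c_days, c_vals)
        return float(np.sum(q_daily * c_daily))

    def flux_flow_weighted(q_daily, intercept, slope):
        """Flow-weighting: predict concentration from a concentration-
        discharge regression C = a + b*ln(Q), then sum Q*C."""
        c_daily = intercept + slope * np.log(q_daily)
        return float(np.sum(q_daily * c_daily))
    ```

    The study's key caution is visible in this sketch: time-averaging misses concentration changes at flow peaks that fall between chemistry samples, while flow-weighting inherits any instability in the fitted slope and intercept across seasons and years.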

  6. A class of least-squares filtering and identification algorithms with systolic array architectures

    NASA Technical Reports Server (NTRS)

    Kalson, Seth Z.; Yao, Kung

    1991-01-01

    A unified approach is presented for deriving a large class of new and previously known time- and order-recursive least-squares algorithms with systolic array architectures, suitable for high-throughput-rate and VLSI implementations of space-time filtering and system identification problems. The geometrical derivation given is unique in that no assumption is made concerning the rank of the sample data correlation matrix. This method utilizes and extends the concept of oblique projections, as used previously in the derivations of the least-squares lattice algorithms. Exponentially weighted least-squares criteria are considered for both sliding and growing memory.
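    A standard exponentially weighted recursive least-squares (RLS) update, one member of the class discussed here in its plain sequential (non-systolic) form, can be sketched as:

    ```python
    import numpy as np

    def rls_identify(X, d, lam=0.99, delta=100.0):
        """Exponentially weighted RLS: time-recursive estimate of weights w
        minimizing sum_k lam^(n-k) * |d_k - x_k . w|^2.
        X: (n, p) regressor rows, d: (n,) desired responses."""
        n, p = X.shape
        w = np.zeros(p)
        P = delta * np.eye(p)                 # inverse correlation estimate
        for x, dk in zip(X, d):
            k = P @ x / (lam + x @ P @ x)     # gain vector
            w = w + k * (dk - x @ w)          # correct by the a priori error
            P = (P - np.outer(k, x @ P)) / lam
        return w

    # identify w_true = [1.0, -0.5] from noiseless data
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    d = X @ np.array([1.0, -0.5])
    w = rls_identify(X, d)
    ```

    Note that the inverse correlation matrix P is initialized with a large delta rather than by inverting a sample correlation matrix, so no rank assumption is needed, which mirrors the paper's point about oblique projections.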

  7. Deriving Stellar Masses for the ALFALFA α.100 Sample

    NASA Astrophysics Data System (ADS)

    Hess, Logan; Cornell 2017 Summer REU

    2018-01-01

    For this project, we explore different methods of deriving the stellar masses of galaxies in the ALFALFA (Arecibo Legacy Fast ALFA) α.100 survey. In particular, we measure the effectiveness of SED (Spectral Energy Distribution) on the sample. SED fitting was preformed by MAGPHYS (Multi-wavelength Analysis of Galaxy Physical Properties), utilizing a wide range of photometry in the UV, optical, and IR bands. Photometry was taken from GALAX GR6/7 (UV), SDSS DR13 (optical), WISE All-Sky (near-IR), and Herschel PACS/SPIRE (far-IR). The efficiency of SED fitting increases with a broader range of photometry, however detection rates varied significantly across the different bands. Using a more “comprehensive” sample of galaxies, the GSWLC-A (GALAX, SDSS, WISE Legacy Catalog All-Sky Survey), we aimed to measure which combination of bands provided the largest sample return with the lowest amount of uncertainty, which could then be used to estimate the masses of the galaxies in the α.100 sample.

  8. A satellite digital controller or 'play that PID tune again, Sam'. [Position, Integral, Derivative feedback control algorithm for design strategy

    NASA Technical Reports Server (NTRS)

    Seltzer, S. M.

    1976-01-01

    The problem discussed is to design a digital controller for a typical satellite. The controlled plant is considered to be a rigid body acting in a plane. The controller is assumed to be a digital computer which, when combined with the proposed control algorithm, can be represented as a sampled-data system. The objective is to present a design strategy and technique for selecting numerical values for the control gains (assuming position, integral, and derivative feedback) and the sample rate. The technique is based on the parameter plane method and requires that the system be amenable to z-transform analysis.
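    The position-integral-derivative control law with a chosen sample period can be sketched as a sampled-data loop around a rigid-body (double-integrator) plant. The gains and sample period below are illustrative stable values, not the paper's parameter-plane design.

    ```python
    def make_pid(kp, ki, kd, T):
        """Discrete PID (position, integral, derivative feedback) with
        sample period T; derivative via backward difference."""
        state = {"i": 0.0, "e_prev": 0.0}
        def step(e):
            state["i"] += e * T
            d = (e - state["e_prev"]) / T
            state["e_prev"] = e
            return kp * e + ki * state["i"] + kd * d
        return step

    # sampled-data loop: rigid body in a plane (double integrator), J = 1
    T, J = 0.1, 1.0
    theta, omega = 1.0, 0.0                 # initial attitude error of 1 rad
    pid = make_pid(kp=1.0, ki=0.1, kd=1.5, T=T)
    for _ in range(600):                    # 60 s of simulated time
        u = pid(-theta)                     # drive attitude error to zero
        omega += (u / J) * T
        theta += omega * T
    ```

    The sample rate enters the design on the same footing as the gains: shrinking T pushes the loop toward its continuous-time behavior, while too large a T destabilizes it, which is exactly the trade-off the parameter-plane method is used to map.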

  9. The accretion rate of extraterrestrial 3He based on oceanic 230Th flux and the relation to Os isotope variation over the past 200,000 years in an Indian Ocean core

    NASA Astrophysics Data System (ADS)

    Marcantonio, Franco; Turekian, Karl K.; Higgins, Sean; Anderson, Robert F.; Stute, Martin; Schlosser, Peter

    1999-07-01

    In the eastern equatorial Indian Ocean, the flux of extraterrestrial 3He, a proxy of interplanetary dust particles (IDPs), has been relatively constant over the past 200 ka. The flux is equal to (1.1 ± 0.4) × 10^-12 cm^3 STP cm^-2 ka^-1, a value obtained using the excess (xs) 230Th profiling method. Variations in mass accumulation rates (MARs) derived assuming a constant extraterrestrial 3He flux have a 40-ka periodicity similar to that observed in the δ18O-derived MARs. This frequency is similar to that of the Earth's obliquity. Measured 187Os/188Os ratios are less radiogenic than present-day seawater (0.49-0.98), reflecting the mixing of Os derived from extraterrestrial, terrigenous and hydrogenous sources. When coupled with He data measured on the same samples, Os isotope data yield important information about the terrigenous component supplied to the eastern equatorial Indian Ocean. The amount of Os in the sample derived from the extraterrestrial component can be deduced with the help of the helium systematics. Once corrected for the extraterrestrial component of Os, Os isotope signatures, in conjunction with the 4He concentrations, suggest a supply of terrigenous material from Indonesian ultramafic and Himalayan crustal sources that clearly varies through time.


  10. Growth rate and age distribution of deep-sea black corals in the Gulf of Mexico

    USGS Publications Warehouse

    Prouty, N.G.; Roark, E.B.; Buster, N.A.; Ross, Steve W.

    2011-01-01

    Black corals (order Antipatharia) are important long-lived, habitat-forming, sessile, benthic suspension feeders that are found in all oceans and are usually found in water depths greater than 30 m. Deep-water black corals are some of the slowest-growing, longest-lived deep-sea corals known. Previous age dating of a limited number of black coral samples in the Gulf of Mexico focused on extrapolated ages and growth rates based on skeletal 210Pb dating. Our results greatly expand the age and growth rate data of black corals from the Gulf of Mexico. Radiocarbon analysis of the oldest Leiopathes sp. specimen from the upper De Soto Slope at 300 m water depth indicates that these animals have been growing continuously for at least the last 2 millennia, with growth rates ranging from 8 to 22 µm yr–1. Visual growth ring counts based on scanning electron microscopy images were in good agreement with the 14C-derived ages, suggestive of annual ring formation. The presence of bomb-derived 14C in the outermost samples confirms sinking particulate organic matter as the dominant carbon source and suggests a link between the deep-sea and surface ocean. There was a high degree of reproducibility found between multiple discs cut from the base of each specimen, as well as within duplicate subsamples. Robust 14C-derived chronologies and known surface ocean 14C reservoir age constraints in the Gulf of Mexico provided reliable calendar ages with future application to the development of proxy records.

  11. The contemporary degassing rate of 40Ar from the solid Earth.

    PubMed

    Bender, Michael L; Barnett, Bruce; Dreyfus, Gabrielle; Jouzel, Jean; Porcelli, Don

    2008-06-17

    Knowledge of the outgassing history of radiogenic (40)Ar, derived over geologic time from the radioactive decay of (40)K, contributes to our understanding of the geodynamic history of the planet and the origin of volatiles on Earth's surface. The (40)Ar inventory of the atmosphere equals total (40)Ar outgassing during Earth history. Here, we report the current rate of (40)Ar outgassing, accessed by measuring the Ar isotope composition of trapped gases in samples of the Vostok and Dome C deep ice cores dating back to almost 800 ka. The modern outgassing rate (1.1 +/- 0.1 x 10(8) mol/yr) is in the range of values expected by summing outgassing from the continental crust and the upper mantle, as estimated from simple calculations and models. The measured outgassing rate is also of interest because it allows dating of air trapped in ancient ice core samples of unknown age, although uncertainties are large (+/-180 kyr for a single sample or +/-11% of the calculated age, whichever is greater).

  12. Fact book : a summary of information about towaway accidents involving 1973-1975 model cars. Volume 2

    DOT National Transportation Integrated Search

    1976-09-01

    Standardized injury rates and seat belt effectiveness measures are derived from a probability sample of towaway accidents involving 1973-1975 model cars. The data were collected by NHTSA-sponsored teams in five different geographic regions. Weighted ...

  13. A Bayesian-frequentist two-stage single-arm phase II clinical trial design.

    PubMed

    Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen

    2012-08-30

    It is well-known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To combine the better properties of these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and early rejection of the null hypothesis (H0). Measures of the design's properties (for example, the probability of early trial termination and the expected sample size) are derived under both frequentist and Bayesian settings. Moreover, under the Bayesian setting, the upper and lower boundaries are determined from the predictive probability of a successful trial outcome. Given a beta prior and a sample size for stage I, we derived Bayesian Type I and Type II error rates based on the marginal distribution of the stage I responses. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.
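    The stage-1 operating characteristics of such a design (probability of early termination and expected sample size) follow from binomial tail sums. The boundaries, stage sizes, and response rate below are hypothetical examples, and the Bayesian predictive-probability machinery is not reproduced here.

    ```python
    from math import comb

    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    def stage1_properties(n1, a1, r1, p):
        """Two-stage design with both early acceptance and early rejection:
        accept H0 at stage 1 if responses <= a1, reject H0 if responses >= r1
        (a1 < r1), otherwise continue to stage 2. Returns the probability of
        early termination (PET) and its two components, at true rate p."""
        pmf = [binom_pmf(k, n1, p) for k in range(n1 + 1)]
        p_accept = sum(pmf[: a1 + 1])
        p_reject = sum(pmf[r1:])
        return p_accept + p_reject, p_accept, p_reject

    def expected_sample_size(n1, n2, pet):
        """ESS = n1 + (1 - PET) * n2: stage 2 is run only if no early stop."""
        return n1 + (1 - pet) * n2

    # hypothetical design: n1 = 15, accept if <= 1 response, reject if >= 6
    pet, _, _ = stage1_properties(n1=15, a1=1, r1=6, p=0.2)
    ess = expected_sample_size(15, 20, pet)
    ```

    Scanning p over the null and alternative response rates with these functions is how the frequentist error rates of candidate boundaries are checked.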

  14. Association between heart rate variability and manual pulse rate.

    PubMed

    Hart, John

    2013-09-01

    One model for neurological assessment in chiropractic pertains to autonomic variability, tested commonly with heart rate variability (HRV). Since HRV may not be convenient to use on all patient visits, more user-friendly methods may help fill in the gaps. Accordingly, this study tests the association between manual pulse rate and heart rate variability. The manual rates were also compared to the heart rate derived from HRV. Forty-eight chiropractic students were examined with heart rate variability (SDNN and mean heart rate) and two manual radial pulse rate measurements. Inclusion criteria consisted of participants being chiropractic students. Exclusion criteria for 46 of the participants consisted of a body mass index greater than 30, age greater than 35, and history of: a) dizziness upon standing, b) treatment of psychiatric disorders, and c) diabetes. No exclusion criteria were applied to the remaining two participants, who were also convenience sample volunteers. Linear associations between the manual pulse rate methods and the two heart rate variability measures (SDNN and mean heart rate) were tested with Pearson's correlation and simple linear regression. Moderate strength inverse (expected) correlations were observed between both manual pulse rate methods and SDNN (r = -0.640, 95% CI -0.781, -0.435; r = -0.632, 95% CI -0.776, -0.425). Strong direct (expected) relationships were observed between the manual pulse rate methods and heart rate derived from HRV technology (r = 0.934, 95% CI 0.885, 0.962; r = 0.941, 95% CI 0.897, 0.966). Manual pulse rates may be a useful option for assessing autonomic variability. Furthermore, this study showed a strong relationship between manual pulse rates and heart rate derived from HRV technology.
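
    The correlations reported above are ordinary Pearson product-moment coefficients; a minimal sketch of the computation on made-up pulse and SDNN values (the numbers below are illustrative only, not the study's data):

```python
from math import sqrt

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Illustrative data: faster manual pulses paired with lower SDNN (ms),
# mimicking the inverse relationship described in the abstract
manual_pulse_bpm = [62, 68, 71, 75, 80, 84]
sdnn_ms = [58, 52, 49, 44, 40, 33]
r = pearson_r(manual_pulse_bpm, sdnn_ms)  # strongly negative
```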

  15. NEAR-INFRARED ADAPTIVE OPTICS IMAGING OF INFRARED LUMINOUS GALAXIES: THE BRIGHTEST CLUSTER MAGNITUDE-STAR FORMATION RATE RELATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randriamanakoto, Z.; Väisänen, P.; Escala, A.

    2013-10-01

    We have established a relation between the brightest super star cluster (SSC) magnitude in a galaxy and the host star formation rate (SFR) for the first time in the near-infrared (NIR). The data come from a statistical sample of ∼40 luminous IR galaxies (LIRGs) and starbursts utilizing K-band adaptive optics imaging. While expanding the observed relation to longer wavelengths, less affected by extinction effects, it also pushes to higher SFRs. The relation we find, M_K ∼ –2.6 log SFR, is similar to that derived previously in the optical and at lower SFRs. It does not, however, fit the optical relation with a single optical to NIR color conversion, suggesting systematic extinction and/or age effects. While the relation is broadly consistent with a size-of-sample explanation, we argue that physical explanations for the relation are likely as well. In particular, the scatter in the relation is smaller than expected from pure random sampling, strongly suggesting physical constraints. We also derive a quantifiable relation tying together cluster-internal effects and host SFR properties to possibly explain the observed brightest SSC magnitude versus SFR dependency.

  16. Estimating residual fault hitting rates by recapture sampling

    NASA Technical Reports Server (NTRS)

    Lee, Larry; Gupta, Rajan

    1988-01-01

    For the recapture debugging design introduced by Nayak (1988) the problem of estimating the hitting rates of the faults remaining in the system is considered. In the context of a conditional likelihood, moment estimators are derived and are shown to be asymptotically normal and fully efficient. Fixed sample properties of the moment estimators are compared, through simulation, with those of the conditional maximum likelihood estimators. Properties of the conditional model are investigated such as the asymptotic distribution of linear functions of the fault hitting frequencies and a representation of the full data vector in terms of a sequence of independent random vectors. It is assumed that the residual hitting rates follow a log linear rate model and that the testing process is truncated when the gaps between the detection of new errors exceed a fixed amount of time.

  17. Analysis of activation and shutdown contact dose rate for EAST neutral beam port

    NASA Astrophysics Data System (ADS)

    Chen, Yuqing; Wang, Ji; Zhong, Guoqiang; Li, Jun; Wang, Jinfang; Xie, Yahong; Wu, Bin; Hu, Chundong

    2017-12-01

    For the safe operation and maintenance of the neutral beam injector (NBI), the specific activity and shutdown contact dose rate of the sample material SS316 are estimated around the experimental advanced superconducting tokamak (EAST) neutral beam port. Firstly, the neutron emission intensity is calculated with the TRANSP code while the neutral beam is co-injected into EAST. Secondly, the neutron activation and shutdown contact dose rates for the neutral beam sample material SS316 are derived with the Monte Carlo code MCNP and the inventory code FISPACT-2007. The simulations indicate that the primary radioactive nuclides of SS316 are ⁵⁸Co and ⁵⁴Mn. The peak contact dose rate is 8.52 × 10⁻⁶ Sv/h one second after EAST shutdown, which is below the International Thermonuclear Experimental Reactor (ITER) design value of 1 × 10⁻⁵ Sv/h.

  18. Topographic Metric Predictions of Soil redistribution and Organic Carbon Distribution in Croplands

    NASA Astrophysics Data System (ADS)

    Mccarty, G.; Li, X.

    2017-12-01

    Landscape topography is a key factor controlling soil redistribution and soil organic carbon (SOC) distribution in Iowa croplands (USA). In this study, we adopted a combined approach based on carbon (¹³C) and cesium (¹³⁷Cs) isotope tracers and digital terrain analysis to understand patterns of SOC redistribution and carbon sequestration dynamics as influenced by landscape topography in tilled cropland under long-term corn/soybean management. The fallout radionuclide ¹³⁷Cs was used to estimate soil redistribution rates, and a Lidar-derived DEM was used to obtain a set of topographic metrics for digital terrain analysis. Soil redistribution rates and patterns of SOC distribution were examined across 560 sampling locations at two field sites as well as at larger scale within the watershed. We used δ¹³C content in SOC to partition C3- and C4-plant-derived C density at 127 locations in one of the two field sites, with corn being the primary source of C4 carbon. Topography-based models were developed to simulate SOC distribution and soil redistribution using stepwise ordinary least squares regression (SOLSR) and stepwise principal component regression (SPCR). All topography-based models developed through SPCR and SOLSR demonstrated good simulation performance, explaining more than 62% of the variability in SOC density and soil redistribution rates across the two intensively sampled field sites. However, the SOLSR models showed lower reliability than the SPCR models in predicting SOC density at the watershed scale. Spatial patterns of C3-derived SOC density were highly related to those of total SOC density. Topographic metrics exerted substantial influence on C3-derived SOC density, with the SPCR model accounting for 76.5% of the spatial variance. In contrast, C4-derived SOC density had poor spatial structure, likely reflecting the substantial contribution of corn vegetation to recently sequestered SOC.
Results of this study highlight the utility of topographic SPCR models for scaling field measurements of SOC density and soil redistribution rates to the watershed scale, which will allow watershed models to better predict the fate of ecosystem C on agricultural landscapes.

  19. Role of intestinal microbiota in transformation of bismuth and other metals and metalloids into volatile methyl and hydride derivatives in humans and mice.

    PubMed

    Michalke, Klaus; Schmidt, Annette; Huber, Britta; Meyer, Jörg; Sulkowski, Margareta; Hirner, Alfred V; Boertz, Jens; Mosel, Frank; Dammann, Philip; Hilken, Gero; Hedrich, Hans J; Dorsch, Martina; Rettenmeier, Albert W; Hensel, Reinhard

    2008-05-01

    The present study shows that feces samples of 14 human volunteers and isolated gut segments of mice (small intestine, cecum, and large intestine) are able to transform metals and metalloids into volatile derivatives ex situ during anaerobic incubation at 37 °C and neutral pH. Human feces and the gut of mice exhibit highly productive mechanisms for the formation of the toxic volatile derivative trimethylbismuth [(CH₃)₃Bi] at rather low concentrations of bismuth (0.2 to 1 μmol kg⁻¹ [dry weight]). An increase of bismuth up to 2 to 14 mmol kg⁻¹ (dry weight) upon a single (human volunteers) or continuous (mouse study) administration of colloidal bismuth subcitrate resulted in an average increase of the derivatization rate from approximately 4 pmol h⁻¹ kg⁻¹ (dry weight) to 2,100 pmol h⁻¹ kg⁻¹ (dry weight) in human feces samples and from approximately 5 pmol h⁻¹ kg⁻¹ (dry weight) to 120 pmol h⁻¹ kg⁻¹ (dry weight) in mouse gut samples, respectively. The upshift of the bismuth content also led to an increase of derivatives of other elements (such as arsenic, antimony, and lead in human feces or tellurium and lead in the murine large intestine). The assumption that the gut microbiota plays a dominant role in these transformation processes, as indicated by the production of volatile derivatives of various elements in feces samples, is supported by the observation that the gut segments of germfree mice are unable to transform administered bismuth to (CH₃)₃Bi.

  20. Development and Psychometric Evaluation of the Brief Adolescent Gambling Screen (BAGS)

    PubMed Central

    Stinchfield, Randy; Wynne, Harold; Wiebe, Jamie; Tremblay, Joel

    2017-01-01

    The purpose of this study was to develop and evaluate the initial reliability, validity and classification accuracy of a new brief screen for adolescent problem gambling. The three-item Brief Adolescent Gambling Screen (BAGS) was derived from the nine-item Gambling Problem Severity Subscale (GPSS) of the Canadian Adolescent Gambling Inventory (CAGI) using a secondary analysis of existing CAGI data. The sample of 105 adolescents included 49 females and 56 males from Canada who completed the CAGI, a self-administered measure of DSM-IV diagnostic criteria for Pathological Gambling, and a clinician-administered diagnostic interview including the DSM-IV diagnostic criteria for Pathological Gambling (both of which were adapted to yield DSM-5 Gambling Disorder diagnosis). A stepwise multivariate discriminant function analysis selected three GPSS items as the best predictors of a diagnosis of Gambling Disorder. The BAGS demonstrated satisfactory estimates of reliability, validity and classification accuracy; it was equivalent to the nine-item GPSS of the CAGI and more accurate than the SOGS-RA. The BAGS estimates of classification accuracy include hit rate = 0.95, sensitivity = 0.88, specificity = 0.98, false positive rate = 0.02, and false negative rate = 0.12. Since these classification estimates are preliminary, derived from a relatively small sample size, and based upon the same sample from which the items were selected, it will be important to cross-validate the BAGS with larger and more diverse samples. The BAGS should be evaluated for use as a screening tool in both clinical and school settings as well as in epidemiological surveys. PMID:29312064
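
    For reference, the classification accuracy estimates quoted above all derive from a standard 2×2 confusion table of screen result versus diagnosis. A minimal sketch with hypothetical counts (not the study's data):

```python
def screening_metrics(tp, fn, fp, tn):
    # Standard screening statistics from 2x2 confusion-table counts:
    # tp/fn = diagnosed cases screened positive/negative,
    # fp/tn = non-cases screened positive/negative
    total = tp + fn + fp + tn
    return {
        "hit_rate": (tp + tn) / total,        # overall proportion correct
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (tp + fn),
    }

# Hypothetical counts for illustration
m = screening_metrics(tp=22, fn=3, fp=2, tn=78)
```

    Note that sensitivity and the false negative rate are complements, as are specificity and the false positive rate, so a screen's error rates follow directly from its detection rates.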

  1. LFlGRB: Luminosity function of long gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Paul, Debdutta

    2018-04-01

    LFlGRB models the luminosity function (LF) of long Gamma Ray Bursts (lGRBs) by using a sample of Swift and Fermi lGRBs to re-derive the parameters of the Yonetoku correlation and self-consistently estimate pseudo-redshifts of all the bursts with unknown redshifts. The GRB formation rate is modeled as the product of the cosmic star formation rate and a GRB formation efficiency for a given stellar mass.

  2. Analytical Expressions for the Mixed-Order Kinetics Parameters of TL Glow Peaks Based on the two Heating Rates Method.

    PubMed

    Maghrabi, Mufeed; Al-Abdullah, Tariq; Khattari, Ziad

    2018-03-24

    The two heating rates method (originally developed for first-order glow peaks) was used for the first time to evaluate the activation energy (E) from glow peaks obeying mixed-order (MO) kinetics. The derived expression for E has an insignificant additional term (on the scale of a few meV) when compared with the first-order case. Hence, the original expression for E using the two heating rates method can be used with excellent accuracy in the case of MO glow peaks. In addition, we derived a simple analytical expression for the MO parameter. The present procedure has the advantage that the MO parameter can now be evaluated using analytical expression instead of using the graphical representation between the geometrical factor and the MO parameter as given by the existing peak shape methods. The applicability of the derived expressions for real samples was demonstrated for the glow curve of Li₂B₄O₇:Mn single crystal. The obtained parameters compare very well with those obtained by glow curve fitting and with the available published data.
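
    For context, the classical two heating rates expression for first-order peaks, which the abstract reports carries over to MO kinetics up to a few-meV correction, estimates E from the peak-maximum temperatures T1 and T2 recorded at heating rates β1 and β2. A sketch of that standard formula (the temperatures and rates below are illustrative, not the paper's measurements):

```python
from math import log

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def activation_energy(beta1, t1, beta2, t2):
    # Two heating rates estimate of the activation energy E (eV):
    # E = (k * T1 * T2 / (T2 - T1)) * ln[(beta2 / beta1) * (T1 / T2)**2],
    # where t1, t2 are peak-maximum temperatures (K) at rates beta1, beta2
    return (K_B * t1 * t2 / (t2 - t1)) * log((beta2 / beta1) * (t1 / t2) ** 2)

# Illustrative peak temperatures at two heating rates (K and K/s)
E = activation_energy(beta1=2.0, t1=450.0, beta2=10.0, t2=470.0)
```

    The expression is symmetric in the two measurements, so swapping which run is labeled 1 or 2 leaves E unchanged.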

  3. Noninferiority trial designs for odds ratios and risk differences.

    PubMed

    Hilton, Joan F

    2010-04-30

    This study presents constrained maximum likelihood derivations of the design parameters of noninferiority trials for binary outcomes with the margin defined on the odds ratio (ψ) or risk-difference (δ) scale. The derivations show that, for trials in which the group-specific response rates are equal under the point-alternative hypothesis, the common response rate, π(N), is a fixed design parameter whose value lies between the control and experimental rates hypothesized at the point-null, {π(C), π(E)}. We show that setting π(N) equal to the value of π(C) that holds under H(0) underestimates the overall sample size requirement. Given {π(C), ψ} or {π(C), δ} and the type I and II error rates, our algorithm finds clinically meaningful design values of π(N), and the corresponding minimum asymptotic sample size, N=n(E)+n(C), and optimal allocation ratio, γ=n(E)/n(C). We find that optimal allocations are increasingly imbalanced as ψ increases, with γ(ψ)<1 and γ(δ)≈1/γ(ψ), and that ranges of allocation ratios map to the minimum sample size. The latter characteristic allows trialists to consider trade-offs between optimal allocation at a smaller N and a preferred allocation at a larger N. For designs with relatively large margins (e.g. ψ>2.5), trial results that are presented on both scales will differ in power, with more power lost if the study is designed on the risk-difference scale and reported on the odds ratio scale than vice versa. Copyright © 2010 John Wiley & Sons, Ltd.
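
    As a point of comparison for the constrained-ML sample sizes above, the familiar unconstrained normal-approximation formula for a risk-difference margin is easy to state. A hedged sketch assuming equal allocation (this is the textbook approximation, not the paper's constrained method):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm_rd(p_c, p_e, delta, alpha=0.025, power=0.9):
    # Textbook normal-approximation sample size per arm for a noninferiority
    # test on the risk-difference scale with margin delta (H0: p_e - p_c <= -delta),
    # assuming 1:1 allocation:
    #   n = (z_{1-alpha} + z_{power})^2 * [p_c(1-p_c) + p_e(1-p_e)] / (p_e - p_c + delta)^2
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    var = p_c * (1 - p_c) + p_e * (1 - p_e)
    return ceil((z_a + z_b) ** 2 * var / (p_e - p_c + delta) ** 2)
```

    Halving the margin roughly quadruples the required sample size, which is why the choice of δ dominates noninferiority trial planning.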

  4. An Automated Heart Rate Detection Platform in Wild-Type Zebrafish for Cardiotoxicity Screening of Fine Particulate Matter Air Pollution

    EPA Science Inventory

    Exposure to air pollution-derived particulate matter (PM) causes adverse cardiovascular health outcomes, with increasing evidence implicating soluble components of PM; however, the enormous number of unique PM samples from different air sheds far exceeds the capacity of conventio...

  5. Interactions of Task and Subject Variables among Continuous Performance Tests

    ERIC Educational Resources Information Center

    Denney, Colin B.; Rapport, Mark D.; Chung, Kyong-Mee

    2005-01-01

    Background: Contemporary models of working memory suggest that target paradigm (TP) and target density (TD) should interact as influences on error rates derived from continuous performance tests (CPTs). The present study evaluated this hypothesis empirically in a typically developing, ethnically diverse sample of children. The extent to which…

  6. Perturbation analysis of queueing systems with a time-varying arrival rate

    NASA Technical Reports Server (NTRS)

    Cassandras, Christos G.; Pan, Jie

    1991-01-01

    The authors consider an M/G/1 queueing system with a time-varying arrival rate. The objective is to obtain infinitesimal perturbation analysis (IPA) gradient estimates for various performance measures of interest with respect to certain system parameters. In particular, the authors consider the mean system time over n arrivals and an arrival rate alternating between two values. By choosing a convenient sample path representation of this system, they derive an unbiased IPA gradient estimator which, however, is not consistent, and investigate the nature of this problem.

  7. Development of a case-mix funding system for adults with combined vision and hearing loss.

    PubMed

    Guthrie, Dawn M; Poss, Jeffrey W

    2013-04-15

    Adults with vision and hearing loss, or dual sensory loss (DSL), present with a wide range of needs and abilities. This creates many challenges when attempting to set the most appropriate and equitable funding levels. Case-mix (CM) funding models represent one method for understanding client characteristics that correlate with resource intensity. A CM model was developed based on a derivation sample (n = 182) and tested with a replication sample (n = 135) of adults aged 18+ with known DSL who were living in the community. All items within the CM model came from a standardized, multidimensional assessment, the interRAI Community Health Assessment and the Deafblind Supplement. The main outcome was a summary of formal and informal service costs which included intervenor and interpreter support, in-home nursing, personal support and rehabilitation services. Informal costs were estimated based on a wage rate of half that for a professional service provider ($10/hour). Decision-tree analysis was used to create groups with homogeneous resource utilization. The resulting CM model had 9 terminal nodes. The CM index (CMI) showed a 35-fold range for total costs. In both the derivation and replication sample, 4 groups (out of a total of 18 or 22.2%) had a coefficient of variation value that exceeded the overall level of variation. Explained variance in the derivation sample was 67.7% for total costs versus 28.2% in the replication sample. A strong correlation was observed between the CMI values in the two samples (r = 0.82; p = 0.006). The derived CM funding model for adults with DSL differentiates resource intensity across 9 main groups and in both datasets there is evidence that these CM groups appropriately identify clients based on need for formal and informal support.

  8. A comprehensive study of sampling-based optimum signal detection in concentration-encoded molecular communication.

    PubMed

    Mahfuz, Mohammad U; Makrakis, Dimitrios; Mouftah, Hussein T

    2014-09-01

    In this paper, a comprehensive analysis of the sampling-based optimum signal detection in ideal (i.e., free) diffusion-based concentration-encoded molecular communication (CEMC) system has been presented. A generalized amplitude-shift keying (ASK)-based CEMC system has been considered in diffusion-based noise and intersymbol interference (ISI) conditions. Information is encoded by modulating the amplitude of the transmission rate of information molecules at the TN. The critical issues involved in the sampling-based receiver thus developed are addressed in detail, and its performance in terms of the number of samples per symbol, communication range, and transmission data rate is evaluated. ISI produced by the residual molecules deteriorates the performance of the CEMC system significantly, which further deteriorates when the communication range and/or the transmission data rate increase(s). In addition, the performance of the optimum receiver depends on the receiver's ability to compute the ISI accurately, thus providing a trade-off between receiver complexity and achievable bit error rate (BER). Exact and approximate detection performances have been derived. Finally, it is found that the sampling-based signal detection scheme thus developed can be applied to both binary and multilevel (M-ary) ASK-based CEMC systems, although M-ary systems suffer more from higher BER.

  9. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
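
    The linearity result for the beta/binomial case can be seen directly from conjugacy: the posterior mean of the rate is an affine function of the observed jump count. A minimal sketch (the parameter values below are illustrative):

```python
def mmse_rate_estimate(k, n, a, b):
    # Posterior mean of p ~ Beta(a, b) after observing k jumps in n trials
    # (beta-binomial conjugacy): E[p | k] = (a + k) / (a + b + n).
    # The estimate is affine in k, hence the MMSE estimator is linear.
    return (a + k) / (a + b + n)

# Each additional observed jump shifts the estimate by the same constant step
estimates = [mmse_rate_estimate(k, 10, 2.0, 3.0) for k in range(11)]
```

    The constant step 1/(a + b + n) is exactly the linearity property the abstract highlights; for non-conjugate priors the posterior mean is generally a nonlinear function of the observations.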

  10. Risk score to predict the outcome of patients with cerebral vein and dural sinus thrombosis.

    PubMed

    Ferro, José M; Bacelar-Nicolau, Helena; Rodrigues, Teresa; Bacelar-Nicolau, Leonor; Canhão, Patrícia; Crassard, Isabelle; Bousser, Marie-Germaine; Dutra, Aurélio Pimenta; Massaro, Ayrton; Mackowiack-Cordiolani, Marie-Anne; Leys, Didier; Fontes, João; Stam, Jan; Barinagarrementeria, Fernando

    2009-01-01

    Around 15% of patients die or become dependent after cerebral vein and dural sinus thrombosis (CVT). We used the International Study on Cerebral Vein and Dural Sinus Thrombosis (ISCVT) sample (624 patients, with a median follow-up time of 478 days) to develop a Cox proportional hazards regression model to predict outcome, dichotomised by a modified Rankin Scale score >2. From the model hazard ratios, a risk score was derived and a cut-off point selected. The model and the score were tested in 2 validation samples: (1) the prospective Cerebral Venous Thrombosis Portuguese Collaborative Study Group (VENOPORT) sample with 91 patients; (2) a sample of 169 consecutive CVT patients admitted to 5 ISCVT centres after the end of the ISCVT recruitment period. Sensitivity, specificity, c statistics and overall efficiency to predict outcome at 6 months were calculated. The model (hazard ratios: malignancy 4.53; coma 4.19; thrombosis of the deep venous system 3.03; mental status disturbance 2.18; male gender 1.60; intracranial haemorrhage 1.42) had overall efficiencies of 85.1, 84.4 and 90.0%, in the derivation sample and validation samples 1 and 2, respectively. Using the risk score (range from 0 to 9) with a cut-off of >or=3 points, overall efficiency was 85.4, 84.4 and 90.1% in the derivation sample and validation samples 1 and 2, respectively. Sensitivity and specificity in the combined samples were 96.1 and 13.6%, respectively. The CVT risk score has a good estimated overall rate of correct classifications in both validation samples, but its specificity is low. It can be used to avoid unnecessary or dangerous interventions in low-risk patients, and may help to identify high-risk CVT patients. (c) 2009 S. Karger AG, Basel.

  11. Novel search algorithms for a mid-infrared spectral library of cotton contaminants.

    PubMed

    Loudermilk, J Brian; Himmelsbach, David S; Barton, Franklin E; de Haseth, James A

    2008-06-01

    During harvest, a variety of plant based contaminants are collected along with cotton lint. The USDA previously created a mid-infrared, attenuated total reflection (ATR), Fourier transform infrared (FT-IR) spectral library of cotton contaminants for contaminant identification as the contaminants have negative impacts on yarn quality. This library has shown impressive identification rates for extremely similar cellulose based contaminants in cases where the library was representative of the samples searched. When spectra of contaminant samples from crops grown in different geographic locations, seasons, and conditions and measured with a different spectrometer and accessories were searched, identification rates for standard search algorithms decreased significantly. Six standard algorithms were examined: dot product, correlation, sum of absolute values of differences, sum of the square root of the absolute values of differences, sum of absolute values of differences of derivatives, and sum of squared differences of derivatives. Four categories of contaminants derived from cotton plants were considered: leaf, stem, seed coat, and hull. Experiments revealed that the performance of the standard search algorithms depended upon the category of sample being searched and that different algorithms provided complementary information about sample identity. These results indicated that choosing a single standard algorithm to search the library was not possible. Three voting scheme algorithms based on result frequency, result rank, category frequency, or a combination of these factors for the results returned by the standard algorithms were developed and tested for their capability to overcome the unpredictability of the standard algorithms' performances. The group voting scheme search was based on the number of spectra from each category of samples represented in the library returned in the top ten results of the standard algorithms. 
This group algorithm was able to identify correctly as many test spectra as the best standard algorithm without relying on human choice to select a standard algorithm to perform the searches.

  12. The effect of input DNA copy number on genotype call and characterising SNP markers in the humpback whale genome using a nanofluidic array.

    PubMed

    Bhat, Somanath; Polanowski, Andrea M; Double, Mike C; Jarman, Simon N; Emslie, Kerry R

    2012-01-01

    Recent advances in nanofluidic technologies have enabled the use of Integrated Fluidic Circuits (IFCs) for high-throughput Single Nucleotide Polymorphism (SNP) genotyping (GT). In this study, we implemented and validated a relatively low cost nanofluidic system for SNP-GT with and without Specific Target Amplification (STA). As proof of principle, we first validated the effect of input DNA copy number on genotype call rate using well characterised, digital PCR (dPCR) quantified human genomic DNA samples and then implemented the validated method to genotype 45 SNPs in the humpback whale, Megaptera novaeangliae, nuclear genome. When STA was not incorporated, for a homozygous human DNA sample, reaction chambers containing, on average, 9 to 97 copies showed 100% call rate and accuracy. Below 9 copies, the call rate decreased, and at one copy it was 40%. For a heterozygous human DNA sample, the call rate decreased from 100% to 21% when predicted copies per reaction chamber decreased from 38 copies to one copy. The tightness of genotype clusters on a scatter plot also decreased. In contrast, when the same samples were subjected to STA prior to genotyping, a call rate and a call accuracy of 100% were achieved. Our results demonstrate that low input DNA copy number affects the quality of the data generated, particularly for a heterozygous sample. As with human genomic DNA, a call rate and a call accuracy of 100% were achieved with whale genomic DNA samples following multiplex STA using either 15 or 45 SNP-GT assays. These calls were 100% concordant with their true genotypes determined by an independent method, suggesting that the nanofluidic system is a reliable genotyping platform that produces calls with high accuracy and concordance in genomic sequences derived from biological tissue.
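
    The sharp drop in heterozygous call rate at low input is consistent with simple Poisson loading of reaction chambers, since a heterozygous call needs at least one copy of each allele. A toy model of that effect (an illustrative assumption for intuition, not the authors' analysis):

```python
from math import exp

def het_call_rate(mean_copies):
    # Toy Poisson-loading model: with `mean_copies` total genome copies per
    # chamber, each of the two alleles arrives ~ Poisson(mean_copies / 2);
    # a heterozygous call requires at least one copy of BOTH alleles
    p_allele_present = 1.0 - exp(-mean_copies / 2.0)
    return p_allele_present ** 2
```

    At tens of copies per chamber the predicted call rate is effectively 1, while near a single copy it collapses, qualitatively matching the trend reported above.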

  13. Spatial and temporal variations in landscape evolution: historic and longer-term sediment flux through global catchments

    USGS Publications Warehouse

    Covault, Jacob A.; Craddock, William H.; Romans, Brian W.; Fildani, Andrea; Gosai, Mayur

    2013-01-01

    Sediment generation and transport through terrestrial catchments influence soil distribution, geochemical cycling of particulate and dissolved loads, and the character of the stratigraphic record of Earth history. To assess the spatiotemporal variation in landscape evolution, we compare global compilations of stream gauge–derived () and cosmogenic radionuclide (CRN)–derived (predominantly ¹⁰Be; ) denudation of catchments (mm/yr) and sediment load of rivers (Mt/yr). Stream gauges measure suspended sediment loads of rivers during several to tens of years, whereas CRNs provide catchment-integrated denudation rates at 10²–10⁵-yr time scales. Stream gauge–derived and CRN-derived sediment loads in close proximity to one another (<500 km) exhibit broad similarity ( stream gauge samples; CRN samples). Nearly two-thirds of CRN-derived sediment loads exceed historic loads measured at the same locations (). Excessive longer-term sediment loads likely are a result of longer-term recurrence of large-magnitude sediment-transport events. Nearly 80% of sediment loads measured at approximately the same locations exhibit stream gauge loads that are within an order of magnitude of CRN loads, likely as a result of the buffering capacity of large flood plains. Catchments in which space for deposition exceeds sediment supply have greater buffering capacity. Superior locations in which to evaluate anthropogenic influences on landscape evolution might be buffered catchments, in which temporary storage of sediment in flood plains can provide stream gauge–based sediment loads and denudation rates that are applicable over longer periods than the durations of gauge measurements. The buffering capacity of catchments also has implications for interpreting the stratigraphic record; delayed sediment transfer might complicate the stratigraphic record of external forcings and catchment modification.

  14. The contemporary degassing rate of 40Ar from the solid Earth

    PubMed Central

    Bender, Michael L.; Barnett, Bruce; Dreyfus, Gabrielle; Jouzel, Jean; Porcelli, Don

    2008-01-01

    Knowledge of the outgassing history of radiogenic ⁴⁰Ar, derived over geologic time from the radioactive decay of ⁴⁰K, contributes to our understanding of the geodynamic history of the planet and the origin of volatiles on Earth's surface. The ⁴⁰Ar inventory of the atmosphere equals total ⁴⁰Ar outgassing during Earth history. Here, we report the current rate of ⁴⁰Ar outgassing, accessed by measuring the Ar isotope composition of trapped gases in samples of the Vostok and Dome C deep ice cores dating back to almost 800 ka. The modern outgassing rate (1.1 ± 0.1 × 10⁸ mol/yr) is in the range of values expected by summing outgassing from the continental crust and the upper mantle, as estimated from simple calculations and models. The measured outgassing rate is also of interest because it allows dating of air trapped in ancient ice core samples of unknown age, although uncertainties are large (±180 kyr for a single sample or ±11% of the calculated age, whichever is greater). PMID:18550816

  15. Proposing an Empirically Justified Reference Threshold for Blood Culture Sampling Rates in Intensive Care Units

    PubMed Central

    Castell, Stefanie; Schwab, Frank; Geffers, Christine; Bongartz, Hannah; Brunkhorst, Frank M.; Gastmeier, Petra; Mikolajczyk, Rafael T.

    2014-01-01

    Early and appropriate blood culture sampling is recommended as a standard of care for patients with suspected bloodstream infections (BSI) but is rarely taken into account when quality indicators for BSI are evaluated. To date, sampling of about 100 to 200 blood culture sets per 1,000 patient-days is recommended as the target range for blood culture rates. However, the empirical basis of this recommendation is not clear. The aim of the current study was to analyze the association between blood culture rates and observed BSI rates and to derive a reference threshold for blood culture rates in intensive care units (ICUs). This study is based on data from 223 ICUs taking part in the German hospital infection surveillance system. We applied locally weighted regression and segmented Poisson regression to assess the association between blood culture rates and BSI rates. Below 80 to 90 blood culture sets per 1,000 patient-days, observed BSI rates increased with increasing blood culture rates, while there was no further increase above this threshold. Segmented Poisson regression located the threshold at 87 (95% confidence interval, 54 to 120) blood culture sets per 1,000 patient-days. Only one-third of the investigated ICUs displayed blood culture rates above this threshold. We provided empirical justification for a blood culture target threshold in ICUs. In the majority of the studied ICUs, blood culture sampling rates were below this threshold. This suggests that a substantial fraction of BSI cases might remain undetected; reporting observed BSI rates as a quality indicator without sufficiently high blood culture rates might be misleading. PMID:25520442
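
    The threshold-location idea above can be sketched with a simple breakpoint scan: fit a rise-then-plateau model at each candidate breakpoint and keep the best fit. The sketch below uses least squares as a simplified stand-in for the paper's segmented Poisson regression, on noise-free synthetic data shaped like the reported result (all names and numbers hypothetical):

```python
def fit_breakpoint(x, y, candidates):
    # Rise-then-plateau model y ≈ a + b * min(x, c): scan candidate
    # breakpoints c, fit a and b by least squares at each, and return the
    # (c, a, b) with the smallest residual sum of squares
    best = None
    for c in candidates:
        z = [min(xi, c) for xi in x]
        n = len(x)
        mz, my = sum(z) / n, sum(y) / n
        szz = sum((zi - mz) ** 2 for zi in z)
        szy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
        b = szy / szz if szz else 0.0
        a = my - b * mz
        sse = sum((yi - (a + b * zi)) ** 2 for zi, yi in zip(z, y))
        if best is None or sse < best[0]:
            best = (sse, c, a, b)
    return best[1], best[2], best[3]

# Synthetic example: observed BSI rate rises with blood culture rate up to
# 87 sets per 1,000 patient-days, then plateaus (shape only, not the data)
x = list(range(50, 121))
y = [0.1 * min(xi, 87) for xi in x]
c_hat, a_hat, b_hat = fit_breakpoint(x, y, range(55, 120))
```

    A full reproduction would replace the least-squares objective with a Poisson log-likelihood and attach confidence intervals to the breakpoint, as the study does.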

  16. Growth rate of YBCO-Ag superconducting single grains

    NASA Astrophysics Data System (ADS)

    Congreve, J. V. J.; Shi, Y. H.; Dennis, A. R.; Durrell, J. H.; Cardwell, D. A.

    2017-12-01

    The large scale use of (RE)Ba2Cu3O7 bulk superconductors, where RE=Y, Gd, Sm, is, in part, limited by the relatively poor mechanical properties of these inherently brittle ceramic materials. It is reported that alloying of (RE)Ba2Cu3O7 with silver enables a significant improvement in the mechanical strength of bulk, single grain samples without any detrimental effect on their superconducting properties. However, due to the complexity and number of inter-related variables involved in the top seeded melt growth (TSMG) process, the growth of large single grains is difficult and the addition of silver makes it even more difficult to achieve successful growth reliably. The key processing variables in the TSMG process include the times and temperatures of the stages within the heating profile, which can be derived from the growth rate during the growth process. To date, the growth rate of the YBa2Cu3O7-Ag system has not been reported in detail and it is this lacuna that we have sought to address. In this work we measure the growth rate of the YBCO-Ag system using a method based on continuous cooling and isothermal holding (CCIH). We have determined the growth rate by measuring the side length of the crystallised region for a number of samples for specified isothermal hold temperatures and periods. This has enabled the growth rate to be modelled and from this an optimized heating profile for the successful growth of YBCO-Ag single grains to be derived.
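The growth-rate determination described above — side length of the crystallised region measured after specified isothermal hold periods — reduces, at a fixed hold temperature, to the slope of a linear fit. The sketch below uses made-up measurements purely to illustrate the calculation.

```python
# Minimal sketch of the CCIH-style analysis: the single-grain growth rate at
# one hold temperature is the slope of crystallised side length vs. hold time.
# All numbers below are illustrative, not measured values from the paper.
import numpy as np

hold_time_h = np.array([10.0, 20.0, 40.0, 60.0])  # isothermal hold (h)
side_len_mm = np.array([4.1, 8.2, 15.9, 24.2])    # crystallised side length (mm)

# Linear fit: side length = rate * time + offset
rate_mm_per_h, offset = np.polyfit(hold_time_h, side_len_mm, 1)
print(f"growth rate ~ {rate_mm_per_h:.2f} mm/h")
```

Repeating this fit across hold temperatures gives growth rate as a function of temperature, from which an optimized heating profile can be constructed.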

  17. The Distant Type Ia Supernova Rate

    DOE R&D Accomplishments Database

    Pain, R.; Fabbro, S.; Sullivan, M.; Ellis, R. S.; Aldering, G.; Astier, P.; Deustua, S. E.; Fruchter, A. S.; Goldhaber, G.; Goobar, A.; Groom, D. E.; Hardin, D.; Hook, I. M.; Howell, D. A.; Irwin, M. J.; Kim, A. G.; Kim, M. Y.; Knop, R. A.; Lee, J. C.; Perlmutter, S.; Ruiz-Lapuente, P.; Schahmaneche, K.; Schaefer, B.; Walton, N. A.

    2002-05-28

    We present a measurement of the rate of distant Type Ia supernovae derived using 4 large subsets of data from the Supernova Cosmology Project. Within this fiducial sample, which surveyed about 12 square degrees, thirty-eight supernovae were detected at redshifts 0.25-0.85. In a spatially flat cosmological model consistent with the results obtained by the Supernova Cosmology Project, we derive a rest-frame Type Ia supernova rate at a mean redshift z ≈ 0.55 of 1.53 (+0.28/-0.25) (+0.32/-0.31) × 10^-4 h^3 Mpc^-3 yr^-1 or 0.58 (+0.10/-0.09) (+0.10/-0.09) h^2 SNu (1 SNu = 1 supernova per century per 10^10 L_B,☉), where the first uncertainty is statistical and the second includes systematic effects. The dependence of the rate on the assumed cosmological parameters is studied and the redshift dependence of the rate per unit comoving volume is contrasted with local estimates in the context of possible cosmic star formation histories and progenitor models.

  18. Toward the Development of an Objective Index of Dysphonia Severity: A Four-Factor Acoustic Model

    ERIC Educational Resources Information Center

    Awan, Shaheen N.; Roy, Nelson

    2006-01-01

    During assessment and management of individuals with voice disorders, clinicians routinely attempt to describe or quantify the severity of a patient's dysphonia. This investigation used acoustic measures derived from sustained vowel samples to predict dysphonia severity (as determined by auditory-perceptual ratings), for a diverse set of voice…

  19. On the Spectrum of the Plenoptic Function.

    PubMed

    Gilliam, Christopher; Dragotti, Pier-Luigi; Brookes, Mike

    2014-02-01

    The plenoptic function is a powerful tool to analyze the properties of multi-view image data sets. In particular, the understanding of the spectral properties of the plenoptic function is essential in many computer vision applications, including image-based rendering. In this paper, we derive for the first time an exact closed-form expression of the plenoptic spectrum of a slanted plane with finite width and use this expression as the elementary building block to derive the plenoptic spectrum of more sophisticated scenes. This is achieved by approximating the geometry of the scene with a set of slanted planes and evaluating the closed-form expression for each plane in the set. We then use this closed-form expression to revisit uniform plenoptic sampling. In this context, we derive a new Nyquist rate for the plenoptic sampling of a slanted plane and a new reconstruction filter. Through numerical simulations, on both real and synthetic scenes, we show that the new filter outperforms alternative existing filters.

  20. A simple method for the enrichment of bisphenols using boron nitride.

    PubMed

    Fischnaller, Martin; Bakry, Rania; Bonn, Günther K

    2016-03-01

    A simple solid-phase extraction method for the enrichment of 5 bisphenol derivatives using hexagonal boron nitride (BN) was developed. BN was applied to concentrate bisphenol derivatives in spiked water samples and the compounds were analyzed using HPLC coupled to fluorescence detection. The effect of pH and organic solvents on the extraction efficiency was investigated. An enrichment factor of up to 100 was achieved without evaporation and reconstitution. The developed method was applied for the determination of bisphenol A migrated from some polycarbonate plastic products. Furthermore, bisphenol derivatives were analyzed in spiked and non-spiked canned food and beverages. None of the analyzed samples exceeded the migration limit of 0.6 mg/kg food set by the European Union. The method showed good recovery rates ranging from 80% to 110%. Validation of the method was performed in terms of accuracy and precision. The applied method is robust, fast, efficient and easily adaptable to different analytical problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Alpine Cliff Backwearing Rates Derived From Cosmogenic 10-Be in Active Medial Moraines

    NASA Astrophysics Data System (ADS)

    Ward, D. J.; Anderson, R. S.

    2008-12-01

    We use cosmogenic 10Be concentrations in rock samples from an active, ice-cored medial moraine to constrain glacial valley sidewall backwearing rates in the Kichatna Mountains, Alaska Range, Alaska. Kilometer-tall granite walls that tower over active glaciers are some of the most dramatic landscape features of the Alaska Range. The sheer scale of the relief speaks to the relative rates of valley incision by glaciers and rockwall retreat, but these rates are difficult to determine independently of one another. We present a method that uses cosmogenic nuclides to measure rockwall backwearing rates in glaciated settings on timescales of 10^3 yr, with a straightforward sampling strategy that exploits active medial moraines. Ablation-dominated medial moraines form by exhumation of debris-rich ice in the ablation zone of a glacier. Exhumed debris insulates the underlying ice and reduces its ablation rate relative to bare ice, promoting formation of a ridge-like, ice-cored moraine. The rock debris is primarily derived from supraglacial rockfalls, which become incorporated in the ice along the glacier margins in the accumulation area. These lateral bands of debris-rich ice merge to form a medial debris band when glacial tributaries converge. The debris is minimally mixed until it is exhumed on the moraine crest. In the simplest case, such a system serves as a conveyor belt, bringing material from a specific part of the ablation zone valley wall to a specific point on a medial moraine in the ablation zone. We collected 5 grab samples, each consisting of ~30 rock fragments (2-10 cm) of the same lithology, from a 4.5 km longitudinal transect on the crest of the medial moraine of the Shadows glacier. We sampled the crest to minimize the amount of post-exhumation transport and mixing that may have occurred; each sample probably contains rocks from only one to a few rockfall events. Measured 10Be concentrations range from 1.5×10^4 to 3×10^4 at/g-qtz and are higher downvalley. 
First-order interpretation of these results yields minimum erosion rates of 0.2 to 0.5 mm/yr, consistent with erosion rates measured by various means in other glacial environments. This interpretation assumes a simple source area geometry and 10Be production rate scaling. To interpret these measurements in their full geological and topographic context, we present numerical models to describe how the expected distribution of 10Be concentrations should vary with erosion rate. This relationship is affected by source area hypsography and the distributions of size and recurrence interval of rockfall events. We randomly sample events based on a power-law size-recurrence relationship (constrained by field observations) from a numerical grid of production rates derived from a DEM of the source area. This yields the expected probability distribution of 10Be concentrations in the rockfall debris for a given mean erosion rate, weighted by event volume and source hypsography. The measured 10Be concentrations are low enough that accumulation during burial, exhumation, and transport in the medial moraine could account for up to ~1/4 of the signal, given our best estimates of the glacier's surface speed (~30 m/yr). The slight downvalley increase in the concentrations supports a component of exposure in the moraine during transport. The amount of exposure depends on factors such as the entry and exit points of debris incorporated into the glacial ice, the glacial mass balance pattern, and the downvalley surface speed. We assess these effects with analytical and numerical models of debris transport in medial moraines, following Anderson (2000).
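The "first-order interpretation" step uses the standard steady-state relation between cosmogenic nuclide concentration and erosion rate, ε = PΛ/(ρN). The sketch below applies it to the measured concentration range; the production rate P is an assumed illustrative value (the paper's full analysis instead scales P over the source-area hypsometry).

```python
# Steady-state cosmogenic erosion-rate relation: epsilon = P * Lambda / (rho * N).
# P is an assumed illustrative value; Lambda and rho are standard choices.

P = 10.0        # 10Be production rate at the rockwall (at/g/yr, assumed)
LAMBDA = 160.0  # spallation attenuation length (g/cm^2)
RHO = 2.7       # rock density (g/cm^3)

def erosion_rate_mm_per_yr(n_conc: float) -> float:
    """Steady-state erosion rate from 10Be concentration N (at/g)."""
    eps_cm_per_yr = P * LAMBDA / (RHO * n_conc)
    return eps_cm_per_yr * 10.0  # cm/yr -> mm/yr

for n in (1.5e4, 3.0e4):  # measured concentration range from the moraine samples
    print(f"N = {n:.1e} at/g -> {erosion_rate_mm_per_yr(n):.2f} mm/yr")
```

With P ≈ 10 at/g/yr the measured concentrations translate to ~0.2-0.4 mm/yr, in line with the rates quoted in the abstract.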

  2. Trauma and conditional risk of posttraumatic stress disorder in two American Indian reservation communities.

    PubMed

    Beals, Janette; Belcourt-Dittloff, Annjeanette; Garroutte, Eva M; Croy, Calvin; Jervis, Lori L; Whitesell, Nancy Rumbaugh; Mitchell, Christina M; Manson, Spero M

    2013-06-01

    To determine conditional risk of posttraumatic stress disorder (PTSD) in two culturally distinct American Indian reservation communities. Data derived from the American Indian Service Utilization, Psychiatric Epidemiology, Risk and Protective Factors Project, a cross-sectional population-based survey that was completed between 1997 and 2000. This study focused on 1,967 participants meeting the DSM-IV criteria for trauma exposure. Traumas were grouped into interpersonal, non-interpersonal, witnessed, and "trauma to close others" categories. Analyses examined distribution of worst traumas, conditional rates of PTSD following exposure, and distributions of PTSD cases deriving from these events. Bivariate and multivariate logistic regressions estimated associations of lifetime PTSD with trauma type. Overall, 15.9 % of those exposed to DSM-IV trauma qualified for lifetime PTSD, a rate comparable to similar US studies. Women were more likely to develop PTSD than were men. The majority (60 %) of cases of PTSD among women derived from interpersonal trauma exposure (in particular, sexual and physical abuse); among men, cases were more evenly distributed across trauma categories. Previous research has demonstrated higher rates of both trauma exposure and PTSD in American Indian samples compared to other Americans. This study shows that conditional rates of PTSD are similar to those reported elsewhere, suggesting that the elevated prevalence of this disorder in American Indian populations is largely due to higher rates of trauma exposure.

  3. Atomic oxygen effects on metals

    NASA Technical Reports Server (NTRS)

    Fromhold, Albert T.

    1987-01-01

    The effect of specimen geometry on the attack of metals by atomic oxygen is addressed. This is done by extending the coupled-currents approach in metal oxidation to spherical and cylindrical geometries. Kinetic laws are derived for the rates of oxidation of samples having these geometries. It is found that the burn-up time for spherical particles of a given diameter can be as much as a factor of 3 shorter than the time required to completely oxidize a planar sample of the same thickness.
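The factor of 3 can be checked with a back-of-envelope geometric argument, assuming linear (interface-limited) kinetics so that burn-up time scales with oxide volume per exposed area; this is a sketch of the scaling only, not the paper's coupled-currents derivation:

```latex
% Burn-up time scales with (volume)/(exposed area) under linear kinetics.
% Sphere of diameter d:                      V/A = (\pi d^3/6)/(\pi d^2) = d/6
% Planar slab of thickness d, both faces:    V/A = d/2
\frac{t_{\text{slab}}}{t_{\text{sphere}}} \approx \frac{d/2}{d/6} = 3
```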

  4. In-situ temperature-controllable shear flow device for neutron scattering measurement--an example of aligned bicellar mixtures.

    PubMed

    Xia, Yan; Li, Ming; Kučerka, Norbert; Li, Shutao; Nieh, Mu-Ping

    2015-02-01

    We have designed and constructed a temperature-controllable shear flow cell for in-situ study of flow-alignable systems. The device has been tested in neutron diffraction and has the potential to be applied in the small angle neutron scattering configuration to characterize the nanostructures of materials under flow. The required sample amount is as small as 1 ml. The shear rate on the sample is controlled by the flow rate produced by an external pump and can potentially vary from 0.11 to 3.8 × 10^5 s^-1. Both unidirectional and oscillatory flows are achievable by the setting of the pump. The instrument is validated using a lipid bicellar mixture, which yields non-alignable nanodisc-like bicelles at low T and shear-alignable membranes at high T. Using the shear cell, the bicellar membranes can be aligned at 31 °C under flow with a shear rate of 11.11 s^-1. Multiple high-order Bragg peaks are observed and the full width at half maximum of the "rocking curve" around the Bragg condition is found to be 3.5°-4.1°. It is noteworthy that a portion of the membranes remains aligned even after the flow stops. Detailed and comprehensive intensity corrections for the rocking curve have been derived based on the finite rectangular sample geometry and the absorption of the neutrons as a function of sample angle [See supplementary material at http://dx.doi.org/10.1063/1.4908165 for the detailed derivation of the absorption correction]. The device offers a new capability to study the conformational or orientational anisotropy of solvated macromolecules or aggregates induced by hydrodynamic interaction in a flow field.

  5. Parametric analyses of summative scores may lead to conflicting inferences when comparing groups: A simulation study.

    PubMed

    Khan, Asaduzzaman; Chien, Chi-Wen; Bagraith, Karl S

    2015-04-01

    To investigate whether using a parametric statistic in comparing groups leads to different conclusions when using summative scores from rating scales compared with using their corresponding Rasch-based measures. A Monte Carlo simulation study was designed to examine between-group differences in the change scores derived from summative scores from rating scales, and those derived from their corresponding Rasch-based measures, using 1-way analysis of variance. The degree of inconsistency between the 2 scoring approaches (i.e. summative and Rasch-based) was examined, using varying sample sizes, scale difficulties and person ability conditions. This simulation study revealed scaling artefacts that could arise from using summative scores rather than Rasch-based measures for determining the changes between groups. The group differences in the change scores were statistically significant for summative scores under all test conditions and sample size scenarios. However, none of the group differences in the change scores were significant when using the corresponding Rasch-based measures. This study raises questions about the validity of the inference on group differences of summative score changes in parametric analyses. Moreover, it provides a rationale for the use of Rasch-based measures, which can allow valid parametric analyses of rating scale data.

  6. Preliminary data suggest rates of male military sexual trauma may be higher than previously reported.

    PubMed

    Sheppard, Sean C; Hickling, Edward J; Earleywine, Mitch; Hoyt, Tim; Russo, Amanda R; Donati, Matthew R; Kip, Kevin E

    2015-11-01

    Stigma associated with disclosing military sexual trauma (MST) makes estimating an accurate base rate difficult. Anonymous assessment may help alleviate stigma. Although anonymous research has found higher rates of male MST, no study has evaluated whether providing anonymity sufficiently mitigates the impact of stigma on accurate reporting. This study used the unmatched count technique (UCT), a randomized response technique, to gain information about the accuracy of base rate estimates of male MST derived via anonymous assessment of Operation Enduring Freedom (OEF)/Operation Iraqi Freedom (OIF) combat veterans. A cross-sectional convenience sample of 180 OEF/OIF male combat veterans, recruited via online websites for military populations, provided data about history of MST via traditional anonymous self-report and the UCT. The UCT revealed a rate of male MST more than 15 times higher than the rate derived via traditional anonymous assessment (1.1% vs. 17.2%). These data suggest that anonymity does not adequately mitigate the impact of stigma on disclosure of male MST. Results, though preliminary, suggest that published rates of male MST may substantially underestimate the true rate of this problem. The UCT has significant potential to improve base rate estimation of sensitive behaviors in the military. (c) 2015 APA, all rights reserved.
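The UCT estimator works by giving a control group a list of k innocuous items and a treatment group the same list plus the sensitive item; respondents report only *how many* items apply, and the prevalence estimate is the difference in mean counts. A simulated sketch (all numbers illustrative, not the study's data):

```python
# Unmatched count technique (UCT) estimator on simulated responses.
# Control: count of k innocuous items; treatment: same plus the sensitive item.
import numpy as np

rng = np.random.default_rng(1)
n, k, true_prevalence = 20_000, 4, 0.17

control = rng.binomial(k, 0.5, n)                               # innocuous counts
treatment = rng.binomial(k, 0.5, n) + rng.binomial(1, true_prevalence, n)

estimate = treatment.mean() - control.mean()                    # UCT estimator
print(f"estimated prevalence ~ {estimate:.3f}")
```

Because no respondent ever discloses the sensitive item directly, the design trades statistical efficiency (the innocuous items add variance) for protection against stigma-driven underreporting.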

  7. The Rayleigh-Taylor instability in a self-gravitating two-layer viscous sphere

    NASA Astrophysics Data System (ADS)

    Mondal, Puskar; Korenaga, Jun

    2018-03-01

    The dispersion relation of the Rayleigh-Taylor instability in spherical geometry is of profound importance in the context of the Earth's core formation. Here we present a complete derivation of this dispersion relation for a self-gravitating two-layer viscous sphere. Such a relation is, however, obtained through the solution of a complex transcendental equation, and it is difficult to gain physical insight directly from the transcendental equation itself. We thus also derive an empirical formula to compute the growth rate, by combining Monte Carlo sampling of the relevant model parameter space with linear regression. Our analysis indicates that the growth rate of the Rayleigh-Taylor instability is most sensitive to the viscosity of the inner layer in a physical setting that is most relevant to core formation.
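The "Monte Carlo sampling plus linear regression" strategy can be illustrated generically: sample the parameter space, evaluate the expensive solution at each sample, and regress log growth rate on log parameters to obtain a power-law emulator. The function below is a stand-in power law, *not* the paper's actual dispersion relation, and all parameter ranges are assumed.

```python
# Generic sketch: build an empirical growth-rate formula by Monte Carlo
# sampling of parameter space + linear regression in log space.
import numpy as np

rng = np.random.default_rng(2)

def expensive_growth_rate(mu_inner, mu_outer, drho):
    # Stand-in for solving the transcendental dispersion relation.
    return 0.3 * drho**1.0 * mu_inner**-0.8 * mu_outer**-0.1

mu_i = 10 ** rng.uniform(18, 22, 500)  # inner-layer viscosity (Pa s, assumed range)
mu_o = 10 ** rng.uniform(18, 22, 500)  # outer-layer viscosity (Pa s, assumed range)
drho = rng.uniform(100, 1000, 500)     # density contrast (kg/m^3, assumed range)
sigma = expensive_growth_rate(mu_i, mu_o, drho)

# Fit log(sigma) = c + a*log(mu_i) + b*log(mu_o) + e*log(drho).
X = np.column_stack([np.ones(500), np.log(mu_i), np.log(mu_o), np.log(drho)])
coef, *_ = np.linalg.lstsq(X, np.log(sigma), rcond=None)
print(coef[1:])  # recovered exponents; largest magnitude = most sensitive input
```

The fitted exponents directly rank parameter sensitivity, which is how a conclusion like "most sensitive to the inner-layer viscosity" falls out of such a regression.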

  8. The History and Rate of Star Formation within the G305 Complex

    NASA Astrophysics Data System (ADS)

    Faimali, Alessandro Daniele

    2013-07-01

    Within this thesis, we present an extended multiwavelength analysis of the rich massive Galactic star-forming complex G305. We have focused our attention on studying both the embedded massive star-forming population within G305 and the intermediate- to low-mass content of the region. Though massive stars play an important role in the shaping and evolution of their host galaxies, the physics of their formation still remains unclear. We have therefore set out to study the nature of star formation within this complex, and also to identify the impact that such a population has on the evolution of G305. We firstly present a Herschel far-infrared study towards G305, utilising PACS 70, 160 micron and SPIRE 250, 350, and 500 micron observations from the Hi-GAL survey of the Galactic plane. The focus of this study is to identify the embedded massive star-forming population within G305, by combining far-infrared data with radio continuum, H2O maser, methanol maser, MIPS, and Red MSX Source survey data available from previous studies. From this sample, some 16 candidate associations are identified as embedded massive star-forming regions, and we derive a two-colour selection criterion of log(F70/F500) >= 1 and log(F160/F350) >= 1.6 to identify an additional 31 embedded massive star candidates with no associated star-formation tracers. Using this result, we are able to derive a star formation rate (SFR) of 0.01-0.02 Msun/yr. Comparing this resolved star formation rate to extragalactic star formation rate tracers (based on the Kennicutt-Schmidt relation), we find the star formation activity is underestimated by a factor of >=2 in comparison to the SFR derived from the YSO population. By next combining data available from 2MASS and VVV, Spitzer GLIMPSE and MIPSGAL, MSX, and Herschel Hi-GAL, we are able to identify the low- to intermediate-mass YSOs present within the complex. 
Employing a series of stringent colour selection criteria and fitting reddened stellar atmosphere models, we are able to remove a significant number of contaminating sources from our sample, leaving us with a highly reliable sample of some 599 candidate YSOs. From this sample, we derive a present-day SFR of 0.005±0.001 Msun/yr, and find the YSO mass function (YMF) of G305 to be significantly steeper than the standard Salpeter-Kroupa IMF. We find evidence of mass segregation towards G305, with a significant variation of the YMF between the active star-forming region and the outer region. The spatial distribution, and age gradient, of our candidate YSOs also seem to rule out the scenario of propagating star formation within G305; a more likely scenario is punctuated star formation over the lifetime of the complex.

  9. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses is explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
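The random packet-loss setting can be sketched with a toy simulation: single-integrator agents on a small graph, where each undirected link is delivered or dropped independently (Bernoulli) at each sampling instant. The graph, gain, and drop probability below are illustrative choices, not the paper's controller design.

```python
# Toy sampled-data consensus under Bernoulli packet losses:
# x[k+1] = x[k] - h * L_k x[k], with L_k built only from delivered links.
import numpy as np

rng = np.random.default_rng(3)
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]  # 5-agent path graph
h, p_drop, steps = 0.1, 0.3, 200

x = rng.uniform(-1, 1, 5)
spread0 = x.max() - x.min()               # initial disagreement

for _ in range(steps):
    L = np.zeros((5, 5))
    for i, j in edges:
        if rng.random() > p_drop:         # link delivered this sampling instant
            L[i, i] += 1
            L[j, j] += 1
            L[i, j] -= 1
            L[j, i] -= 1
    x = x - h * (L @ x)                   # sampled-data consensus update

print(f"spread ratio after {steps} steps: {(x.max() - x.min()) / spread0:.4f}")
```

Despite 30% of transmissions being lost, the disagreement still contracts on average, which is the qualitative behavior the paper's criteria make precise (relating sampling interval, loss probability, and the Laplacian eigenvalues).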

  10. Coupling of physical erosion and chemical weathering after phases of intense human activity

    NASA Astrophysics Data System (ADS)

    Schoonejans, Jerome; Vanacker, Veerle; Opfergelt, Sophie; Ameijeiras-Mariño, Yolanda; Kubik, Peter W.

    2014-05-01

    Anthropogenic disturbance of natural vegetation profoundly alters the lateral and vertical fluxes of soil nutrients and particles at the land surface. Human-induced acceleration of soil erosion can thereby result in an imbalance between physical erosion, soil production and chemical weathering. The (de-)coupling between physical erosion and chemical weathering in ecosystems with strong anthropogenic disturbances is not yet fully understood, as earlier studies mostly focused on natural ecosystems. In this study, we explore the chemical weathering intensity for four study sites located in the Internal Zone of the Spanish Betic Cordillera. Most of the sites belong to the Nevado-Filabres complex, but are characterized by different rates of long-term exhumation, 10Be catchment-wide denudation and hill slope morphology. Denudation rates are generally low, but show large variation between the sites (from 23 to 246 mm kyr-1). The magnitude of denudation rates is consistent with longer-term uplift rates derived from marine deposits, fission-track measurements and vertical fault slip rates. Two to three soil profiles were sampled per study site at exposed ridge tops. All soils overlie fractured mica schist, and are very thin (<60 cm). In each soil profile, we sampled 5 depth slices, rock fragments and the (weathered) bedrock. In total, 38 soil and 20 rock samples were analyzed for their chemical composition. The chemical weathering intensity is constrained by the Chemical Depletion Fraction (CDF), which is based on a chemical mass balance approach using Zr as an immobile element. Chemical weathering accounts for 5 to 35% of the total mass lost due to denudation. We observe systematically higher chemical weathering intensities (CDFs) in sites with lower denudation rates (and vice versa), suggesting that weathering is supply-limited. 
Our measurements of soil elemental losses from 10 soil profiles suggest that the observed variation in chemical weathering is strongly associated with long-term 10Be derived denudation rates, and tectonic uplift rates. Our data do not provide direct evidence of an imbalance between soil production and chemical weathering, despite more than 2000 years of intense human activity.
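The Zr mass-balance step above has a compact worked form: because Zr is immobile, it is enriched in soil as mass is lost chemically, so CDF = 1 − [Zr]_rock/[Zr]_soil, and the chemical weathering rate is the total denudation rate times the CDF. The concentrations and denudation rate below are illustrative, not the study's values.

```python
# Chemical Depletion Fraction from immobile Zr, and the resulting split of
# total denudation into a chemical weathering component. Numbers illustrative.

def cdf(zr_rock_ppm: float, zr_soil_ppm: float) -> float:
    """CDF = 1 - [Zr]_rock / [Zr]_soil (Zr enriches as mass is lost)."""
    return 1.0 - zr_rock_ppm / zr_soil_ppm

D = 100.0                 # total denudation rate (mm/kyr, illustrative)
frac = cdf(150.0, 200.0)  # -> 0.25
print(f"CDF = {frac:.2f}; chemical weathering rate ~ {D * frac:.0f} mm/kyr")
```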

  11. [Studies on origin of illicit methamphetamine. I. The relationship of enantiomeric compositions between methamphetamine and its raw material (ephedrine)].

    PubMed

    Kikura, R; Shimamine, M; Nakahara, Y; Terao, T

    1992-01-01

    In order to elucidate the relationship of enantiomeric compositions between methamphetamine (MA) and its raw materials, ephedrine (EP) enantiomers, commercial EP samples and MA samples prepared from them were analyzed by HPLC using GITC-prelabeling. The GITC derivatives were separated on ODS column using methanol-water-acetic acid (45:54:1) at a flow rate of 1.2 ml/min for EP and tetrahydrofuran-water-acetic acid (29:70:1) at a flow rate of 1 ml/min for MA. The chromatographic conditions resulted in such a good separation of four EP and two MA enantiomers that 1/1000 enantiomeric impurities could be detected and discriminated from the major enantiomer with good reproducibility. Moreover, it was demonstrated that the asymmetric center at alpha-position of amino group was entirely retained throughout the reductive reaction of the EP samples, and that the MA samples inherited the enantiomeric character from the EP samples used. This method was applied to discriminative analysis of MA samples seized in Japan.

  12. Empirical Derivation and Validation of a Clinical Case Definition for Neuropsychological Impairment in Children and Adolescents.

    PubMed

    Beauchamp, Miriam H; Brooks, Brian L; Barrowman, Nick; Aglipay, Mary; Keightley, Michelle; Anderson, Peter; Yeates, Keith O; Osmond, Martin H; Zemek, Roger

    2015-09-01

    Neuropsychological assessment aims to identify individual performance profiles in multiple domains of cognitive functioning; however, substantial variation exists in how deficits are defined and what cutoffs are used, and there is no universally accepted definition of neuropsychological impairment. The aim of this study was to derive and validate a clinical case definition rule to identify neuropsychological impairment in children and adolescents. An existing normative pediatric sample was used to calculate base rates of abnormal functioning on eight measures covering six domains of neuropsychological functioning. The dataset was analyzed by varying the range of cutoff levels [1, 1.5, and 2 standard deviations (SDs) below the mean] and number of indicators of impairment. The derived rule was evaluated by bootstrap, internal and external clinical validation (orthopedic and traumatic brain injury). Our neuropsychological impairment (NPI) rule was defined as "two or more test scores that fall 1.5 SDs below the mean." The rule identifies 5.1% of the total sample as impaired in the assessment battery and consistently targets between 3 and 7% of the population as impaired even when age, domains, and number of tests are varied. The NPI rate increases in groups known to exhibit cognitive deficits. The NPI rule provides a psychometrically derived method for interpreting performance across multiple tests and may be used in children 6-18 years. The rule may be useful to clinicians and scientists who wish to establish whether specific individuals or clinical populations present within expected norms versus impaired function across a battery of neuropsychological tests.
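The base-rate logic behind the NPI rule can be illustrated by Monte Carlo: in a normal population, the fraction with two or more of eight test scores below −1.5 SD depends strongly on how correlated the tests are. The correlation value below is illustrative; for fully independent tests the rate is ~9.5% analytically, while correlated batteries (like real neuropsychological tests) yield different, typically smaller, multivariate base rates.

```python
# Monte Carlo base rate of ">= 2 of 8 scores below -1.5 SD" in a normal
# population, for independent vs. correlated tests (r value illustrative).
import numpy as np

rng = np.random.default_rng(4)
n_tests, n_people, cut = 8, 200_000, -1.5

def rate_low_scores(r):
    """Fraction of simulated people with >= 2 of n_tests scores below cut."""
    cov = np.full((n_tests, n_tests), r) + (1 - r) * np.eye(n_tests)
    scores = rng.multivariate_normal(np.zeros(n_tests), cov, n_people)
    return np.mean((scores < cut).sum(axis=1) >= 2)

r_indep = rate_low_scores(0.0)  # ~0.095 analytically for independent tests
r_corr = rate_low_scores(0.4)   # inter-test correlation shifts the base rate
print(f"independent: {r_indep:.3f}, correlated (r=0.4): {r_corr:.3f}")
```

This is why empirically derived base rates from a normative sample, as used in the study, are preferable to assuming independence across tests.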

  13. The drainage of the Baltic Ice Lake and a new Scandinavian reference 10Be production rate

    NASA Astrophysics Data System (ADS)

    Stroeven, Arjen P.; Heyman, Jakob; Fabel, Derek; Björck, Svante; Caffee, Marc W.; Fredin, Ola; Harbor, Jonathan M.

    2015-04-01

    An important constraint on the reliability of cosmogenic nuclide exposure dating is the derivation of tightly controlled production rates. We present a new dataset for 10Be production rate calibration from Mount Billingen, southern Sweden, the site of the final drainage of the Baltic Ice Lake, an event dated to 11,620 ± 100 cal yr BP. Nine samples of flood-scoured bedrock surfaces and depositional boulders and cobbles unambiguously connected to the drainage event yield a reference 10Be production rate of 4.09 ± 0.22 atoms g-1 yr-1 for the CRONUS Lm scaling and 3.93 ± 0.21 atoms g-1 yr-1 for the LSD general spallation scaling. We also recalibrate the reference 10Be production rates for four sites in Norway and combine these with the Billingen results to derive a tightly clustered Scandinavian reference 10Be production rate of 4.12 ± 0.10 (4.12 ± 0.25 for altitude scaling) atoms g-1 yr-1 for the Lm scaling scheme and 3.96 ± 0.10 (3.96 ± 0.24 for altitude scaling) atoms g-1 yr-1 for the LSD scaling scheme.
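The calibration arithmetic behind a result like this is simple in the zero-erosion, zero-inheritance case: the site production rate is the measured 10Be concentration divided by the independently known exposure age, and the reference (sea-level, high-latitude) rate follows by dividing out a scaling factor. The concentration and scaling factor below are illustrative assumptions, not the paper's sample data.

```python
# Minimal sketch of a zero-erosion 10Be production-rate calibration.
# Concentration and site scaling factor are illustrative, not measured values.

AGE_YR = 11_620  # drainage age of the Baltic Ice Lake (cal yr BP)

def reference_production_rate(n_conc: float, scaling: float) -> float:
    """Reference 10Be production rate (at/g/yr) from concentration (at/g),
    assuming no erosion and no nuclide inheritance."""
    site_rate = n_conc / AGE_YR
    return site_rate / scaling

# e.g. an illustrative sample: N = 6.2e4 at/g with a site scaling factor of 1.30
print(f"{reference_production_rate(6.2e4, 1.30):.2f} at/g/yr")
```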

  14. Supernovae in the Subaru Deep Field: the rate and delay-time distribution of Type Ia supernovae out to redshift 2

    NASA Astrophysics Data System (ADS)

    Graur, O.; Poznanski, D.; Maoz, D.; Yasuda, N.; Totani, T.; Fukugita, M.; Filippenko, A. V.; Foley, R. J.; Silverman, J. M.; Gal-Yam, A.; Horesh, A.; Jannuzi, B. T.

    2011-10-01

    The Type Ia supernova (SN Ia) rate, when compared to the cosmic star formation history (SFH), can be used to derive the delay-time distribution (DTD; the hypothetical SN Ia rate versus time following a brief burst of star formation) of SNe Ia, which can distinguish among progenitor models. We present the results of a supernova (SN) survey in the Subaru Deep Field (SDF). Over a period of 3 years, we have observed the SDF on four independent epochs with Suprime-Cam on the Subaru 8.2-m telescope, with two nights of exposure per epoch, in the R, i'and z' bands. We have discovered 150 SNe out to redshift z≈ 2. Using 11 photometric bands from the observer-frame far-ultraviolet to the near-infrared, we derive photometric redshifts for the SN host galaxies (for 24 we also have spectroscopic redshifts). This information is combined with the SN photometry to determine the type and redshift distribution of the SN sample. Our final sample includes 28 SNe Ia in the range 1.0 < z < 1.5 and 10 in the range 1.5 < z < 2.0. As our survey is largely insensitive to core-collapse SNe (CC SNe) at z > 1, most of the events found in this range are likely SNe Ia. Our SN Ia rate measurements are consistent with those derived from the Hubble Space Telescope (HST) Great Observatories Origins Deep Survey (GOODS) sample, but the overall uncertainty of our 1.5 < z < 2.0 measurement is a factor of 2 smaller, of 35-50 per cent. Based on this sample, we find that the SN Ia rate evolution levels off at 1.0 < z < 2.0, but shows no sign of declining. Combining our SN Ia rate measurements and those from the literature, and comparing to a wide range of possible SFHs, the best-fitting DTD (with a reduced χ2= 0.7) is a power law of the form Ψ(t) ∝tβ, with index β=-1.1 ± 0.1 (statistical) ±0.17 (systematic). 
    This result is consistent with other recent DTD measurements at various redshifts and environments, and is in agreement with a generic prediction of the double-degenerate progenitor scenario for SNe Ia. Most single-degenerate models predict different DTDs. By combining the contribution from CC SNe, based on the wide range of SFHs, with that from SNe Ia, calculated with the best-fitting DTD, we predict that the mean present-day cosmic iron abundance is in the range Z_Fe = (0.09-0.37) Z_Fe,⊙. We further predict that the high-z SN searches now beginning with HST will discover 2-11 SNe Ia at z > 2.
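    The rate-DTD connection described in this abstract can be sketched numerically: the SN Ia rate is the convolution of the SFH with the DTD. The sketch below uses a toy Gaussian SFH and the best-fitting power-law index β = -1.1; the normalization and the minimum delay are illustrative assumptions, not values from the paper.

```python
import numpy as np

def snia_rate(t_grid, sfh, beta=-1.1, t_min=0.04):
    """SN Ia rate as the convolution of a star-formation history with a
    power-law delay-time distribution Psi(t) ~ t**beta.  Normalization is
    arbitrary; delays shorter than t_min (Gyr) contribute nothing."""
    dt = t_grid[1] - t_grid[0]                 # uniform grid spacing
    rates = []
    for i, t in enumerate(t_grid):
        delays = t - t_grid[:i + 1]            # time since each star-forming epoch
        dtd = np.where(delays >= t_min,
                       np.clip(delays, t_min, None) ** beta, 0.0)
        rates.append(np.sum(dtd * sfh[:i + 1]) * dt)
    return np.array(rates)

t = np.linspace(0.0, 13.0, 500)                # cosmic time in Gyr
sfh = np.exp(-((t - 3.0) / 2.0) ** 2)          # toy Gaussian burst of star formation
rate = snia_rate(t, sfh)                       # peaks after the SFH peak
```

    Because the DTD kernel is one-sided (causal), the rate curve always lags the star-formation burst, which is the qualitative behaviour the DTD method exploits.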

  15. Optimal Inspection of Imports to Prevent Invasive Pest Introduction.

    PubMed

    Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G

    2018-03-01

    The United States imports more than 1 billion live plants annually-an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
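    As a rough illustration of the slippage quantity described above, the following sketch estimates expected slippage by Monte Carlo under an assumed zero-tolerance acceptance rule (the lot ships only if no inspected unit is found infested). The rule and all parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import random

def expected_slippage_mc(lot_size, sample_size, infest_rate, detect_rate,
                         n_trials=5000, seed=0):
    """Monte Carlo estimate of expected slippage: the mean number of
    infested units shipped when a lot is accepted only if no infested
    unit is detected in the inspected sample (zero-tolerance rule)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        lot = [rng.random() < infest_rate for _ in range(lot_size)]
        inspected = rng.sample(range(lot_size), sample_size)
        detected = any(lot[i] and rng.random() < detect_rate for i in inspected)
        if not detected:               # lot accepted: every infested unit slips
            total += sum(lot)
    return total / n_trials

# Shifting inspection effort onto a lot sharply reduces its slippage,
# which is why the optimal allocation targets large, high-risk lots.
low_effort = expected_slippage_mc(200, 2, 0.05, 0.9)
high_effort = expected_slippage_mc(200, 40, 0.05, 0.9)
```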

  16. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  17. The effect of sampling rate and lowpass filters on saccades - A modeling approach.

    PubMed

    Mack, David J; Belfanti, Sandro; Schwarz, Urs

    2017-12-01

    The study of eye movements has become popular in many fields of science. However, using the preprocessed output of an eye tracker without scrutiny can lead to low-quality or even erroneous data. For example, the sampling rate of the eye tracker influences saccadic peak velocity, while inadequate filters fail to suppress noise or introduce artifacts. Despite previously published guiding values, most filter choices still seem motivated by a trial-and-error approach, and a thorough analysis of filter effects is missing. Therefore, we developed a simple and easy-to-use saccade model that incorporates measured amplitude-velocity main sequences and produces saccades with a similar frequency content to real saccades. We also derived a velocity divergence measure to rate deviations between velocity profiles. In total, we simulated 155 saccades ranging from 0.5° to 60° and subjected them to different sampling rates, noise compositions, and various filter settings. The final goal was to compile a list with the best filter settings for each of these conditions. Replicating previous findings, we observed reduced peak velocities at lower sampling rates. However, this effect was highly non-linear over amplitudes and increasingly stronger for smaller saccades. Interpolating the data to a higher sampling rate significantly reduced this effect. We hope that our model and the velocity divergence measure will be used to provide a quickly accessible ground truth without the need for recording and manually labeling saccades. The comprehensive list of filters allows one to choose the correct filter for analyzing saccade data without resorting to trial-and-error methods.
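    The reported attenuation of saccadic peak velocity at low sampling rates can be illustrated with a minimal sketch. This uses a generic minimum-jerk saccade profile rather than the authors' model, and the amplitude, duration, and rates are illustrative assumptions.

```python
import numpy as np

def peak_velocity(fs, amplitude=10.0, duration=0.05):
    """Peak velocity (deg/s) of a toy saccade sampled at fs Hz, with
    velocity estimated by finite differences, as an eye tracker would."""
    t = np.arange(0.0, duration, 1.0 / fs)
    x = t / duration
    # Minimum-jerk position profile: a smooth S-shaped trajectory whose
    # analytic peak velocity is 1.875 * amplitude / duration.
    pos = amplitude * (10 * x**3 - 15 * x**4 + 6 * x**5)
    return np.gradient(pos, t).max()

v_1000 = peak_velocity(1000.0)   # high-end research sampling rate
v_50 = peak_velocity(50.0)       # commercial-video-like sampling rate
```

    The coarse grid misses the instant of true peak velocity, so the low-rate estimate comes out lower; interpolating to a denser grid before differentiating reduces the bias, as the abstract notes.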

  18. [Study on the genuineness and producing area of Panax notoginseng based on infrared spectroscopy combined with discriminant analysis].

    PubMed

    Liu, Fei; Wang, Yuan-zhong; Yang, Chun-yan; Jin, Hang

    2015-01-01

    The genuineness and producing area of Panax notoginseng were studied using infrared spectroscopy combined with discriminant analysis. Infrared spectra of 136 taproots of P. notoginseng from 13 planting points in 11 counties were collected, and second-derivative spectra were calculated with Omnic 8.0 software. The infrared spectra and their second-derivative spectra in the range 1800-700 cm-1 were used to build models by stepwise discriminant analysis in order to assess the genuineness of P. notoginseng. The model built on the second-derivative spectra showed the better recognition of genuineness: the correct rate of returned classification reached 100%, and the prediction accuracy was 93.4%. The stability of the model was tested by cross validation, and extrapolation validation was also performed. The second-derivative spectra, combined with the same discriminant analysis method, were then used to distinguish the producing area of P. notoginseng. Comparing models built on different spectral ranges and different numbers of samples showed that recognition was best when 8 samples from each planting point were used as training samples with the spectrum in the range 1500-1200 cm-1: the correct rate of returned classification reached 99.0%, and the prediction accuracy was 76.5%. The results indicate that infrared spectroscopy combined with discriminant analysis gives good recognition of the genuineness of P. notoginseng and may become a practical new method for its identification. The method can also recognize the producing area of P. notoginseng to some extent, suggesting a new approach for identification of producing area.

  19. Commercial video frame rates can produce reliable results for both normal and CP spastic gait's spatiotemporal, angular, and linear displacement variables.

    PubMed

    Nikodelis, Thomas; Moscha, Dimitra; Metaxiotis, Dimitris; Kollias, Iraklis

    2011-08-01

    To investigate what sampling frequency is adequate for gait analysis, the correlation of spatiotemporal parameters and the kinematic differences between normal and CP spastic gait were assessed at three sampling frequencies (100 Hz, 50 Hz, 25 Hz). Spatiotemporal, angular, and linear displacement variables in the sagittal plane, along with their 1st and 2nd derivatives, were analyzed. Spatiotemporal stride parameters were highly correlated among the three sampling frequencies. The statistical model (2 × 3 ANOVA) gave no interactions between the factors group and frequency, indicating that group differences were invariant of sampling frequency. Lower frequencies led to smoother curves for all variables, though with a loss of information, especially for the 2nd derivatives, with an effect analogous to oversmoothing. It is proposed that when only spatiotemporal stride parameters and angular and linear displacements are to be used in gait reports, commercial video camera speeds (25/30 Hz, or 50/60 Hz when deinterlaced) can be considered a low-cost solution that produces acceptable results.

  20. Development of Tm-shift genotyping method for detection of cat-derived Giardia lamblia.

    PubMed

    Pan, Weida; Fu, Yeqi; Abdullahi, Auwalu Yusuf; Wang, Mingwei; Shi, Xianli; Yang, Fang; Yu, Xingang; Yan, Xinxin; Zhang, Pan; Hang, Jianxiong; Li, Guoqing

    2017-04-01

    To develop a Tm-shift genotyping method for detection of cat-derived Giardia lamblia, two sets of primers with two GC-rich tails of unequal length attached to their 5'-ends were designed according to two SNPs (BG434 and BG170) of the β-giardin (bg) gene, and specific PCR products were identified by inspection of a melting curve on a real-time PCR thermocycler. A series of experiments on the stability, sensitivity, and accuracy of the Tm-shift method was conducted, and clinical samples were also tested. The results showed that the two SNP-based primer sets could distinguish accurately between assemblages A and F. The coefficient of variation of Tm values for assemblages A and F was 0.14 and 0.07% in BG434 and 0.10 and 0.11% in BG170, respectively. The lowest detectable concentrations were 4.52 × 10^-5 and 4.88 × 10^-5 ng/μL for the assemblage A and F standard plasmids. The Tm-shift genotyping results of ten DNA samples from cat-derived G. lamblia were consistent with their known genotypes. The detection rate in clinical samples by Tm-shift was higher than that by microscopy, and the genotyping results were in complete accordance with sequencing results. It is concluded that the Tm-shift genotyping method is rapid, specific, and sensitive and may provide a new technological means for molecular detection and epidemiological investigation of cat-derived G. lamblia.

  1. Graphene-sensitized microporous membrane/solvent microextraction for the preconcentration of cinnamic acid derivatives in Rhizoma Typhonii.

    PubMed

    Xing, Rongrong; Hu, Shuang; Chen, Xuan; Bai, Xiaohong

    2014-09-01

    A novel graphene-sensitized microporous membrane/solvent microextraction method named microporous membrane/graphene/solvent synergistic microextraction, coupled with high-performance liquid chromatography and UV detection, was developed and introduced for the extraction and determination of three cinnamic acid derivatives in Rhizoma Typhonii. Several factors affecting performance were investigated and optimized, including the types of graphene and extraction solvent, concentration of graphene dispersed in octanol, sample phase pH, ionic strength, stirring rate, extraction time, extraction temperature, and sample volume. Under optimized conditions, the enrichment factors of cinnamic acid derivatives ranged from 75 to 269. Good linearities were obtained from 0.01 to 10 μg/mL for all analytes with regression coefficients between 0.9927 and 0.9994. The limits of quantification were <1 ng/mL, and satisfactory recoveries (99-104%) and precision (1.1-10.8%) were also achieved. The synergistic microextraction mechanism based on graphene sensitization was analyzed and described. The experimental results showed that the method was simple, sensitive, practical, and effective for the preconcentration and determination of cinnamic acid derivatives in Rhizoma Typhonii. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. The Type Ia Supernova Rate and Delay-Time Distribution

    NASA Astrophysics Data System (ADS)

    Graur, Or

    2013-11-01

    The nature of the progenitor stellar systems of thermonuclear, or Type Ia, supernovae (SNe Ia) remains unknown. Unlike core-collapse (CC) SNe, which have been successfully linked, at least partially, to various types of massive stars, the progenitors of SNe Ia are to date undetected in pre-explosion images and the nature of these progenitors can only be probed using indirect methods. In this thesis, I present three SN surveys aimed at measuring the rates at which SNe Ia explode at different times throughout the Universe's history and in different types of galaxies. I use these rates to reconstruct the SN Ia delay-time distribution (DTD), a function that connects the star-formation history (SFH) of a specific stellar environment with its SN Ia rate, and I use it to constrain different progenitor models. In Chapter 1, I provide a brief introduction to the field. This is followed, in Chapter 2, by a description of the Subaru Deep Field (SDF) SN Survey. Over a period of three years between 2005 and 2008, the SDF was observed on four independent epochs with Suprime-Cam on the Subaru 8.2-m telescope, with two nights of exposure per epoch, in the R, i', and z' bands. In this survey, I discover 150 SNe out to redshift z ~ 2, including 27 SNe Ia in the range 1.0 < z < 1.5 and 10 in the range 1.5 < z < 2.0. The SN Ia rate measurements from this sample are consistent with those derived from the Hubble Space Telescope (HST) GOODS sample, but the overall uncertainty of the 1.5 < z < 2.0 measurement is a factor of 2 smaller, at 35-50%. Based on this sample, we find that the SN Ia rate evolution levels off at 1.0 < z < 2.0, but shows no sign of declining. Combining our SN Ia rate measurements with those from the literature, and comparing to a wide range of possible SFHs, the best-fitting DTD is a power law of the form Psi(t) ~ t^beta, with index beta = -1.1 ± 0.1 (statistical) ± 0.17 (systematic). 
By combining the contribution from CC SNe, based on the wide range of SFHs, with that from SNe Ia, calculated with the best-fitting DTD, we map the cosmic history of iron accumulation and predict that the mean present-day cosmic iron abundance is in the range Z_Fe = (0.09-0.37) Z_Fe,solar. Most SNe have been discovered in dedicated imaging surveys and have been classified by means of follow-up spectroscopy. However, it is also possible to combine the discovery and classification stages by means of a spectroscopic SN survey. In Chapter 3, I develop a method to detect SN spectra buried in galaxy spectra acquired by large-scale spectroscopic galaxy surveys. Applying this procedure to the ~700,000 galaxy spectra in the 7th Data Release of the Sloan Digital Sky Survey (SDSS) that have SFHs derived with the VErsatile SPectral Analysis code (VESPA), I detect 90 SNe Ia and 10 Type II SNe. I use the SN Ia sample to measure SN Ia rates per unit stellar mass and confirm, at the median redshift of the sample, z = 0.1, the inverse dependence on galaxy mass of the SN Ia rate per unit mass, previously reported by Li et al. (2011a) for a local sample. I further confirm, following Kistler et al. (2013), that this relation can be explained by the combination of galaxy "downsizing" and a power-law DTD with an index of -1. Finally, I use the SN sample, combined with the individual galaxy SFHs, to derive the late component of the DTD, finding a value consistent with previous derivations. Chapter 4 presents the near-final SN sample and SN Ia rates from the Cluster Lensing And Supernova survey with Hubble (CLASH). Using the Advanced Camera for Surveys and the Wide Field Camera 3 on HST, we image 25 galaxy clusters and blank fields of galaxies. I report a sample of 22 SNe discovered in the blank fields around 20 of the 25 galaxy clusters. Of these, 11 are classified as SNe Ia, including four SNe Ia at redshifts z > 1.2. 
I measure volumetric SN Ia rates out to redshift z = 1.8 and add the first upper limit on the SN Ia rate in the range 1.8 < z < 2.4. The results are consistent with the rates I measure in Chapter 2 and with those from the HST/GOODS survey. Together with the most accurate and precise measurements at redshifts z < 1, they result in a best-fitting power-law DTD with an index of -0.93 +0.05(0.11) -0.06(0.12) (statistical) +0.12 -0.08 (systematic). The results of Chapters 2-4, summarized in Chapter 5, join other recent evidence suggestive of the double-degenerate progenitor scenario. A power-law DTD with an index of ~-1 can be explained by the gravitational merger of two carbon-oxygen white dwarfs. However, this form of the DTD does not necessarily exclude other progenitor scenarios or the possibility that there is more than one SN Ia production channel. In Chapter 5, I describe ongoing and future work that addresses this problem. Specifically, it may be possible to infer the existence of multiple production channels by studying the prompt component of the DTD. This can be achieved either by measuring volumetric SN Ia rates at higher redshifts than presented here, or by measuring SN Ia rates per unit mass in low-mass, dwarf galaxies. I present an initial sample of four SNe Ia discovered among ~52,000 SDSS galaxy spectra using the procedure developed in Chapter 3. The rate measured with this sample is not accurate enough to distinguish between DTD models, but it shows that with a larger galaxy sample, such as is being acquired by future iterations of the SDSS, such distinction will be possible. Finally, I show in Chapter 5 initial results from a program to obtain spectroscopic redshifts for the SN host galaxies in Chapter 2 with the highest photometric-based redshifts. This will eventually reduce the systematic error in the high-redshift SN Ia rate.

  3. Prediction of near-term increases in suicidal ideation in recently depressed patients with bipolar II disorder using intensive longitudinal data.

    PubMed

    Depp, Colin A; Thompson, Wesley K; Frank, Ellen; Swartz, Holly A

    2017-01-15

    There are substantial gaps in understanding near-term precursors of suicidal ideation in bipolar II disorder. We evaluated whether repeated patient-reported mood and energy ratings predicted subsequent near-term increases in suicide ideation. Secondary data were used from 86 depressed adults with bipolar II disorder enrolled in one of 3 clinical trials evaluating Interpersonal and Social Rhythm Therapy and/or pharmacotherapy as treatments for depression. Twenty weeks of daily mood and energy ratings and weekly Hamilton Depression Rating Scale (HDRS) were obtained. Penalized regression was used to model trajectories of daily mood and energy ratings in the 3 week window prior to HDRS Suicide Item ratings. Participants completed an average of 68.6 (sd=52) days of mood and energy ratings. Aggregated across the sample, 22% of the 1675 HDRS Suicide Item ratings were non-zero, indicating presence of at least some suicidal thoughts. A cross-validated model with longitudinal ratings of energy and depressed mood within the three weeks prior to HDRS ratings resulted in an AUC of 0.91 for HDRS Suicide item >2, accounting for twice the variation when compared to baseline HDRS ratings. Energy, both at low and high levels, was an earlier predictor than mood. Data derived from a heterogeneous treated sample may not generalize to naturalistic samples. Identified suicidal behavior was absent from the sample so it could not be predicted. Prediction models coupled with intensively gathered longitudinal data may shed light on the dynamic course of near-term risk factors for suicidal ideation in bipolar II disorder. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Radar-derived asteroid shapes point to a 'zone of stability' for topography slopes and surface erosion rates

    NASA Astrophysics Data System (ADS)

    Richardson, J.; Graves, K.; Bowling, T.

    2014-07-01

    Previous studies of the combined effects of asteroid shape, spin, and self-gravity have focused primarily upon the failure limits for bodies with a variety of standard shapes, friction, and cohesion values [1,2,3]. In this study, we look in the opposite direction and utilize 22 asteroid shape-models derived from radar inversion [4] and 7 small body shape-models derived from spacecraft observations [5] to investigate the region in shape/spin space [1,2] wherein self-gravity and rotation combine to produce a stable minimum state with respect to surface potential differences, dynamic topography, slope magnitudes, and erosion rates. This erosional minimum state is self-correcting, such that changes in the body's rotation rate, either up or down, will increase slope magnitudes across the body, thereby driving up erosion rates non-linearly until the body has once again reached a stable, minimized surface state [5]. We investigated this phenomenon in a systematic fashion using a series of synthesized, increasingly prolate spheroid shape models. Adjusting the rotation rate of each synthetic shape to minimize surface potential differences, dynamic topography, and slope magnitudes results in the magenta curve of the figure (right side), defining the zone of maximum surface stability (MSS). This MSS zone is invariant both with respect to body size (gravitational potential and rotational potential scale together with radius), and density when the scaled-spin of [2] is used. Within our sample of observationally derived small-body shape models, slow rotators (Group A: blue points), that are not in the maximum surface stability (MSS) zone and where gravity dominates the slopes, will generally experience moderate erosion rates (left plot) and will tend to move up and to the right in shape/spin space as the body evolves (right plot). 
Fast rotators (Group C: red points), that are not in the MSS zone and where spin dominates the slopes, will generally experience high erosion rates (left plot) and will tend to move down and to the left in shape/spin space as the body evolves (right plot), barring other influences such as YORP spin-up [6]. Moderate rotators (Group B: green points) have slopes that are influenced equally by gravity and spin, lie in or near the self-correcting MSS zone (right plot), and will generally experience the lowest erosion rates (left plot). These objects comprise 12 (43%) of the 28 bodies studied, perhaps indicating some prevalence for the MSS zone. On the other hand, a sample of 1300 asteroid shape and spin parameters (small grey points), derived from asteroid lightcurve data [7], do not show this same degree of correlation, perhaps indicating the relative weakness of erosion-driven shape modification as compared to other influences. We will continue to investigate this phenomenon as the number of detailed shape models from ground-based radar and other observations continues to increase.

  5. The wind speeds, dust content, and mass-loss rates of evolved AGB and RSG stars at varying metallicity

    NASA Astrophysics Data System (ADS)

    Goldman, Steven R.; van Loon, Jacco Th.; Zijlstra, Albert A.; Green, James A.; Wood, Peter R.; Nanni, Ambra; Imai, Hiroshi; Whitelock, Patricia A.; Matsuura, Mikako; Groenewegen, Martin A. T.; Gómez, José F.

    2017-02-01

    We present the results of our survey of 1612-MHz circumstellar OH maser emission from asymptotic giant branch (AGB) stars and red supergiants (RSGs) in the Large Magellanic Cloud (LMC). We have discovered four new circumstellar maser sources in the LMC, and increased the number of reliable wind speeds from infrared (IR) stars in the LMC from 5 to 13. Using our new wind speeds, as well as those from Galactic sources, we have derived an updated relation for dust-driven winds: v_exp ∝ Z L^0.4. We compare the subsolar-metallicity LMC OH/IR stars with carefully selected samples of more metal-rich OH/IR stars, also at known distances, in the Galactic Centre and Galactic bulge. We derive pulsation periods for eight of the bulge stars for the first time by using near-IR photometry from the Vista Variables in the Via Lactea survey. We have modelled our LMC OH/IR stars and developed an empirical method of deriving gas-to-dust ratios and mass-loss rates by scaling the models to the results from maser profiles. We have done this also for samples in the Galactic Centre and bulge and derived a new mass-loss prescription which includes luminosity, pulsation period, and gas-to-dust ratio: Ṁ = 1.06 (+3.5/-0.8) × 10^-5 (L/10^4 L_⊙)^(0.9 ± 0.1) (P/500 d)^(0.75 ± 0.3) (r_gd/200)^(-0.03 ± 0.07) M_⊙ yr^-1. The tightest correlation is found between mass-loss rate and luminosity. We find that the gas-to-dust ratio has little effect on the mass-loss of oxygen-rich AGB stars and RSGs within the Galaxy and the LMC. This suggests that the mass-loss of oxygen-rich AGB stars and RSGs is (nearly) independent of metallicity between half and twice solar.
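    The mass-loss prescription quoted in this abstract can be evaluated directly. The sketch below uses only the central values and drops the quoted uncertainties on the normalization and exponents.

```python
def mass_loss_rate(L, P, r_gd):
    """Central value of the abstract's mass-loss prescription, in solar
    masses per year: L in solar luminosities, P the pulsation period in
    days, r_gd the gas-to-dust ratio.  Quoted uncertainties are dropped."""
    return (1.06e-5 * (L / 1e4) ** 0.9
            * (P / 500.0) ** 0.75
            * (r_gd / 200.0) ** -0.03)

# At the reference point the prescription returns its normalization,
# and the tiny gas-to-dust exponent makes r_gd almost irrelevant.
mdot_ref = mass_loss_rate(1e4, 500.0, 200.0)   # -> 1.06e-5
```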

  6. Ammonia 15N/14N Isotope Ratio in the Jovian Atmosphere

    NASA Technical Reports Server (NTRS)

    Mahaffy, P.R.; Niemann, H. B.; Atreya, S. K.; Wong, M. H.; Owen, T. C; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Data from the Galileo Probe Mass Spectrometer have been used to derive the N-15/N-14 isotope ratio in ammonia at Jupiter. Although the mass spectral interference from the water contribution at 18 amu makes an accurate derivation of the (N-15)H3/(N-14)H3 ratio difficult from measurements of the singly ionized signals at 18 and 17 amu, this interference is not present in the doubly charged 8.5 and 9.0 amu signals from (N-14)H3++ and (N-15)H3++, respectively. Although the count rate of the 9 amu signal is low during direct sampling of the atmosphere, the ammonia signal was considerably enhanced during the first enrichment cell (EC1) experiment, which measured gas sampled between 0.8 and 2.8 bar. Count rates at 9 amu in the EC1 experiment reach 60/second and measure ammonia sampled from 0.88 to 2.8 bar. In the EC1 measurements the 8.5 amu signal is not measured directly, but can be calculated from the ammonia contribution to 17 amu and the ratio of doubly to singly charged NH3 ions observed during a high-resolution mass scan taken near the end of the descent. The high-resolution scan gives this ratio from ammonia sampled much deeper in the atmosphere. These results are described and compared with Infrared Space Observatory-Short Wavelength Spectrometer (ISO-SWS) observations, which give this ratio at 400 mbar.

  7. Peptidomic analysis of endogenous plasma peptides from patients with pancreatic neuroendocrine tumours.

    PubMed

    Kay, Richard G; Challis, Benjamin G; Casey, Ruth T; Roberts, Geoffrey P; Meek, Claire L; Reimann, Frank; Gribble, Fiona M

    2018-06-01

    Diagnosis of pancreatic neuroendocrine tumours requires the study of patient plasma with multiple immunoassays, using multiple aliquots of plasma. The application of mass spectrometry based techniques could reduce the cost and amount of plasma required for diagnosis. Plasma samples from two patients with pancreatic neuroendocrine tumours were extracted using an established acetonitrile based plasma peptide enrichment strategy. The circulating peptidome was characterised using nano and high flow rate LC/MS analyses. To assess the diagnostic potential of the analytical approach, a large sample batch (68 plasmas) from control subjects, and aliquots from subjects harbouring two different types of pancreatic neuroendocrine tumour (insulinoma and glucagonoma) were analysed using a 10-minute LC/MS peptide screen. The untargeted plasma peptidomics approach identified peptides derived from the glucagon prohormone, chromogranin A, chromogranin B and other peptide hormones and proteins related to control of peptide secretion. The glucagon prohormone derived peptides that were detected were compared against putative peptides that were identified using multiple antibody pairs against glucagon peptides. Comparison of the plasma samples for relative levels of selected peptides showed clear separation between the glucagonoma and the insulinoma and control samples. The combination of the organic solvent extraction methodology with high flow rate analysis could potentially be used to aid diagnosis and monitor treatment of patients with functioning pancreatic neuroendocrine tumours. However, significant validation will be required before this approach can be clinically applied. This article is protected by copyright. All rights reserved.

  8. Height and seasonal growth pattern of jack pine full-sib families

    Treesearch

    Don E. Riemenschneider

    1981-01-01

    Total tree height, seasonal shoot elongation, dates of growth initiation and cessation, and mean daily growth rate were measured and analyzed for a population of jack pine full-sib families derived from inter-provenance crosses. Parental provenance had no effect on these variables, although this may have been due to small sample size. Progenies differed significantly...

  9. The Roots of Plantation Cottonwood: Their Characteristics and Properties

    Treesearch

    John K. Francis

    1985-01-01

    The root biomass, its distribution, and the growth rate of roots of pulpwood-size cottonwood (Populus deltoides) in plantations were estimated by excavation and sampling. About 27 percent of the total biomass was in root tissue. Equations for predicting stump-taproot dry weight from d.b.h. and top dry weight were derived. Lateral roots in two...

  10. Long-term erosion rates of Panamanian drainage basins determined using in situ 10Be

    NASA Astrophysics Data System (ADS)

    Gonzalez, Veronica Sosa; Bierman, Paul R.; Nichols, Kyle K.; Rood, Dylan H.

    2016-12-01

    Erosion rates of tropical landscapes are poorly known. Using measurements of in situ-produced 10Be in quartz extracted from river and landslide sediment samples, we calculate long-term erosion rates for many physiographic regions of Panama. We collected river sediment samples from a wide variety of watersheds (n = 35), and then quantified 24 landscape-scale variables (physiographic, climatic, seismic, geologic, and land-use proxies) for each watershed before determining the relationship between these variables and long-term erosion rates using linear regression, multiple regression, and analysis of variance (ANOVA). We also used grain-size-specific 10Be analysis to infer the effect of landslides on the concentration of 10Be in fluvial sediment and thus on erosion rates. Cosmogenic 10Be-inferred background erosion rates in Panama range from 26 to 595 m My^-1, with an arithmetic average of 201 m My^-1 and an area-weighted average of 144 m My^-1. The strongest and most significant relationship in the dataset was between erosion rate and silicate weathering rate, the mass of material leaving the basin in solution. None of the topographic variables showed a significant relationship with erosion rate at the 95% significance level; we observed weak but significant correlation between erosion rates and several climatic variables related to precipitation and temperature. On average, erosion rates in Panama are higher than other cosmogenically derived erosion rates in tropical climates, including those from Puerto Rico, Madagascar, Australia, and Sri Lanka, likely the result of Panama's active tectonic setting and thus high rates of seismicity and uplift. Contemporary sediment yields and cosmogenically derived erosion rates for three of the rivers we studied are similar, suggesting that human activities are not increasing sediment yield above long-term erosion rate averages in Panama. 
10Be concentration is inversely proportional to grain size in landslide and fluvial samples from Panama; finer grain sizes from landslide material have lower 10Be concentration than fine-grained fluvial sediment. Large grains from both landslide and stream sediments have similarly low 10Be concentrations. These data suggest that fluvial gravel is delivered to the channel by landslides whereas sand is preferentially delivered by soil creep and bank collapse. Furthermore, the difference in 10Be concentration in sand-sized material delivered by soil creep and that delivered by landsliding suggests that the frequency and intensity of landslides influence basin scale erosion rates.
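The steady-state relationship behind such cosmogenic estimates can be sketched numerically. The production rate, attenuation length, and density below are generic illustrative values, not the calibrated inputs used in the study:

```python
import math

# Steady-state cosmogenic nuclide balance: N = P / (lam + rho * eps / LAMBDA)
# => eps = (LAMBDA / rho) * (P / N - lam)
LAM_BE10 = math.log(2) / 1.387e6   # 10Be decay constant (1/yr), t_1/2 ~ 1.387 Myr

def erosion_rate_m_per_myr(N, P=4.0, attenuation=160.0, density=2.7):
    """N: 10Be concentration (atoms/g quartz); P: surface production rate
    (atoms/g/yr); attenuation length (g/cm^2); density (g/cm^3).
    Returns erosion rate in m/Myr (1 cm/yr = 1e4 m/Myr)."""
    eps_cm_per_yr = (attenuation / density) * (P / N - LAM_BE10)
    return eps_cm_per_yr * 1.0e4

# Lower 10Be concentration implies faster erosion
for conc in (5.0e4, 2.0e4):
    print(f"N = {conc:.1e} atoms/g -> {erosion_rate_m_per_myr(conc):.1f} m/Myr")
```

The inverse dependence of concentration on erosion rate in this formula is exactly why the grain-size contrasts described above carry information about landslide inputs.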

  11. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
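A single-analysis sketch of power for two co-primary endpoints (ignoring the group-sequential interim structure) can be written as a conditional-normal integral over correlated test statistics; the effect sizes, correlation, and alpha level below are invented for illustration:

```python
from statistics import NormalDist

STD = NormalDist()

def copri_power(n, delta1, delta2, rho, alpha=0.025, steps=2000):
    """P(Z1 > c and Z2 > c) for bivariate normal statistics with means
    delta_k * sqrt(n), unit variances, correlation rho. Integrates
    P(Z2 > c | Z1 = z) * phi(z - m1) over z > c (midpoint rule)."""
    c = STD.inv_cdf(1.0 - alpha)
    m1, m2 = delta1 * n ** 0.5, delta2 * n ** 0.5
    s = (1.0 - rho * rho) ** 0.5
    lo, hi = c, max(c, m1) + 8.0
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        z = lo + (i + 0.5) * h
        cond = 1.0 - STD.cdf((c - m2 - rho * (z - m1)) / s)
        total += STD.pdf(z - m1) * cond * h
    return total

def copri_n(delta1, delta2, rho, power=0.8):
    """Smallest per-group n giving joint power >= target."""
    n = 2
    while copri_power(n, delta1, delta2, rho) < power:
        n += 1
    return n

print("required n:", copri_n(0.3, 0.3, 0.5))
```

Because superiority must hold on both endpoints, the required n always exceeds the single-endpoint sample size, which is the central trade-off the paper quantifies.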

  12. Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2012-09-01

This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristic of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be obtained by solving a set of linear matrix inequalities (LMIs), which depend upon the maximum sampling interval and the decay rate. The obtained conditions are not only less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.

  13. Lot quality assurance sampling for monitoring immunization programmes: cost-efficient or quick and dirty?

    PubMed

    Sandiford, P

    1993-09-01

    In recent years Lot quality assurance sampling (LQAS), a method derived from production-line industry, has been advocated as an efficient means to evaluate the coverage rates achieved by child immunization programmes. This paper examines the assumptions on which LQAS is based and the effect that these assumptions have on its utility as a management tool. It shows that the attractively low sample sizes used in LQAS are achieved at the expense of specificity unless unrealistic assumptions are made about the distribution of coverage rates amongst the immunization programmes to which the method is applied. Although it is a very sensitive test and its negative predictive value is probably high in most settings, its specificity and positive predictive value are likely to be low. The implications of these strengths and weaknesses with regard to management decision-making are discussed.
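The sensitivity/specificity trade-off described above can be made concrete with the binomial operating characteristics of an LQAS plan; the sample size and decision threshold below are illustrative, not taken from the paper:

```python
from math import comb

def p_accept(n, d, coverage):
    """Probability a programme is judged acceptable when true immunization
    coverage is `coverage`: accept if at most d of n sampled children
    are unvaccinated (exact binomial tail)."""
    q = 1.0 - coverage  # probability a sampled child is unvaccinated
    return sum(comb(n, k) * q**k * (1 - q)**(n - k) for k in range(d + 1))

# Illustrative plan: sample n = 19 children, accept if <= 3 unvaccinated
for cov in (0.5, 0.65, 0.8, 0.9):
    print(f"true coverage {cov:.0%}: P(accept) = {p_accept(19, 3, cov):.3f}")
```

With this plan a 50%-coverage programme is almost never accepted (high sensitivity to poor coverage), but an 80%-coverage programme is accepted less than half the time, which is the low specificity the paper warns about.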

  14. Aseptic minimum volume vitrification technique for porcine parthenogenetically activated blastocyst.

    PubMed

    Lin, Lin; Yu, Yutao; Zhang, Xiuqing; Yang, Huanming; Bolund, Lars; Callesen, Henrik; Vajta, Gábor

    2011-01-01

Minimum volume vitrification may provide extremely high cooling and warming rates if the sample and the surrounding medium contact directly with the liquid nitrogen and the warming medium, respectively. However, this direct contact may result in microbial contamination. In this work, an earlier aseptic technique was applied to minimum volume vitrification. After equilibration, samples were loaded on a plastic film, immersed rapidly into factory-derived, filter-sterilized liquid nitrogen, and sealed into sterile, pre-cooled straws. At warming, the straw was cut, the filmstrip was immersed into a 39 °C warming medium, and the sample was stepwise rehydrated. Cryosurvival rates of porcine blastocysts produced by parthenogenetic activation did not differ from those of control blastocysts vitrified with the Cryotop. This approach can be used for minimum volume vitrification methods and may be suitable to overcome the biological dangers and legal restrictions that hamper the application of open vitrification techniques.

  15. Pyrolysis responses of kevlar/epoxy composite materials on laser irradiating

    NASA Astrophysics Data System (ADS)

    Liu, Wei-ping; Wei, Cheng-hua; Zhou, Meng-lian; Ma, Zhi-liang; Song, Ming-ying; Wu, Li-xiong

    2017-05-01

Because kevlar/epoxy composite materials are widely used, their pyrolysis responses under high heating rates are valuable to study. In contrast to the thermogravimetric analysis method, an apparatus was built to study the pyrolysis responses of kevlar/epoxy composite materials irradiated by laser, which provides a high heating rate of the sample. Using this apparatus, a near real-time gas pressure response can be obtained. The sample mass is weighed before laser irradiation and after each experiment. From these measurements, the molecular weight of the gas products and the evolution of the sample mass loss are derived. It is found that the pressure and mass of the gas products increase with laser power below 240 W, while the molecular weight varies inversely. Above 240 W the trends are unclear, and deeper investigation is needed to explain them.
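The molecular-weight step described above amounts to an ideal-gas calculation from the measured pressure rise and mass loss; the chamber volume, temperature, and readings below are invented for illustration:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def gas_molar_mass(mass_loss_kg, delta_p_pa, volume_m3, temp_k):
    """Ideal-gas estimate: delta_p * V = n * R * T with n = m / M,
    so M = m * R * T / (delta_p * V). Returns kg/mol."""
    return mass_loss_kg * R * temp_k / (delta_p_pa * volume_m3)

# Hypothetical chamber: 2 L at 600 K, 0.5 g mass loss, 25 kPa pressure rise
M = gas_molar_mass(0.5e-3, 25e3, 2e-3, 600.0)
print(f"mean molar mass of gas products: {M * 1000:.1f} g/mol")
```

A real measurement would also correct for the chamber temperature gradient and any condensable products, which the ideal-gas sketch ignores.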

  16. Demonstration of a Novel Method for Measuring Mass-loss Rates for Massive Stars

    NASA Astrophysics Data System (ADS)

    Kobulnicky, Henry A.; Chick, William T.; Povich, Matthew S.

    2018-03-01

The rate at which massive stars eject mass in stellar winds significantly influences their evolutionary path. Cosmic rates of nucleosynthesis, explosive stellar phenomena, and compact object genesis depend on this poorly known facet of stellar evolution. We employ an unexploited observational technique for measuring the mass-loss rates of O and early-B stars. Our approach, which has no adjustable parameters, uses the principle of pressure equilibrium between the stellar wind and the ambient interstellar medium for a high-velocity star generating an infrared bow shock nebula. Results for 20 bow-shock-generating stars show good agreement with two sets of theoretical predictions for O5–O9.5 main-sequence stars, yielding Ṁ = 1.3 × 10⁻⁶ to 2 × 10⁻⁹ M⊙ yr⁻¹. Although Ṁ values derived for this sample are smaller than theoretical expectations by a factor of about two, this discrepancy is greatly reduced compared to canonical mass-loss methods. Bow-shock-derived mass-loss rates are factors of 10 smaller than Hα-based measurements (uncorrected for clumping) for similar stellar types and are nearly an order of magnitude larger than P4+ and some other diagnostics based on UV absorption lines. Ambient interstellar densities of at least several cm⁻³ appear to be required for formation of a prominent infrared bow shock nebula. Measurements of Ṁ for early-B stars are not yet compelling owing to the small number in our sample and the lack of clear theoretical predictions in the regime of lower stellar luminosities. These results may constitute a partial resolution of the extant “weak-wind problem” for late-O stars. The technique shows promise for determining mass-loss rates in the weak-wind regime.

  17. Simple Model for Detonation Energy and Rate

    NASA Astrophysics Data System (ADS)

    Lauderbach, Lisa M.; Souers, P. Clark

    2017-06-01

A simple model is used to derive the Eyring equation for the size effect and detonation rate, which depends on a constant energy density. The rate derived from detonation velocities is then converted into a rate constant to be used in a reactive flow model. The rate might be constant if the size effect curve is straight, but the rate constant will change with the radius of the sample and cannot be a constant. This is based on many careful cylinder tests that have been run recently on LX-17 with inner copper diameters ranging from 12.7 to 101.6 mm. Copper wall velocities at scaled displacements of 6, 12.5 and 19 mm equate to values at relative volumes of 2.4, 4.4 and 7.0. At each point, the velocities from 25.4 to 101.6 mm are constant within error whereas the 12.7 mm velocities are lower. Using the updated Gurney model, the energy densities at the three larger sizes are also constant. Similar behavior has been seen in LX-14, LX-04, and an 83% RDX mix. A rough saturation has also been seen in old ANFO data for diameters of 101.6 mm and larger. Although the energy densities saturate, the detonation velocities continue to increase with size. These observations suggest that maximum energy density is a constant for a given explosive of a given density. The correlation of energy density with detonation velocity is not good because the latter depends on the total energy of the sample. This work was performed under the auspices of the U. S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  18. Optimizing ultrafast illumination for multiphoton-excited fluorescence imaging

    PubMed Central

    Stoltzfus, Caleb R.; Rebane, Aleksander

    2016-01-01

We study the optimal conditions for high throughput two-photon excited fluorescence (2PEF) and three-photon excited fluorescence (3PEF) imaging using femtosecond lasers. We derive relations that allow maximization of the rate of imaging depending on the average power, pulse repetition rate, and noise characteristics of the laser, as well as on the size and structure of the sample. We perform our analysis using ~100 MHz, ~1 MHz and 1 kHz pulse rates and using both a tightly-focused illumination beam with diffraction-limited image resolution, as well as loosely focused illumination with a relatively low image resolution, where the latter utilizes separate illumination and fluorescence detection beam paths. Our theoretical estimates agree with the experiments, which makes our approach especially useful for optimizing high throughput imaging of large samples with a field-of-view up to 10 × 10 cm². PMID:27231620

  19. Derivation and validation of a multivariate model to predict mortality from pulmonary embolism with cancer: the POMPE-C tool

    PubMed Central

    Roy, Pierre-Marie; Than, Martin P.; Hernandez, Jackeline; Courtney, D. Mark; Jones, Alan E.; Penazola, Andrea; Pollack, Charles V.

    2012-01-01

Background Clinical guidelines recommend risk stratification of patients with acute pulmonary embolism (PE). Active cancer increases risk of PE and worsens prognosis, but also causes incidental PE that may be discovered during cancer staging. No quantitative decision instrument has been derived specifically for patients with active cancer and PE. Methods A classification and regression tree technique was used to reduce 25 variables prospectively collected from 408 patients with active cancer and PE. Selected variables were transformed into a logistic regression model, termed POMPE-C, and compared with the pulmonary embolism severity index (PESI) score to predict the outcome variable of death within 30 days. Validation was performed in an independent sample of 182 patients with active cancer and PE. Results POMPE-C included eight predictors: body mass, heart rate >100, respiratory rate, SaO2%, respiratory distress, altered mental status, do not resuscitate status, and unilateral limb swelling. In the derivation set, the area under the ROC curve for POMPE-C was 0.84 (95% CI: 0.82-0.87), significantly greater than PESI (0.68, 0.60-0.76). In the validation sample, POMPE-C had an AUC of 0.86 (0.78-0.93). No patient with POMPE-C estimate ≤5% died within 30 days (0/50, 0-7%), whereas 10/13 (77%, 46-95%) with POMPE-C estimate >50% died within 30 days. Conclusion In patients with active cancer and PE, POMPE-C demonstrated good prognostic accuracy for 30 day mortality and better performance than PESI. If validated in a large sample, POMPE-C may provide a quantitative basis to decide treatment options for PE discovered during cancer staging and with advanced cancer. PMID:22475313
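The AUC figures compared above can be computed with the rank-based (Mann-Whitney) estimator; the scores and outcomes below are invented, not POMPE-C outputs:

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of the area under the ROC curve: the
    fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical mortality-probability outputs: 4 deaths, 6 survivors
died = [0.82, 0.55, 0.61, 0.30]
survived = [0.05, 0.12, 0.30, 0.22, 0.48, 0.09]
print(f"AUC = {auc(died, survived):.3f}")
```

An AUC of 0.84 as reported for POMPE-C means a randomly chosen patient who died would outrank a randomly chosen survivor 84% of the time under this estimator.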

  20. Improved symbol rate identification method for on-off keying and advanced modulation format signals based on asynchronous delayed sampling

    NASA Astrophysics Data System (ADS)

    Cui, Sheng; Jin, Shang; Xia, Wenjuan; Ke, Changjian; Liu, Deming

    2015-11-01

Symbol rate identification (SRI) based on asynchronous delayed sampling is accurate, cost-effective and robust to impairments. For on-off keying (OOK) signals the symbol rate can be derived from the periodicity of the second-order autocorrelation function (ACF2) of the delay tap samples. However, when this method is applied to advanced modulation format signals with auxiliary amplitude modulation (AAM), incorrect results may be produced because AAM has a significant impact on the ACF2 periodicity, making the symbol period harder or even impossible to identify correctly. In this paper it is demonstrated that for these signals the first-order autocorrelation function (ACF1) has stronger periodicity and can be used in place of ACF2 to produce more accurate and robust results. Utilizing the characteristics of the ACFs, an improved SRI method is proposed to accommodate both OOK and advanced modulation format signals in a transparent manner. Furthermore, it is proposed that by minimizing the peak-to-average power ratio (PAPR) of the delay tap samples with an additional tunable dispersion compensator (TDC), the limited dispersion tolerance can be expanded to desired values.
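The idea of reading a symbol rate off the periodicity of an autocorrelation function can be illustrated with a toy synchronously sampled RZ-OOK waveform; this is a simplification standing in for the paper's asynchronous delay-tap scheme, and all waveform parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RZ-OOK intensity waveform: 20 samples per symbol,
# a 4-sample pulse is emitted for every '1' bit.
SPB = 20
bits = rng.integers(0, 2, size=2000)
pulse = np.array([0.5, 1.0, 1.0, 0.5])
x = np.zeros(len(bits) * SPB)
for i, b in enumerate(bits):
    if b:
        x[i * SPB : i * SPB + len(pulse)] += pulse
x -= x.mean()

# Circular autocorrelation via FFT (Wiener-Khinchin theorem)
acf = np.fft.irfft(np.abs(np.fft.rfft(x)) ** 2)

# The first strong ACF peak past the pulse support marks the symbol
# period (searching lags up to 2*SPB assumes a rough rate bound)
lo = len(pulse) + 1
lag = lo + int(np.argmax(acf[lo : 2 * SPB]))
print("estimated samples per symbol:", lag)
```

For formats with auxiliary amplitude modulation the extra amplitude levels distort exactly this peak structure, which is why the paper switches from ACF2 to ACF1.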

  1. Creep Behavior of Passive Bovine Extraocular Muscle

    PubMed Central

    Yoo, Lawrence; Kim, Hansang; Shin, Andrew; Gupta, Vijay; Demer, Joseph L.

    2011-01-01

This paper characterized bovine extraocular muscles (EOMs) using creep, which represents long-term stretching induced by a constant force. After preliminary optimization of testing conditions, 20 fresh EOM samples were subjected to four different loading rates of 1.67, 3.33, 8.33, and 16.67%/s, after which creep was observed for 1,500 s. A published quasilinear viscoelastic (QLV) relaxation function was transformed to a creep function that was compared with data. Repeatable creep was observed for each loading rate and was similar among all six anatomical EOMs. The mean creep coefficient after 1,500 s for a wide range of initial loading rates was 1.37 ± 0.03 (standard deviation, SD). The creep function derived from the relaxation-based QLV model agreed with observed creep to within 2.7% following 16.67%/s ramp loading. Measured creep agrees closely with a derived QLV model of EOM relaxation, validating a previous QLV model for characterization of EOM biomechanics. PMID:22131809

  2. Sensitivity of novel silicate and borate-based glass structures on in vitro bioactivity and degradation behaviour.

    PubMed

    Mancuso, Elena; Bretcanu, Oana; Marshall, Martyn; Dalgarno, Kenneth W

    2017-10-15

Three novel glass compositions, identified as NCL2 (SiO2-based), NCL4 (B2O3-based) and NCL7 (SiO2-based), along with apatite-wollastonite (AW) were processed to form sintered dense pellets, and subsequently evaluated for their in vitro bioactive potential, resulting physico-chemical properties and degradation rate. Microstructural analysis showed the carbonated hydroxyapatite (HCA) precipitate morphology following SBF testing to be composition-dependent. AW and the NCL7 formulation exhibited greater HCA precursor formation than the NCL2 and NCL4-derived pellets. Moreover, the NCL4 borate-based samples showed the highest biodegradation rate; with silicate-derived structures displaying the lowest weight loss after SBF immersion. The results of this study suggested that glass composition has significant influence on apatite-forming ability and also degradation rate, indicating the possibility to customise the properties of this class of materials towards the bone repair and regeneration process.

  3. Cis-to-Trans Isomerization of Azobenzene Derivatives Studied with Transition Path Sampling and Quantum Mechanical/Molecular Mechanical Molecular Dynamics.

    PubMed

    Muždalo, Anja; Saalfrank, Peter; Vreede, Jocelyne; Santer, Mark

    2018-04-10

    Azobenzene-based molecular photoswitches are becoming increasingly important for the development of photoresponsive, functional soft-matter material systems. Upon illumination with light, fast interconversion between a more stable trans and a metastable cis configuration can be established resulting in pronounced changes in conformation, dipole moment or hydrophobicity. A rational design of functional photosensitive molecules with embedded azo moieties requires a thorough understanding of isomerization mechanisms and rates, especially the thermally activated relaxation. For small azo derivatives considered in the gas phase or simple solvents, Eyring's classical transition state theory (TST) approach yields useful predictions for trends in activation energies or corresponding half-life times of the cis isomer. However, TST or improved theories cannot easily be applied when the azo moiety is part of a larger molecular complex or embedded into a heterogeneous environment, where a multitude of possible reaction pathways may exist. In these cases, only the sampling of an ensemble of dynamic reactive trajectories (transition path sampling, TPS) with explicit models of the environment may reveal the nature of the processes involved. In the present work we show how a TPS approach can conveniently be implemented for the phenomenon of relaxation-isomerization of azobenzenes starting with the simple examples of pure azobenzene and a push-pull derivative immersed in a polar (DMSO) and apolar (toluene) solvent. The latter are represented explicitly at a molecular mechanical (MM) and the azo moiety at a quantum mechanical (QM) level. We demonstrate for the push-pull azobenzene that path sampling in combination with the chosen QM/MM scheme produces the expected change in isomerization pathway from inversion to rotation in going from a low to a high permittivity (explicit) solvent model. 
We discuss the potential of the simulation procedure presented for comparative calculation of reaction rates and an improved understanding of activated states.

  4. Development of a case-mix funding system for adults with combined vision and hearing loss

    PubMed Central

    2013-01-01

    Background Adults with vision and hearing loss, or dual sensory loss (DSL), present with a wide range of needs and abilities. This creates many challenges when attempting to set the most appropriate and equitable funding levels. Case-mix (CM) funding models represent one method for understanding client characteristics that correlate with resource intensity. Methods A CM model was developed based on a derivation sample (n = 182) and tested with a replication sample (n = 135) of adults aged 18+ with known DSL who were living in the community. All items within the CM model came from a standardized, multidimensional assessment, the interRAI Community Health Assessment and the Deafblind Supplement. The main outcome was a summary of formal and informal service costs which included intervenor and interpreter support, in-home nursing, personal support and rehabilitation services. Informal costs were estimated based on a wage rate of half that for a professional service provider ($10/hour). Decision-tree analysis was used to create groups with homogeneous resource utilization. Results The resulting CM model had 9 terminal nodes. The CM index (CMI) showed a 35-fold range for total costs. In both the derivation and replication sample, 4 groups (out of a total of 18 or 22.2%) had a coefficient of variation value that exceeded the overall level of variation. Explained variance in the derivation sample was 67.7% for total costs versus 28.2% in the replication sample. A strong correlation was observed between the CMI values in the two samples (r = 0.82; p = 0.006). Conclusions The derived CM funding model for adults with DSL differentiates resource intensity across 9 main groups and in both datasets there is evidence that these CM groups appropriately identify clients based on need for formal and informal support. PMID:23587314

  5. The discrimination of geoforensic trace material from close proximity locations by organic profiling using HPLC and plant wax marker analysis by GC.

    PubMed

    McCulloch, G; Dawson, L A; Ross, J M; Morgan, R M

    2018-07-01

There is a need to develop a wider empirical research base to expand the scope for utilising the organic fraction of soil in forensic geoscience, and to demonstrate the capability of the analytical techniques used in forensic geoscience to discriminate samples at close proximity locations. The determination of wax markers from soil samples by GC analysis has been used extensively in court and is known to be effective in discriminating samples from different land use types. A new HPLC method for the analysis of the organic fraction of forensic sediment samples has also been shown recently to add value in conjunction with existing inorganic techniques for the discrimination of samples derived from close proximity locations. This study compares the ability of these two organic techniques to discriminate samples derived from close proximity locations. The GC technique provided good discrimination at this scale and quantification of known compounds, whilst the HPLC technique offered a shorter and simpler sample preparation method and provided very good discrimination between groups of samples of different provenance in most cases. The use of both data sets together gave further improved accuracy rates in some cases, suggesting that a combined organic approach can provide added benefits in certain case scenarios and crime reconstruction contexts.

  6. Effects of algal-derived carbon on sediment methane ...

    EPA Pesticide Factsheets

Nutrient loading is known to have adverse consequences for aquatic ecosystems, particularly in the form of algal blooms that may result. These blooms pose problems for humans and wildlife, including harmful toxin release, aquatic hypoxia and increased costs for water treatment. Another potential disservice resulting from algal blooms is the enhanced production of methane (CH4), a potent greenhouse gas, in aquatic sediments. Laboratory experiments have shown that algal biomass additions to sediment cores increase rates of CH4 production, but it is unclear whether or not this effect occurs at the ecosystem scale. The goal of this research was to explore the link between algal-derived carbon and methane production in the sediment of a eutrophic reservoir located in southwest Ohio, using a sampling design that capitalized on spatial and temporal gradients in autochthonous carbon input to sediments. Specifically, we aimed to determine if the within-reservoir gradient of sediment algal-derived organic matter and sediment CH4 production rates correlate. This was done by retrieving sediment cores from 15 sites within the reservoir along a known gradient of methane emission rates, at two separate time points in 2016: late spring before the sediments had received large amounts of algal input and mid-summer after algal blooms had been prevalent in the reservoir. Potential CH4 production rates, sediment organic matter source, and microbial community composition were characterized.

  7. The distant type Ia supernova rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pain, R.; Fabbro, S.; Sullivan, M.

    2002-05-20

We present a measurement of the rate of distant Type Ia supernovae derived using 4 large subsets of data from the Supernova Cosmology Project. Within this fiducial sample, which surveyed about 12 square degrees, thirty-eight supernovae were detected at redshifts 0.25–0.85. In a spatially flat cosmological model consistent with the results obtained by the Supernova Cosmology Project, we derive a rest-frame Type Ia supernova rate at a mean redshift z ≈ 0.55 of 1.53 (+0.28/−0.25) (+0.32/−0.31) × 10⁻⁴ h³ Mpc⁻³ yr⁻¹ or 0.58 (+0.10/−0.09) (+0.10/−0.09) h² SNu (1 SNu = 1 supernova per century per 10¹⁰ L_B⊙), where the first uncertainty is statistical and the second includes systematic effects. The dependence of the rate on the assumed cosmological parameters is studied and the redshift dependence of the rate per unit comoving volume is contrasted with local estimates in the context of possible cosmic star formation histories and progenitor models.

  8. Development of the permeability/performance reference compound approach for in situ calibration of semipermeable membrane devices

    USGS Publications Warehouse

    Huckins, J.N.; Petty, J.D.; Lebo, J.A.; Almeida, F.V.; Booij, K.; Alvarez, D.A.; Cranor, W.L.; Clark, R.C.; Mogensen, B.B.

    2002-01-01

    Permeability/performance reference compounds (PRCs) are analytically noninterfering organic compounds with moderate to high fugacity from semipermeable membrane devices (SPMDs) that are added to the lipid prior to membrane enclosure. Assuming that isotropic exchange kinetics (IEK) apply and that SPMD-water partition coefficients are known, measurement of PRC dissipation rate constants during SPMD field exposures and laboratory calibration studies permits the calculation of an exposure adjustment factor (EAF). In theory, PRC-derived EAF ratios reflect changes in SPMD sampling rates (relative to laboratory data) due to differences in exposure temperature, membrane biofouling, and flow velocity-turbulence at the membrane surface. Thus, the PRC approach should allow for more accurate estimates of target solute/vapor concentrations in an exposure medium. Under some exposure conditions, the impact of environmental variables on SPMD sampling rates may approach an order of magnitude. The results of this study suggest that most of the effects of temperature, facial velocity-turbulence, and biofouling on the uptake rates of analytes with a wide range of hydrophobicities can be deduced from PRCs with a much narrower range of hydrophobicities. Finally, our findings indicate that the use of PRCs permits prediction of in situ SPMD sampling rates within 2-fold of directly measured values.
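The PRC correction described above reduces to first-order dissipation kinetics: the PRC loss rate measured in the field, relative to the laboratory rate, scales the lab-calibrated sampling rate. The concentrations, exposure time, and laboratory sampling rate below are invented for illustration:

```python
from math import log

def dissipation_k(c0, ct, days):
    """First-order PRC dissipation rate constant (1/day)
    from initial and remaining PRC amounts."""
    return log(c0 / ct) / days

def in_situ_sampling_rate(rs_lab, k_field, k_lab):
    """Adjust a laboratory-calibrated SPMD sampling rate Rs (L/day)
    by the PRC-derived exposure adjustment factor (EAF)."""
    eaf = k_field / k_lab
    return rs_lab * eaf

# Hypothetical 28-day deployment: PRC falls to 40% in the field
# but only to 60% under laboratory calibration conditions
k_f = dissipation_k(1.0, 0.40, 28)
k_l = dissipation_k(1.0, 0.60, 28)
rs = in_situ_sampling_rate(4.0, k_f, k_l)
print(f"EAF = {k_f / k_l:.2f}, in-situ Rs = {rs:.2f} L/day")
```

Faster field dissipation (EAF > 1) here implies temperature, flow, or biofouling conditions that also speed analyte uptake, which is the isotropic-exchange assumption the approach rests on.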

  9. Sampling rare fluctuations of discrete-time Markov chains

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.

  10. Sampling rare fluctuations of discrete-time Markov chains.

    PubMed

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
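For a finite-state chain, the large-deviation machinery referred to in these two records can be sketched with the standard tilted-matrix construction: the scaled cumulant generating function is the log of the largest eigenvalue of the tilted transition matrix. The two-state chain and observable below are illustrative, not taken from the paper:

```python
import numpy as np

# Two-state Markov chain and an observable g(state) accumulated per step
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
g = np.array([0.0, 1.0])   # counts time spent in state 1

def scgf(s):
    """Scaled cumulant generating function theta(s) = log of the largest
    eigenvalue of the tilted matrix P_ij * exp(s * g(j))."""
    tilted = P * np.exp(s * g)[None, :]
    return np.log(np.max(np.linalg.eigvals(tilted).real))

# Sanity checks: theta(0) = 0, and theta'(0) equals the stationary
# mean of g (here pi = (2/3, 1/3), so the mean is 1/3)
h = 1e-5
mean_g = (scgf(h) - scgf(-h)) / (2 * h)
print(scgf(0.0), mean_g)
```

The large-deviation rate function then follows as the Legendre transform I(a) = sup_s [s·a − θ(s)], which is the quantity the sampling method estimates for chains too large to diagonalize.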

  11. Noninvasive quantification of cerebral metabolic rate for glucose in rats using 18F-FDG PET and standard input function

    PubMed Central

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-01-01

Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. Standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIFNS) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF1S). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIFNS-, and EIF1S-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIFNS was highly correlated with those derived from AIF and EIF1S. Preliminary comparison between AIF and EIFNS in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIFNS method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947

  12. Noninvasive quantification of cerebral metabolic rate for glucose in rats using (18)F-FDG PET and standard input function.

    PubMed

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-10-01

Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate input function without blood sampling. We performed (18)F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. Standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIF(NS)) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF(1S)). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIF(NS)-, and EIF(1S)-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIF(NS) was highly correlated with those derived from AIF and EIF(1S). Preliminary comparison between AIF and EIF(NS) in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIF(NS) method might serve as a noninvasive substitute for individual AIF measurement.

  13. Luminescence isochron dating: a new approach using different grain sizes.

    PubMed

    Zhao, H; Li, S H

    2002-01-01

    A new approach to isochron dating is described using different sizes of quartz and K-feldspar grains. The technique can be applied to sites with time-dependent external dose rates. It is assumed that any underestimation of the equivalent dose (De) using K-feldspar is by a factor F, which is independent of grain size (90-350 microm) for a given sample. Calibration of the beta source for different grain sizes is discussed, and then the sample ages are calculated using the differences between quartz and K-feldspar De from grains of similar size. Two aeolian sediment samples from north-eastern China are used to illustrate the application of the new method. It is confirmed that the observed values of De derived using K-feldspar underestimate the expected doses (based on the quartz De) but, nevertheless, these K-feldspar De values correlate linearly with the calculated internal dose rate contribution, supporting the assumption that the underestimation factor F is independent of grain size. The isochron ages are also compared with the results obtained using quartz De and the measured external dose rates.
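    A minimal sketch of the isochron idea, using hypothetical per-grain-size data: if the underestimation factor F is grain-size independent, the K-feldspar De values plot linearly against the calculated internal dose-rate contribution, and the slope of that line has units of time.

```python
import numpy as np

# Hypothetical data for one sample: K-feldspar equivalent dose (Gy) versus
# the calculated internal dose-rate contribution (Gy/ka) for four grain sizes.
internal_dose_rate = np.array([0.2, 0.5, 0.9, 1.4])  # Gy/ka
de_kfeldspar = np.array([52.0, 61.0, 73.0, 88.0])    # Gy

# A straight-line fit tests the grain-size independence of F; the slope has
# units of ka and serves as an isochron age estimate.
slope_ka, intercept_gy = np.polyfit(internal_dose_rate, de_kfeldspar, 1)
```

A markedly non-linear scatter of the points would falsify the constant-F assumption for that sample.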

  14. THE ALFALFA H α SURVEY. I. PROJECT DESCRIPTION AND THE LOCAL STAR FORMATION RATE DENSITY FROM THE FALL SAMPLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sistine, Angela Van; Salzer, John J.; Janowiecki, Steven

    2016-06-10

    The ALFALFA H α survey utilizes a large sample of H i-selected galaxies from the ALFALFA survey to study star formation (SF) in the local universe. ALFALFA H α contains 1555 galaxies with distances between ∼20 and ∼100 Mpc. We have obtained continuum-subtracted narrowband H α images and broadband R images for each galaxy, creating one of the largest homogeneous sets of H α images ever assembled. Our procedures were designed to minimize the uncertainties related to the calculation of the local SF rate density (SFRD). The galaxy sample we constructed is as close to volume-limited as possible, is a robust statistical sample, and spans a wide range of galaxy environments. In this paper, we discuss the properties of our Fall sample of 565 galaxies, our procedure for deriving individual galaxy SF rates, and our method for calculating the local SFRD. We present a preliminary value of log(SFRD [M⊙ yr⁻¹ Mpc⁻³]) = −1.747 ± 0.018 (random) ± 0.05 (systematic) based on the 565 galaxies in our Fall sub-sample. Compared to the weighted average of SFRD values around z ≈ 2, our local value indicates a drop in the global SFRD of a factor of 10.2 over that lookback time.

  15. Using high sampling rate (10/20 Hz) altimeter data for the observation of coastal surface currents: A case study over the northwestern Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Birol, Florence; Delebecque, Caroline

    2014-01-01

    Satellite altimetry, measuring sea surface heights (SSHs), has unique capabilities to provide information about the ocean dynamics. In this paper, the skill of the original full rate (10/20 Hz) measurements, relative to conventional 1-Hz data, is evaluated in the context of coastal studies in the Northwestern Mediterranean Sea. The performance and the question of the measurement noise are quantified through a comparison with different tide gauge sea level time series. With dedicated processing, the number of valid data points within 30 km of land is higher for the 10/20-Hz than for the 1-Hz observations: +4.5% for T/P, +10.3% for Jason-1 and +13% for Jason-2. By filtering the higher sampling rate measurements (using a 30-km cut-off low-pass Lanczos filter), we can obtain the same level of sea level accuracy as we would using the classical 1-Hz altimeter data. The gain in near-shore data results in a better observation of the Liguro-Provençal-Catalan Current. The seasonal evolution of the currents derived from 20-Hz data is globally consistent with patterns derived from the corresponding 1-Hz observations. But the use of higher frequency altimeter measurements allows us to observe the variability of the regional flow closer to the coast (~10-15 km from land).
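    The 30-km cut-off low-pass filtering of high-rate along-track data can be sketched with a Lanczos-windowed sinc kernel. The window half-width, edge handling, and the nominal 0.35-km spacing of 20-Hz samples are assumptions of this sketch; the abstract only specifies the cut-off wavelength.

```python
import numpy as np

def lanczos_lowpass(ssh, spacing_km, cutoff_km=30.0, half_width=None):
    """Low-pass filter an along-track SSH profile with a Lanczos-windowed
    sinc kernel of the given cut-off wavelength."""
    fc = spacing_km / cutoff_km                 # cut-off in cycles per sample
    if half_width is None:
        half_width = int(round(1.0 / fc))       # one cut-off wavelength (assumed)
    k = np.arange(-half_width, half_width + 1)
    # Ideal low-pass response tapered by the Lanczos (sinc) window
    w = 2.0 * fc * np.sinc(2.0 * fc * k) * np.sinc(k / half_width)
    w /= w.sum()                                # unit gain at zero frequency
    return np.convolve(ssh, w, mode="same")
```

At 20 Hz the along-track spacing is roughly 0.35 km, so a 30-km cut-off suppresses wavelengths shorter than about 85 samples while leaving the large-scale SSH signal untouched.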

  16. Isotopic tracking of Hanford 300 area derived uranium in the Columbia River.

    PubMed

    Christensen, John N; Dresel, P Evan; Conrad, Mark E; Patton, Gregory W; DePaolo, Donald J

    2010-12-01

    Our objectives in this study are to quantify the discharge rate of uranium (U) to the Columbia River from the Hanford Site's 300 Area and to follow that U downriver to constrain its fate. Uranium from the Hanford Site has variable isotopic composition due to nuclear industrial processes carried out at the site. This characteristic makes it possible to use high-precision isotopic measurements of U in environmental samples to identify even trace levels of contaminant U, determine its sources, and estimate discharge rates. Our data on river water samples indicate that as much as 3.2 kg/day can enter the Columbia River from the 300 Area, which is only a small fraction of the total load of dissolved natural background U carried by the Columbia River. This very low level of Hanford-derived U can be discerned, despite dilution to <1% of natural background U, 400 km downstream from the Hanford Site. These results indicate that isotopic methods can allow the amounts of U from the 300 Area of the Hanford Site entering the Columbia River to be measured accurately to ascertain whether they are an environmental concern or insignificant relative to natural uranium background in the Columbia River.
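    The ability to discern a trace contaminant from isotope ratios reduces, in its simplest form, to two-endmember mixing. The function and the example ratios below are purely illustrative, not Hanford measurements, and the linear form assumes both endmembers are expressed on the same denominator isotope.

```python
def contaminant_fraction(r_measured, r_natural, r_contaminant):
    """Fraction of total uranium contributed by the contaminant endmember,
    from linear two-endmember mixing of an isotope ratio (a simplification
    that ignores concentration weighting between endmembers)."""
    return (r_measured - r_natural) / (r_contaminant - r_natural)
```

With made-up ratios, a measured value only 1% of the way from the natural toward the contaminant composition is still resolvable provided the ratio measurement is sufficiently precise, which is the basis for tracking <1% dilutions far downstream.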

  17. Kinematics, turbulence, and star formation of z ˜ 1 strongly lensed galaxies seen with MUSE

    NASA Astrophysics Data System (ADS)

    Patrício, V.; Richard, J.; Carton, D.; Contini, T.; Epinat, B.; Brinchmann, J.; Schmidt, K. B.; Krajnović, D.; Bouché, N.; Weilbacher, P. M.; Pelló, R.; Caruana, J.; Maseda, M.; Finley, H.; Bauer, F. E.; Martinez, J.; Mahler, G.; Lagattuta, D.; Clément, B.; Soucail, G.; Wisotzki, L.

    2018-06-01

    We analyse a sample of eight highly magnified galaxies at redshift 0.6 < z < 1.5 observed with MUSE, exploring the resolved properties of these galaxies at sub-kiloparsec scales. Combining multiband HST photometry and MUSE spectra, we derive the stellar mass, global star formation rates (SFRs), extinction and metallicity from multiple nebular lines, concluding that our sample is representative of z ˜ 1 star-forming galaxies. We derive the 2D kinematics of these galaxies from the [O II] emission and model it with a new method that accounts for lensing effects and fits multiple images simultaneously. We use these models to calculate the 2D beam-smearing correction and derive intrinsic velocity dispersion maps. We find them to be fairly homogeneous, with relatively constant velocity dispersions between 15 and 80 km s-1 and Gini coefficients of ≲0.3. We do not find any evidence for higher (or lower) velocity dispersions at the positions of bright star-forming clumps. We derive resolved maps of dust attenuation and attenuation-corrected SFRs from emission lines for two objects in the sample. We use this information to study the relation between resolved SFR and velocity dispersion. We find that these quantities are not correlated, and the high velocity dispersions found for relatively low star-forming densities seem to indicate that, at sub-kiloparsec scales, turbulence in high-z discs is driven mainly by gravitational instability rather than stellar feedback.
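    The Gini coefficient quoted for the velocity dispersion maps is the standard inequality statistic; a minimal implementation for non-negative map values, in its mean-absolute-difference form, is:

```python
import numpy as np

def gini(values):
    """Gini coefficient of a set of non-negative map values: 0 for a
    perfectly homogeneous map, approaching 1 for strong concentration."""
    v = np.sort(np.asarray(values, dtype=float).ravel())
    n = v.size
    i = np.arange(1, n + 1)
    # G = sum_i (2i - n - 1) v_(i) / (n * sum_i v_(i)), v sorted ascending
    return np.sum((2 * i - n - 1) * v) / (n * v.sum())
```

A value of ≲0.3 therefore quantifies the claim that the dispersion maps are fairly homogeneous rather than dominated by a few high-dispersion pixels.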

  18. 10Be Erosion Rates Controlled by Normal Fault Slip Rates and Transient Incision

    NASA Astrophysics Data System (ADS)

    Roda-Boluda, D. C.; D'Arcy, M. K.; Whittaker, A. C.; Allen, P.; Gheorghiu, D. M.; Rodés, Á.

    2016-12-01

    Quantifying erosion rates, and how they compare to rock uplift rates, is fundamental for understanding the evolution of relief and the associated sediment supply from mountains to basins. The trade-off between uplift and erosion is well represented by river incision, which is often accompanied by hillslope steepening and landsliding. However, characterizing the relation between these processes and the impact that they have on sediment delivered to basins remains a major challenge in many tectonically active areas. We use Southern Italy as a natural laboratory to address these questions, and quantify the interplay of tectonics, geomorphic response and sediment export. We present 15 new 10Be catchment-averaged erosion rates, collected from catchments along five active normal faults with excellent slip rate constraints. We find that erosion rates are strongly controlled by fault slip rates and the degree of catchment incision. Our data suggest that overall 70% of the rock uplifted by the faults is being eroded, offering new insights into the topographic balance of uplift and erosion in this area. None of the erosion rates are greater than local fault slip rates, so fault activity is effectively establishing an upper limit on erosion. However, eight 10Be samples from low-relief, unincised areas within the catchments, collected above knickpoints, yield consistent erosion rates of 0.12 mm/yr. In contrast, samples collected below knickpoints and below the incised sectors of the channels have erosion rates of 0.2-0.8 mm/yr. The comparison allows us to quantify the impact that transient incisional response has on erosion rates.
We show that incision is associated with frequent, shallow landsliding, and we find that the volumes of landslides stored in the catchments are highly correlated with 10Be-derived sediment flux estimates, suggesting that landslides are likely to be a major contributor to sediment fluxes; we examine the implications that this may have on 10Be concentrations. Finally, we examine the influence that these coupled landscape responses have on the sediment exported from the catchments, and we find that coarser grain size export is associated with deeper channel incision and greater 10Be-derived sediment fluxes.
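    Converting a 10Be concentration into a catchment-averaged denudation rate uses the standard steady-state approximation N = P/(λ + ρε/Λ), solved for ε. The production rate, rock density, attenuation length and decay constant below are generic literature values, not numbers from this study.

```python
# Steady-state conversion of a cosmogenic 10Be concentration into a
# catchment-averaged denudation rate (parameter values are generic).
def denudation_rate_mm_per_yr(N, P, rho=2.7, Lambda=160.0, lam=4.99e-7):
    """N: 10Be concentration (atoms/g quartz); P: production rate
    (atoms/g/yr); rho: rock density (g/cm^3); Lambda: attenuation length
    (g/cm^2); lam: 10Be decay constant (1/yr)."""
    eps_cm_per_yr = (Lambda / rho) * (P / N - lam)  # from N = P/(lam + rho*eps/Lambda)
    return eps_cm_per_yr * 10.0                     # cm/yr -> mm/yr
```

Higher nuclide concentrations thus map to slower denudation, which is why the unincised, slowly eroding areas above knickpoints carry the largest 10Be inventories.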

  19. Monitoring forest areas from continental to territorial levels using a sample of medium spatial resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Eva, Hugh; Carboni, Silvia; Achard, Frédéric; Stach, Nicolas; Durieux, Laurent; Faure, Jean-François; Mollicone, Danilo

    A global systematic sampling scheme has been developed by the UN FAO and the EC TREES project to estimate rates of deforestation at global or continental levels at intervals of 5 to 10 years. This global scheme can be intensified to produce results at the national level. In this paper, using surrogate observations, we compare the deforestation estimates derived from these two levels of sampling intensities (one, the global, for the Brazilian Amazon the other, national, for French Guiana) to estimates derived from the official inventories. We also report the precisions that are achieved due to sampling errors and, in the case of French Guiana, compare such precision with the official inventory precision. We extract nine sample data sets from the official wall-to-wall deforestation map derived from satellite interpretations produced for the Brazilian Amazon for the year 2002 to 2003. This global sampling scheme estimate gives 2.81 million ha of deforestation (mean from nine simulated replicates) with a standard error of 0.10 million ha. This compares with the full population estimate from the wall-to-wall interpretations of 2.73 million ha deforested, which is within one standard error of our sampling test estimate. The relative difference between the mean estimate from sampling approach and the full population estimate is 3.1%, and the standard error represents 4.0% of the full population estimate. This global sampling is then intensified to a territorial level with a case study over French Guiana to estimate deforestation between the years 1990 and 2006. For the historical reference period, 1990, Landsat-5 Thematic Mapper data were used. A coverage of SPOT-HRV imagery at 20 m × 20 m resolution acquired at the Cayenne receiving station in French Guiana was used for year 2006. 
Our estimates from the intensified global sampling scheme over French Guiana are compared with those produced by the national authority to report on deforestation rates under the Kyoto protocol rules for its overseas department. The latter estimates come from a sample of nearly 17,000 plots analyzed from same spatial imagery acquired between year 1990 and year 2006. This sampling scheme is derived from the traditional forest inventory methods carried out by IFN (Inventaire Forestier National). Our intensified global sampling scheme leads to an estimate of 96,650 ha deforested between 1990 and 2006, which is within the 95% confidence interval of the IFN sampling scheme, which gives an estimate of 91,722 ha, representing a relative difference from the IFN of 5.4%. These results demonstrate that the intensification of the global sampling scheme can provide forest area change estimates close to those achieved by official forest inventories (<6%), with precisions of between 4% and 7%, although we only estimate errors from sampling, not from the use of surrogate data. Such methods could be used by developing countries to demonstrate that they are fulfilling requirements for reducing emissions from deforestation in the framework of an REDD (Reducing Emissions from Deforestation in Developing Countries) mechanism under discussion within the United Nations Framework Convention on Climate Change (UNFCCC). Monitoring systems at national levels in tropical countries can also benefit from pan-tropical and regional observations, to ensure consistency between different national monitoring systems.
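    The mean and standard error quoted for the nine simulated replicates follow directly from the spread among the replicate estimates; because each replicate is itself a full estimate of the deforested area, the sampling standard error is estimated by the standard deviation across replicates. The nine values below are hypothetical stand-ins that reproduce numbers of the same order as those reported.

```python
import numpy as np

# Nine hypothetical replicate estimates of deforested area (million ha),
# standing in for the nine simulated sample data sets.
reps = np.array([2.71, 2.95, 2.88, 2.66, 2.78, 2.93, 2.70, 2.86, 2.82])
mean_mha = reps.mean()
se_mha = reps.std(ddof=1)   # spread among replicates = sampling standard error
```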

  20. Metallographic cooling rates of L-group ordinary chondrites

    NASA Technical Reports Server (NTRS)

    Bennett, Marvin E.; Mcsween, Harry Y., Jr.

    1993-01-01

    Shock metamorphism appears to be a ubiquitous feature in L-group ordinary chondrites. Brecciation and heterogeneous melting obscure much of the early history of this meteorite group and have caused confusion as to whether L chondrites have undergone thermal metamorphism within onion-shell or rubble-pile parent bodies. Employing the most recent shock criteria, we have examined 55 Antarctic and 24 non-Antarctic L chondrites in order to identify those which have been least affected by post-accretional shock. Six low-shock samples (those with shock grade less than S4) of petrographic types L3-L5 were selected from both populations and metallographic cooling rates were obtained following the technique of Willis and Goldstein. All non-Antarctic L6 chondrites inspected were too heavily shocked to be included in this group. However, 4 shocked L6 chondrites were analyzed in order to determine what effects shock may impose on metallographic cooling rates. Metallographic cooling rates were derived by analyzing the cores of taenite grains and then measuring the distance to the nearest grain edge. Taenites were identified using backscatter imaging on a Cameca SX-50 electron microprobe. Using backscatter we were able to locate homogeneous, rust-free, nearly spherical grains. M-shaped profiles taken from grain traverses were also used to help locate the central portions of selected grains. All points which contained phosphorus above detection limits were discarded. Plots of cooling-rate data are summarized and data from the high-shock samples are presented. The lack of coherency of cooling rates for individual samples is indicative of heterogeneous cooling following shock. The data confirms the statement expressed by numerous workers that extreme care must be taken when selecting samples of L chondrites for cooling-rate studies. Data for the 6 non-Antarctic low-shock samples are also presented. The samples display a general trend in cooling rates. 
The lowest metamorphic grade yielded the slowest cooling rates and an increase in grade follows an increase in cooling rate. This is the opposite relationship to that predicted by the onion-shell model.

  1. Determination of formaldehyde by HPLC as the DNPH derivative following high-volume air sampling onto bisulfite-coated cellulose filters

    NASA Astrophysics Data System (ADS)

    de Andrade, Jailson B.; Tanner, Roger L.

    A method is described for the specific collection of formaldehyde as hydroxymethanesulfonate on bisulfite-coated cellulose filters. Following extraction in aqueous acid and removal of unreacted bisulfite, the hydroxymethanesulfonate is decomposed by base, and HCHO is determined by DNPH (2,4-dinitrophenylhydrazine) derivatization and HPLC. Since the collection efficiency for formaldehyde is moderately high even when sampling ambient air at high-volume flow rates, a limit of detection of 0.2 ppbv is achieved with 30 min sampling times. Interference from acetaldehyde co-collected as 1-hydroxyethanesulfonate is <5% using this procedure. The technique shows promise for both short-term airborne sampling and as a means of collecting mg-sized samples of HCHO on an inorganic matrix for carbon isotopic analyses.

  2. Early and long-term mantle processing rates derived from xenon isotopes

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, S.; Parai, R.; Tucker, J.; Middleton, J. L.; Langmuir, C. H.

    2015-12-01

    Noble gases, particularly xenon (Xe), in mantle-derived basalts provide a rich portrait of mantle degassing and surface-interior volatile exchange. The combination of extinct and extant radioactive species in the I-Pu-U-Xe systems sheds light on the degassing history of the early Earth throughout accretion, as well as the long-term degassing of the Earth's interior in association with plate tectonics. The ubiquitous presence of shallow-level air contamination, however, frequently obscures the mantle Xe signal. In a majority of the samples, shallow air contamination dominates the Xe budget. For example, in the gas-rich popping rock 2ΠD43, 129Xe/130Xe ratios reach 7.7±0.23 in individual step-crushes, but the bulk composition of the sample is close to air (129Xe/130Xe of 6.7). Thus, the extent of variability in mantle source Xe composition is not well constrained. Here, we present new MORB Xe data and explore the constraints they place on mantle processing rates. Ten step-crushes were obtained on a depleted popping glass that was sealed in ultrapure N2 after dredge retrieval from between the Kane and Atlantis fracture zones of the Mid-Atlantic Ridge in May 2012. Nine steps yielded 129Xe/130Xe of 7.50-7.67 and one yielded 7.3. The bulk 129Xe/130Xe of the sample is 7.6, nearly identical to the estimated mantle source value of 7.7 for the sample. Hence, the sample is virtually free of shallow-level air contamination. Because sealing the sample in N2 upon dredge retrieval largely eliminated air contamination, for many samples contamination must be added after sample retrieval from the ocean bottom. Our new high-precision Xe isotopic measurements in upper mantle-derived samples provide improved constraints on the Xe isotopic composition of the mantle source. We developed a forward model of mantle volatile evolution to identify solutions that satisfy our Xe isotopic data.
We find that accretion timescales of ~10±5 Myr are consistent with I-Pu-Xe constraints, and the last giant impact occurred 45-70 Myr after the start of the solar system. After the giant impact stage, the Pu-U-Xe system indicates that degassing of the planet via solid-state mantle convection and plate tectonics continued to liberate volatiles to the atmosphere and has led to between ~5-8 mantle turnovers over the age of the Earth.

  3. Relationship between optical and X-ray properties of O-type stars surveyed with the Einstein Observatory

    NASA Technical Reports Server (NTRS)

    Sciortino, S.; Vaiana, G. S.; Harnden, F. R., Jr.; Ramella, M.; Morossi, C.

    1990-01-01

    An X-ray luminosity function is derived for a representative volume-limited sample of O-type stars selected from the catalog of Galactic O stars surveyed with the Einstein Observatory. It was found that, for the stars of this sample, which is ten times larger than any previously analyzed, the level of X-ray emission is strongly correlated with bolometric luminosity, confirming previous findings of an Lx-L(bol) relationship (e.g., Harnden et al., 1979; Pallavicini et al., 1981). Correlations of Lx with the mass loss rate, the wind terminal velocity and the rotation rate were weak. However, there was a strong correlation with the wind momentum flux as well as with the wind kinetic energy flux.
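    A relationship of this kind is usually characterized by a log-log regression. The luminosities below are invented numbers with scatter around the canonical O-star level Lx/Lbol ~ 1e-7; they only illustrate the form of the test, not the survey's data.

```python
import numpy as np

# Hypothetical log-luminosities (erg/s) illustrating an Lx-Lbol regression.
log_lbol = np.array([38.5, 39.0, 39.5, 40.0])
log_lx = log_lbol - 7.0 + np.array([0.10, -0.05, 0.02, -0.07])  # invented scatter

# A fitted slope near 1 means Lx roughly proportional to Lbol.
slope, intercept = np.polyfit(log_lbol, log_lx, 1)
```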

  4. Analytical and Clinical Validation of a Digital Sequencing Panel for Quantitative, Highly Accurate Evaluation of Cell-Free Circulating Tumor DNA

    PubMed Central

    Zill, Oliver A.; Sebisanovic, Dragan; Lopez, Rene; Blau, Sibel; Collisson, Eric A.; Divers, Stephen G.; Hoon, Dave S. B.; Kopetz, E. Scott; Lee, Jeeyun; Nikolinakos, Petros G.; Baca, Arthur M.; Kermani, Bahram G.; Eltoukhy, Helmy; Talasaz, AmirAli

    2015-01-01

    Next-generation sequencing of cell-free circulating solid tumor DNA addresses two challenges in contemporary cancer care. First, this method of massively parallel and deep sequencing enables assessment of a comprehensive panel of genomic targets from a single sample, and second, it obviates the need for repeat invasive tissue biopsies. Digital Sequencing™ is a novel method for high-quality sequencing of circulating tumor DNA simultaneously across a comprehensive panel of over 50 cancer-related genes with a simple blood test. Here we report the analytic and clinical validation of the gene panel. Analytic sensitivity down to 0.1% mutant allele fraction is demonstrated via serial dilution studies of known samples. Near-perfect analytic specificity (>99.9999%) enables complete coverage of many genes without the false positives typically seen with traditional sequencing assays at mutant allele fractions below 5%. We compared digital sequencing of plasma-derived cell-free DNA to tissue-based sequencing on 165 consecutive matched samples from five outside centers in patients with stage III-IV solid tumor cancers. Clinical sensitivity of plasma-derived NGS was 85.0%, comparable to 80.7% sensitivity for tissue. The assay success rate on 1,000 consecutive samples in clinical practice was 99.8%. Digital sequencing of plasma-derived DNA is indicated in advanced cancer patients to prevent repeated invasive biopsies when the initial biopsy is inadequate, unobtainable for genomic testing, or uninformative, or when the patient’s cancer has progressed despite treatment. Its clinical utility is derived from reduction in the costs, complications and delays associated with invasive tissue biopsies for genomic testing. PMID:26474073
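    As a back-of-the-envelope check on why a 0.1% mutant allele fraction demands deep sequencing, a toy binomial model (ignoring sequencing error and sampling bias, and unrelated to the validated assay's actual statistics) gives the chance of drawing at least one mutant-supporting read at a given depth:

```python
def p_any_mutant_read(maf, depth):
    """Probability of observing at least one mutant-supporting read at
    mutant allele fraction `maf` and read depth `depth`, under a toy
    binomial sampling model with no error processes."""
    return 1.0 - (1.0 - maf) ** depth
```

At 100x depth the chance of even sampling a 0.1% variant is under 10%, whereas at several-thousand-fold depth it approaches certainty; suppressing the resulting error floor is what the near-perfect specificity claim addresses.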

  5. Activation energy of tantalum-tungsten oxide thermite reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cervantes, Octavio G.; Munir, Zuhair A.; Chemical Engineering and Materials Science, University of California, Davis, CA

    2011-01-15

    The activation energy of a sol-gel (SG) derived tantalum-tungsten oxide thermite composite was determined using the Kissinger isoconversion method. The SG derived powder was consolidated using the high-pressure spark plasma sintering (HPSPS) technique at 300 and 400 °C. The ignition temperatures were investigated under high heating rates (500-2000 °C min⁻¹). Such heating rates were required in order to ignite the thermite composite. Samples consolidated at 300 °C exhibit an abrupt change in temperature response prior to the main ignition temperature. This change in temperature response is attributed to the crystallization of the amorphous WO3 in the SG derived Ta-WO3 thermite composite and not to a pre-ignition reaction between the constituents. Ignition temperatures for the Ta-WO3 thermite ranged from approximately 465 to 670 °C. The activation energies of the SG derived Ta-WO3 thermite composite consolidated at 300 and 400 °C were determined to be 38 ± 2 kJ mol⁻¹ and 57 ± 2 kJ mol⁻¹, respectively.
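    The Kissinger isoconversion method extracts Ea from the slope of ln(β/Tp²) versus 1/Tp across heating rates β and peak (here, ignition) temperatures Tp. The sketch below recovers a known activation energy from synthetic peak temperatures; the pre-exponential constant is arbitrary and the temperatures are merely chosen to span the reported ignition range.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def kissinger_activation_energy(beta, Tp):
    """Kissinger plot: fit ln(beta/Tp^2) against 1/Tp; slope = -Ea/R.
    beta in K/s, Tp in K; returns Ea in J/mol."""
    slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
    return -slope * R

# Synthetic check: heating rates generated from Ea = 38 kJ/mol with an
# arbitrary pre-exponential constant, then recovered by the fit.
Ea = 38e3
Tp = np.array([750.0, 800.0, 850.0, 900.0])        # K
beta = Tp**2 * np.exp(-5.07 - Ea / (R * Tp))       # K/s
```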

  6. The Porosity of the neutral ISM in 20 THINGS Galaxies

    NASA Astrophysics Data System (ADS)

    Bagetakos, I.; Brinks, E.; Walter, F.; de Blok, W. J. G.; Usero, A.; Leroy, A. K.; Rich, J. W.; Kennicutt, R. C.

    2011-11-01

    We present an analysis of the properties of H i holes detected in 20 galaxies that are part of "The H i Nearby Galaxy Survey". We detected more than 1000 holes in total in the sampled galaxies. The holes are found throughout the disks of the galaxies, out to the edge of the H i disk. We find that shear limits the age of holes in spirals. Shear is less important in dwarf galaxies, which explains why H i holes in dwarfs are rounder, on average, than in spirals. Shear is particularly strong in the inner part of spiral galaxies, limiting the lifespan of holes there and explaining why we find that holes outside R25 are larger and older. We proceed to derive the surface and volume porosity and find that this correlates with the type of the host galaxy: later Hubble types tend to be more porous. The size distribution of the holes in our sample follows a power law with a slope of a_ν ~ -2.9. Assuming that the holes are the result of massive star formation, we derive values for the supernova rate (SNR) and star formation rate (SFR), which scale with the SFR derived from other tracers. If we extrapolate the observed number of holes to include those that fall below our resolution limit, down to holes created by a single supernova, we find that our results are compatible with the hypothesis that H i holes result from star formation.

  7. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.

  8. Calculating second derivatives of population growth rates for ecology and evolution

    PubMed Central

    Shyu, Esther; Caswell, Hal

    2014-01-01

    1. Second derivatives of the population growth rate measure the curvature of its response to demographic, physiological or environmental parameters. The second derivatives quantify the response of sensitivity results to perturbations, provide a classification of types of selection and provide one way to calculate sensitivities of the stochastic growth rate. 2. Using matrix calculus, we derive the second derivatives of three population growth rate measures: the discrete-time growth rate λ, the continuous-time growth rate r = log λ and the net reproductive rate R0, which measures per-generation growth. 3. We present a suite of formulae for the second derivatives of each growth rate and show how to compute these derivatives with respect to projection matrix entries and to lower-level parameters affecting those matrix entries. 4. We also illustrate several ecological and evolutionary applications for these second derivative calculations with a case study for the tropical herb Calathea ovandensis. PMID:25793101
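    The paper derives closed-form matrix-calculus expressions; as a numerical cross-check, a second derivative of λ (the dominant eigenvalue of the projection matrix) with respect to two matrix entries can be approximated by central differences. The 2×2 matrix below is hypothetical, chosen only because its λ is known in closed form.

```python
import numpy as np

def growth_rate(A):
    """Discrete-time growth rate lambda: dominant eigenvalue of the
    projection matrix A (largest real part)."""
    return max(np.linalg.eigvals(A).real)

def d2_lambda(A, i, j, k, l, h=1e-4):
    """Central-difference estimate of d^2(lambda) / (d a_ij d a_kl)."""
    def lam(x, y):
        B = np.array(A, dtype=float)
        B[i, j] += x
        B[k, l] += y
        return growth_rate(B)
    return (lam(h, h) - lam(h, -h) - lam(-h, h) + lam(-h, -h)) / (4 * h * h)
```

For A = [[1, 2], [3, 1]], λ = 1 + sqrt(6), and the exact cross derivative with respect to the off-diagonal entries is 1/(4·sqrt(6)), which the difference quotient reproduces to several digits.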

  9. Type Ia Supernova Distances at Redshift >1.5 from the Hubble Space Telescope Multi-cycle Treasury Programs: The Early Expansion Rate

    NASA Astrophysics Data System (ADS)

    Riess, Adam G.; Rodney, Steven A.; Scolnic, Daniel M.; Shafer, Daniel L.; Strolger, Louis-Gregory; Ferguson, Henry C.; Postman, Marc; Graur, Or; Maoz, Dan; Jha, Saurabh W.; Mobasher, Bahram; Casertano, Stefano; Hayden, Brian; Molino, Alberto; Hjorth, Jens; Garnavich, Peter M.; Jones, David O.; Kirshner, Robert P.; Koekemoer, Anton M.; Grogin, Norman A.; Brammer, Gabriel; Hemmati, Shoubaneh; Dickinson, Mark; Challis, Peter M.; Wolff, Schuyler; Clubb, Kelsey I.; Filippenko, Alexei V.; Nayyeri, Hooshang; U, Vivian; Koo, David C.; Faber, Sandra M.; Kocevski, Dale; Bradley, Larry; Coe, Dan

    2018-02-01

    We present an analysis of 15 Type Ia supernovae (SNe Ia) at redshift z > 1 (9 at 1.5 < z < 2.3) recently discovered in the CANDELS and CLASH Multi-Cycle Treasury programs using WFC3 on the Hubble Space Telescope. We combine these SNe Ia with a new compilation of ∼1050 SNe Ia, jointly calibrated and corrected for simulated survey biases to produce accurate distance measurements. We present unbiased constraints on the expansion rate at six redshifts in the range 0.07 < z < 1.5 based only on this combined SN Ia sample. The added leverage of our new sample at z > 1.5 leads to a factor of ∼3 improvement in the determination of the expansion rate at z = 1.5, reducing its uncertainty to ∼20%, a measurement of H(z = 1.5)/H0 = 2.69 (+0.86, −0.52). We then demonstrate that these six derived expansion rate measurements alone provide a nearly identical characterization of dark energy as the full SN sample, making them an efficient compression of the SN Ia data. The new sample of SNe Ia at z > 1.5 usefully distinguishes between alternative cosmological models and unmodeled evolution of the SN Ia distance indicators, placing empirical limits on the latter. Finally, employing a realistic simulation of a potential Wide-Field Infrared Survey Telescope SN survey observing strategy, we forecast optimistic future constraints on the expansion rate from SNe Ia.

  10. Carbon-13 Isotopic Abundance and Concentration of Atmospheric Methane for Background Air in the Southern and Northern Hemispheres from 1978 to 1989 (NDP-049)

    DOE Data Explorer

    Stevens, C. M. [Chemical Technology Division, Argonne National Laboratory, Argonne, Illinois (USA)

    2012-01-01

    This data package presents atmospheric CH4 concentration and 13C isotopic abundance data derived from air samples collected over the period 1978-1989 at globally distributed clean-air sites. The data set comprises 201 records, 166 from the Northern Hemisphere and 35 from the Southern Hemisphere. The air samples were collected mostly in rural or marine locations remote from large sources of CH4 and are considered representative of tropospheric background conditions. The air samples were processed by isolation of CH4 from air and conversion to CO2 for isotopic analysis by isotope ratio mass spectrometry. These data represent one of the earliest records of 13C isotopic measurements for atmospheric methane and have been used to refine estimates of CH4 emissions, calculate annual growth rates of emissions from changing sources, and provide evidence for changes in the rate of atmospheric removal of CH4. The data records consist of sample collection date; number of samples combined for analysis; sampling location; analysis date; CH4 concentration; 13C isotopic abundance; and flag codes to indicate outliers, repeated analyses, and other information.

  11. Changes in denudation rates and erosion processes in the transition from a low-relief, arid orogen interior to a high-relief, humid mountain-front setting, Toro Basin, southern Central Andes

    NASA Astrophysics Data System (ADS)

    Tofelde, S.; Düsing, W.; Schildgen, T. F.; Wittmann, H.; Alonso, R. N.; Strecker, M. R.

    2017-12-01

    In tectonically active mountain belts, positive correlations between denudation rates and hillslope angles are commonly observed, supporting the notion that landscape morphology may reflect tectonic forcing. However, this relationship generally breaks down at 30°, when hillslopes reach threshold angles. Beyond this threshold, faster denudation may occur through an increased contribution from mass-wasting processes. We test this idea in the 4000 km2 Toro Basin, a fault-bounded basin in the Eastern Cordillera of the southern Central Andes. This N-S oriented basin is located between low-relief, arid conditions in the orogen interior (N) and a high-relief, humid setting at its fluvial outlet (S). We measured in-situ produced 10Be concentrations in fluvial sediments, which can be converted into basin-mean denudation rates, assuming a spatially uniform contribution of sediment from the catchment. However, in landslide-influenced areas, this assumption is often violated. Previous studies have suggested that clast-size material is mainly contributed by mass-wasting processes, whereas sand is derived from a broad range of erosional processes. Hence, a combination of clast and sand samples can reveal information about the basin-mean denudation rate as well as the contribution of mass-wasting processes. We sampled 13 pebble (1-3 cm) and sand (250-500 µm) pairs across the basin. The sand-derived denudation rates increase from N to S, ranging from 0.010 mm/yr to 1.337 mm/yr, and reveal a non-linear positive correlation with median basin slope. The clast/sand ratios also increase from N to S, indicating amplified mass-wasting processes with increasing slopes. To test if our ratios represent a real shift in erosional processes, we mapped different erosional processes in the study area (e.g., deep-seated landslides, scree erosion, diffusion). We assume that today's distribution of processes has not changed over the integration time of the 10Be-derived denudation rates. 
This detailed erosion inventory indicates a shift in the dominant erosional processes with increasing clast/sand ratios and thus with increasing slopes. We provide empirical data supporting the hypothesis that higher denudation rates can be achieved by an increased contribution of mass-wasting processes after threshold slopes have been reached.
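    The conversion from a measured 10Be concentration to a basin-mean denudation rate follows a standard steady-state relation; a simplified sketch with assumed constants (spallation only, no muon production, illustrative production rate, not the authors' code):

```python
ATTENUATION = 160.0    # spallation attenuation length, g/cm^2 (assumed)
DENSITY = 2.7          # bedrock density, g/cm^3 (assumed)
LAMBDA_10BE = 4.99e-7  # 10Be radioactive decay constant, 1/yr

def denudation_mm_per_yr(concentration, production_rate):
    """Steady-state denudation rate from a 10Be concentration (atoms/g)
    and a local surface production rate (atoms/g/yr); spallation only."""
    rate_cm_yr = (ATTENUATION / DENSITY) * (production_rate / concentration - LAMBDA_10BE)
    return rate_cm_yr * 10.0  # cm/yr -> mm/yr

rate = denudation_mm_per_yr(concentration=1.0e5, production_rate=5.0)
```

    With these illustrative numbers the rate comes out near 0.03 mm/yr, inside the 0.010-1.337 mm/yr span reported above.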

  12. Star-formation rate in compact star-forming galaxies

    NASA Astrophysics Data System (ADS)

    Izotova, I. Y.; Izotov, Y. I.

    2018-03-01

    We use the data for the Hβ emission-line, far-ultraviolet (FUV) and mid-infrared 22 μm continuum luminosities to estimate star formation rates <SFR> averaged over the galaxy lifetime for a sample of about 14000 bursting compact star-forming galaxies (CSFGs) selected from the Data Release 12 (DR12) of the Sloan Digital Sky Survey (SDSS). The average coefficient linking <SFR> and the star formation rate SFR0 derived from the Hβ luminosity at zero starburst age is found to be 0.04. We compare <SFR> values with some commonly used SFRs which are derived adopting continuous star formation during a period of ~100 Myr, and find that the latter are 2-3 times higher. It is shown that the relations between SFRs derived using a geometric mean of two star-formation indicators in the UV and IR ranges and reduced to zero starburst age have considerably lower dispersion compared to those with single star-formation indicators. We suggest that our relations for <SFR> determination are more appropriate for CSFGs because they take into account the proper temporal evolution of their luminosities. On the other hand, we show that commonly used SFR relations can be applied for approximate estimation, within a factor of ~2, of the <SFR> averaged over the lifetime of the bursting compact galaxy.
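    The "geometric mean of two star-formation indicators" mentioned above is straightforward to express; a minimal sketch with made-up UV and IR values:

```python
import math

def combined_sfr(sfr_uv, sfr_ir):
    """Geometric mean of UV- and IR-based star formation rates (Msun/yr)."""
    return math.sqrt(sfr_uv * sfr_ir)

sfr = combined_sfr(2.0, 4.5)  # 3.0 Msun/yr for these illustrative inputs
```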

  13. Pharmacotherapy for organic brain syndrome in late life. Evaluation of an ergot derivative vs placebo.

    PubMed

    Gaitz, C M; Varner, R V; Overall, J E

    1977-07-01

    Evaluation of treatment modalities, including pharmacotherapy, for organic brain syndrome (OBS) has been difficult because of sampling and methodological problems, and comparisons of research studies are all but impossible. In this study, an ergot derivative, a combination of dihydroergocornine mesylate, dihydroergocristine mesylate, and dihydroergokryptine mesylate (Hydergine) was compared with placebo, using a double-blind technique in a sample of nursing home residents with evidence of OBS. An 18-category symptom rating scale was used for periodic assessment over a six-month interval. Comparisons of the two groups of subjects disclosed that the Hydergine-treated group showed statistically significantly more improvement in most of the variables measured, especially during the last three months of treatment. Furthermore, sophisticated analysis revealed that positive changes in cognitive function cannot be accounted for as a mere reflection, or "halo" effect, associated with improved mood and general sense of well-being.

  14. Inhalation and Ingestion Intakes with Associated Dose Estimates for Level II and Level III Personnel Using Capstone Study Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szrom, Fran; Falo, Gerald A.; Lodde, Gordon M.

    2009-03-01

    Depleted uranium (DU) intake rates and subsequent dose rates were estimated for personnel entering armored combat vehicles perforated with DU penetrators (level II and level III personnel) using data generated during the Capstone Depleted Uranium (DU) Aerosol Study. Inhalation intake rates and associated dose rates were estimated from cascade impactors worn by sample recovery personnel and from cascade impactors that served as area monitors. Ingestion intake rates and associated dose rates were estimated from cotton gloves worn by sample recovery personnel and from wipe test samples from the interior of vehicles perforated with large caliber DU munitions. The mean DU inhalation intake rate for level II personnel ranged from 0.447 mg h-1 based on breathing zone monitor data (in and around a perforated vehicle) to 14.5 mg h-1 based on area monitor data (in a perforated vehicle). The mean DU ingestion intake rate for level II ranged from 4.8 mg h-1 to 38.9 mg h-1 based on the wipe test data, including surface-to-glove transfer factors derived from the Capstone data. Based on glove contamination data, the mean DU ingestion intake rates for level II and level III personnel were 10.6 mg h-1 and 1.78 mg h-1, respectively. Effective dose rates and peak kidney uranium concentration rates were calculated based on the intake rates. The peak kidney uranium concentration rate cannot be multiplied by the total exposure duration when multiple intakes occur because uranium will clear from the kidney between the exposures.
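    The closing caveat, that peak kidney concentration cannot simply be scaled by total exposure time, can be illustrated with a toy exponential-clearance model (the half-life below is an arbitrary assumption for illustration, not a Capstone value):

```python
import math

def kidney_burden(intakes, t_days, half_life_days=15.0):
    """Toy model: each (time, amount) intake clears exponentially, so two
    spaced intakes peak below the sum of two simultaneous ones."""
    k = math.log(2.0) / half_life_days
    return sum(a * math.exp(-k * (t_days - t)) for t, a in intakes if t <= t_days)

single = kidney_burden([(0.0, 1.0)], 0.0)                # peak of one intake: 1.0
spaced = kidney_burden([(0.0, 1.0), (30.0, 1.0)], 30.0)  # 1.25, not 2.0
```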

  15. The JCMT Nearby Galaxies Legacy Survey - VII. Hα imaging and massive star formation properties

    NASA Astrophysics Data System (ADS)

    Sánchez-Gallego, J. R.; Knapen, J. H.; Wilson, C. D.; Barmby, P.; Azimlu, M.; Courteau, S.

    2012-06-01

    We present Hα fluxes, star formation rates (SFRs) and equivalent widths (EWs) for a sample of 156 nearby galaxies observed in the 12CO J= 3-2 line as part of the James Clerk Maxwell Telescope Nearby Galaxies Legacy Survey. These are derived from images and values in the literature and from new Hα images for 72 galaxies which we publish here. We describe the sample, observations and procedures to extract the Hα fluxes and related quantities. We discuss the SFR properties of our sample and confirm the well-known correlation with galaxy luminosity, albeit with high dispersion. Our SFRs range from 0.1 to 11 M⊙ yr-1 with a median SFR value for the complete sample of 0.2 M⊙ yr-1. This median value is somewhat lower than similar published measurements, which we attribute, in part, to our sample being H I selected and, thus, not biased towards high SFRs as has frequently been the case in previous studies. Additionally, we calculate internal absorptions for the Hα line, A(Hα), which are lower than many of those used in previous studies. Our derived EWs, which range from 1 to 880 Å with a median value of 27 Å, show little dependence on luminosity but rise by a factor of 5 from early- to late-type galaxies. This paper is the first in a series aimed at comparing SFRs obtained from Hα imaging of galaxies with information derived from other tracers of star formation and atomic and molecular gas.
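    Hα-based SFRs of this kind are usually computed from an extinction-corrected Hα luminosity via a linear calibration; a sketch using the widely used Kennicutt (1998) coefficient (the A(Hα) value here is illustrative, not one of the paper's derived absorptions):

```python
def sfr_from_halpha(luminosity_erg_s, a_halpha_mag=1.1):
    """SFR (Msun/yr) from an Halpha luminosity (erg/s): correct for internal
    extinction A(Halpha) in magnitudes, then apply the Kennicutt (1998)
    calibration SFR = 7.9e-42 * L(Halpha)."""
    corrected = luminosity_erg_s * 10.0 ** (0.4 * a_halpha_mag)
    return 7.9e-42 * corrected

sfr = sfr_from_halpha(1.0e41)  # ~2.2 Msun/yr for these illustrative inputs
```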

  16. Rasch fit statistics and sample size considerations for polytomous data.

    PubMed

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-05-29

    Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire - 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges.
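    The resampling design described, ten draws with replacement at each of eight sample sizes, can be sketched generically (sizes shown are a subset; names are illustrative):

```python
import random

def resample_design(data, sizes, reps=10, seed=0):
    """Draw `reps` samples with replacement from `data` at each size,
    mirroring the study's repeated-sampling design."""
    rng = random.Random(seed)
    return {n: [[rng.choice(data) for _ in range(n)] for _ in range(reps)]
            for n in sizes}

samples = resample_design(list(range(4072)), sizes=[25, 100, 3200])
```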

  17. Rasch fit statistics and sample size considerations for polytomous data

    PubMed Central

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-01-01

    Background Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Methods Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire – 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. Results The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. Conclusion It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges. PMID:18510722

  18. Validation of Improved Broadband Shortwave and Longwave Fluxes Derived From GOES

    NASA Technical Reports Server (NTRS)

    Khaiyer, Mandana M.; Nordeen, Michele L.; Palikonda, Rabindra; Yi, Yuhong; Minnis, Patrick; Doelling, David R.

    2009-01-01

    Broadband (BB) shortwave (SW) and longwave (LW) fluxes at TOA (Top of Atmosphere) are crucial parameters in the study of climate and can be monitored over large portions of the Earth's surface using satellites. The VISST (Visible Infrared Solar Split-Window Technique) satellite retrieval algorithm facilitates derivation of these parameters from the Geostationary Operational Environmental Satellites (GOES). However, only narrowband (NB) fluxes are available from GOES, so this derivation requires use of narrowband-to-broadband (NB-BB) conversion coefficients. The accuracy of these coefficients affects the validity of the derived broadband (BB) fluxes. Most recently, NB-BB fits were re-derived using the NB fluxes from VISST/GOES data with BB fluxes observed by the CERES (Clouds and the Earth's Radiant Energy System) instrument aboard Terra, a sun-synchronous polar-orbiting satellite that crosses the equator at 10:30 LT. Subsequent comparison with ARM's (Atmospheric Radiation Measurement) BBHRP (Broadband Heating Rate Profile) BB fluxes revealed that while the derived broadband fluxes agreed well with CERES near the Terra overpass times, the accuracy of both LW and SW fluxes decreased farther away from the overpass times. Terra's orbit hampers the ability of the NB-BB fits to capture diurnal variability. To account for this in the LW, seasonal NB-BB fits are derived separately for day and night. Information from hourly SW BB fluxes from the Meteosat-8 Geostationary Earth Radiation Budget (GERB) instrument is employed to include samples over the complete solar zenith angle (SZA) range sampled by Terra. The BB fluxes derived from these improved NB-BB fits are compared to BB fluxes computed with a radiative transfer model.
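    A narrowband-to-broadband conversion of the kind described is, at its simplest, a regression of coincident NB and BB fluxes; a bare ordinary-least-squares sketch (real NB-BB fits also carry SZA and seasonal terms, omitted here):

```python
def fit_nb_to_bb(nb, bb):
    """Least-squares slope and intercept mapping narrowband fluxes to
    broadband fluxes; a one-variable stand-in for the seasonal NB-BB fits."""
    n = len(nb)
    mean_x = sum(nb) / n
    mean_y = sum(bb) / n
    sxx = sum((x - mean_x) ** 2 for x in nb)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(nb, bb))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Toy coincident fluxes where bb = 2*nb + 5 exactly:
slope, intercept = fit_nb_to_bb([10.0, 20.0, 30.0], [25.0, 45.0, 65.0])
```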

  19. [Identification of Dendrobium varieties by Fourier transform infrared spectroscopy combined with spectral retrieval].

    PubMed

    Liu, Fei; Wang, Yuan-zhong; Deng, Xing-yan; Jin, Hang; Yang, Chun-yan

    2014-06-01

    The infrared spectra of stems of 165 trees of 23 Dendrobium varieties were obtained by Fourier transform infrared spectroscopy. The spectra of all the samples were similar, and the main component of Dendrobium stems is cellulose. Using the spectral software Omnic 8.0, three spectral databases were constructed: Lib01 contains the average spectra of the first four trees of every variety, while Lib02 and Lib03 were constructed from the first-derivative and second-derivative forms of those average spectra, respectively. The correlation search, the square difference retrieval, and the square differential difference retrieval were performed against Lib01 in the specified range of 1 800-500 cm(-1), yielding correct rates of 92.7%, 74.5% and 92.7%, respectively. The square differential difference retrieval of the first-derivative and second-derivative spectra, carried out with Lib02 and Lib03 over the same range, gave correct rates of 93.9% for the former and 90.3% for the latter. The results show that the square differential difference algorithm applied to first-derivative spectra is the most suitable for discerning Dendrobium varieties, and that FTIR combined with spectral retrieval can identify different varieties of Dendrobium; the correlation retrieval, the square difference retrieval, and the first- and second-derivative spectral retrievals in the specified spectral range are effective and simple ways of distinguishing varieties of Dendrobium.
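    The square-difference retrieval used above amounts to nearest-neighbour matching under a summed squared-difference metric; a minimal sketch with toy spectra (not Omnic's actual scoring):

```python
def squared_difference(s1, s2):
    """Summed squared difference between two equally sampled spectra."""
    return sum((a - b) ** 2 for a, b in zip(s1, s2))

def best_match(query, library):
    """Return the library entry whose spectrum minimizes the metric."""
    return min(library, key=lambda name: squared_difference(query, library[name]))

library = {"variety_A": [1.0, 2.0, 3.0], "variety_B": [3.0, 2.0, 1.0]}
match = best_match([1.0, 2.0, 2.5], library)  # closest to "variety_A"
```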

  20. Basic characteristics of plasma rich in growth factors (PRGF): blood cell components and biological effects.

    PubMed

    Nishiyama, Kazuhiko; Okudera, Toshimitsu; Watanabe, Taisuke; Isobe, Kazushige; Suzuki, Masashi; Masuki, Hideo; Okudera, Hajime; Uematsu, Kohya; Nakata, Koh; Kawase, Tomoyuki

    2016-11-01

    Platelet-rich plasma (PRP) is widely used in regenerative medicine because of its high concentrations of various growth factors and platelets. However, the distribution of blood cell components has not been investigated in either PRP or other PRP derivatives. In this study, we focused on plasma rich in growth factors (PRGF), a PRP derivative, and analyzed the distributions of platelets and white blood cells (WBCs). Peripheral blood samples were collected from healthy volunteers ( N  = 14) and centrifuged to prepare PRGF and PRP. Blood cells were counted using an automated hematology analyzer. The effects of PRP and PRGF preparations on cell proliferation were determined using human periosteal cells. In the PRGF preparations, both red blood cells and WBCs were almost completely eliminated, and platelets were concentrated by 2.84-fold, whereas in the PRP preparations, both platelets and WBCs were similarly concentrated by 8.79- and 5.51-fold, respectively. Platelet counts in the PRGF preparations were positively correlated with platelet counts in the whole blood samples, while the platelet concentration rate was negatively correlated with red blood cell counts in the whole blood samples. In contrast, platelet counts and concentration rates in the PRP preparations were significantly influenced by WBC counts in whole blood samples. The PRP preparations, but not the PRGF preparations, significantly suppressed cell growth at higher doses in vitro. Therefore, these results suggest that PRGF preparations can clearly be distinguished from PRP preparations by both inclusion of WBCs and dose-dependent stimulation of periosteal cell proliferation in vitro.

  1. Basic characteristics of plasma rich in growth factors (PRGF): blood cell components and biological effects

    PubMed Central

    Nishiyama, Kazuhiko; Okudera, Toshimitsu; Watanabe, Taisuke; Isobe, Kazushige; Suzuki, Masashi; Masuki, Hideo; Okudera, Hajime; Uematsu, Kohya; Nakata, Koh

    2016-01-01

    Abstract Platelet‐rich plasma (PRP) is widely used in regenerative medicine because of its high concentrations of various growth factors and platelets. However, the distribution of blood cell components has not been investigated in either PRP or other PRP derivatives. In this study, we focused on plasma rich in growth factors (PRGF), a PRP derivative, and analyzed the distributions of platelets and white blood cells (WBCs). Peripheral blood samples were collected from healthy volunteers (N = 14) and centrifuged to prepare PRGF and PRP. Blood cells were counted using an automated hematology analyzer. The effects of PRP and PRGF preparations on cell proliferation were determined using human periosteal cells. In the PRGF preparations, both red blood cells and WBCs were almost completely eliminated, and platelets were concentrated by 2.84‐fold, whereas in the PRP preparations, both platelets and WBCs were similarly concentrated by 8.79‐ and 5.51‐fold, respectively. Platelet counts in the PRGF preparations were positively correlated with platelet counts in the whole blood samples, while the platelet concentration rate was negatively correlated with red blood cell counts in the whole blood samples. In contrast, platelet counts and concentration rates in the PRP preparations were significantly influenced by WBC counts in whole blood samples. The PRP preparations, but not the PRGF preparations, significantly suppressed cell growth at higher doses in vitro. Therefore, these results suggest that PRGF preparations can clearly be distinguished from PRP preparations by both inclusion of WBCs and dose‐dependent stimulation of periosteal cell proliferation in vitro. PMID:29744155

  2. Modeling and monitoring cyclic and linear volatile methylsiloxanes in a wastewater treatment plant using constant water level sequencing batch reactors.

    PubMed

    Wang, De-Gao; Du, Juan; Pei, Wei; Liu, Yongjun; Guo, Mingxing

    2015-04-15

    The fate of cyclic and linear volatile methylsiloxanes (VMSs) was evaluated in a wastewater treatment plant (WWTP) in Dalian, China, that uses constant water level sequencing batch reactors. Influent, effluent, and sewage sludge samples were collected for seven consecutive days. The mean concentrations of cyclic VMSs (cVMSs) in influent and effluent samples were 1.05 μg L(-1) and 0.343 μg L(-1), respectively; the total removal efficiency of VMSs was >60%. Linear VMS (lVMS) concentrations were below the quantification limit in aqueous samples but were found in sludge samples at 90 μg kg(-1). High solid-water partition coefficients result in high VMS concentrations in sludge, with a mean value of 5030 μg kg(-1). No significant differences in the daily mass flows were found between weekend and working-day concentrations. The estimated mass load of total cVMSs is 194 mg d(-1) per 1000 inhabitants. A mass balance model of the WWTP was developed to simulate the fate of cVMSs. The removal by sorption on sludge increases, and the volatilization decreases, with increasing hydrophobicity and decreasing volatility of the cVMSs. Sensitivity analysis shows that the total suspended solid concentration in the effluent, the mixed liquor suspended solid concentration, the sewage sludge flow rate, and the influent flow rate are the most influential parameters on the mass distribution of cVMSs in this WWTP. Copyright © 2015 Elsevier B.V. All rights reserved.
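    The quoted ">60%" total removal follows directly from the influent and effluent means; a one-line check:

```python
def removal_efficiency(c_influent, c_effluent):
    """Fractional removal across the plant from mean concentrations."""
    return 1.0 - c_effluent / c_influent

eff = removal_efficiency(1.05, 0.343)  # ~0.67, consistent with ">60%"
```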

  3. Surface Uplift Rate Constrained by Multiple Terrestrial Cosmogenic Nuclides: Theory and Application from the Central Andean Plateau

    NASA Astrophysics Data System (ADS)

    McPhillips, D. F.; Hoke, G. D.; Niedermann, S.; Wittmann, H.

    2015-12-01

    There is widespread interest in quantifying the growth and decay of topography. However, prominent methods for quantitative determinations of paleoelevation rely on assumptions that are often difficult to test. For example, stable isotope paleoaltimetry relies on the knowledge of past lapse rates and moisture sources. Here, we demonstrate how cosmogenic 10Be - 21Ne and/or 10Be - 26Al sample pairs can be applied to provide independent estimates of surface uplift rate using both published data and new data from the Atacama Desert. Our approach requires a priori knowledge of the maximum age of exposure of the sampled surface. Ignimbrite surfaces provide practical sampling targets. When erosion is very slow (roughly, ≤1 m/Ma), it is often possible to constrain paleo surface uplift rate with precision comparable to that of stable isotopic methods (approximately ±50%). The likelihood of a successful measurement is increased by taking n samples from a landscape surface and solving for one regional paleo surface uplift rate and n local erosion rates. In northern Chile, we solve for surface uplift and erosion rates using three sample groups from the literature (Kober et al., 2007). In the two lower elevation groups, we calculate surface uplift rates of 110 (+60/-12) m/Myr and 160 (+120/-6) m/Myr and estimate uncertainties with a bootstrap approach. The rates agree with independent estimates derived from stream profile analyses nearby (Hoke et al., 2007). Our calculated uplift rates correspond to total uplift of 1200 and 850 m, respectively, when integrated over appropriate timescales. Erosion rates were too high to reliably calculate the uplift rate in the third, high elevation group. New cosmogenic nuclide analyses from the Atacama Desert are in progress, and preliminary results are encouraging. In particular, a replicate sample in the vicinity of the first Kober et al. (2007) group independently yields a surface uplift rate of 110 m/Myr. 
Compared to stable isotope proxies, cosmogenic nuclides potentially provide better constraints on surface uplift in places where assumptions about paleo-atmospheric conditions are hard to constrain and justify. F. S. Kober et al. (2007), Geomorphology, 83, 97-110. G. D. Hoke et al. (2007), Tectonics, 26, doi:10.1029/2006TC002082.
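    The bootstrap uncertainty estimation mentioned above can be sketched generically as a percentile bootstrap on resampled rates (the values below are illustrative, not the Kober et al. data):

```python
import random
import statistics

def bootstrap_ci(values, reps=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the mean of `values`."""
    rng = random.Random(seed)
    means = sorted(statistics.mean([rng.choice(values) for _ in values])
                   for _ in range(reps))
    return means[int(alpha / 2 * reps)], means[int((1 - alpha / 2) * reps) - 1]

lo, hi = bootstrap_ci([95.0, 110.0, 120.0, 105.0, 118.0])  # m/Myr, illustrative
```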

  4. Random errors of oceanic monthly rainfall derived from SSM/I using probability distribution functions

    NASA Technical Reports Server (NTRS)

    Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.

    1993-01-01

    Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategy (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
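    The error scheme described, exploiting independent morning and afternoon estimates of the same monthly mean, rests on Var(single) ≈ Var(difference)/2 for independent estimates of equal variance; a small sketch with made-up values:

```python
import statistics

def single_estimate_rms(pairs):
    """RMS random error of one estimate from independent (am, pm) pairs:
    if the two are independent with equal variance, Var(am - pm) = 2*Var."""
    diffs = [am - pm for am, pm in pairs]
    return (statistics.pvariance(diffs) / 2.0) ** 0.5

err = single_estimate_rms([(5.0, 4.0), (3.5, 4.5), (6.0, 5.2)])
```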

  5. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423

  6. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.
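    The inverse-probability-of-non-missingness weighting at the heart of the estimator can be sketched in a toy discrete form (the paper estimates the non-missingness probability with a kernel smoother; here it is simply supplied):

```python
def ipw_cumulative_hazard(subjects, horizon):
    """Toy Nelson-Aalen-style cumulative hazard in which observed censoring
    indicators are up-weighted by 1/p(indicator observed). Each subject is
    (time, delta, indicator_observed, p_observed); delta may be None."""
    hazard = 0.0
    for t, delta, observed, p in sorted(subjects):
        if t > horizon:
            break
        at_risk = sum(1 for u, *_ in subjects if u >= t)
        if observed and delta == 1:
            hazard += (1.0 / p) / at_risk
    return hazard

subjects = [(1.0, 1, True, 0.5), (2.0, 0, True, 0.5), (3.0, None, False, 0.5)]
h = ipw_cumulative_hazard(subjects, horizon=5.0)  # (1/0.5)/3 = 2/3
```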

  7. Effects of Bedrock Landsliding on Cosmogenically Determined Erosion Rates

    NASA Technical Reports Server (NTRS)

    Niemi, Nathan; Oskin, Mike; Burbank, Douglas; Heimsath, Arjun

    2005-01-01

    The successful quantification of long-term erosion rates underpins our understanding of landscape formation, the topographic evolution of mountain ranges, and the mass balance within active orogens. The measurement of in situ-produced cosmogenic radionuclides (CRNs) in fluvial and alluvial sediments is perhaps the method with the greatest ability to provide such long-term erosion rates. In active orogens, however, deep-seated bedrock landsliding is an important erosional process, the effect of which on CRN-derived erosion rates is largely unquantified. We present a numerical simulation of cosmogenic nuclide production and distribution in landslide-dominated catchments to address the effect of bedrock landsliding on cosmogenic erosion rates in actively eroding landscapes. Results of the simulation indicate that the temporal stability of erosion rates determined from CRN concentrations in sediment decreases with increased ratios of landsliding to sediment detachment rates within a given catchment area, and that larger catchment areas must be sampled with increased frequency of landsliding in order to accurately evaluate long-term erosion rates. In addition, results of this simulation suggest that sediment sampling for CRNs is the appropriate method for determining long-term erosion rates in regions dominated by mass-wasting processes, while bedrock surface sampling for CRNs is generally an ineffective means of determining long-term erosion rates. Response times of CRN concentrations to changes in erosion rate indicate that climatically driven cycles of erosion may be detected relatively quickly after such changes occur, but that complete equilibration of CRN concentrations to new erosional conditions may take tens of thousands of years. Simulation results of CRN erosion rates are compared with a new, rich dataset of CRN concentrations from the Nepalese Himalaya, supporting conclusions drawn from the simulation.
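    The core bias the simulation addresses can be shown with a two-component mixing sketch: landslide-derived sediment carries less 10Be, so mixing it in inflates the apparent erosion rate (production/attenuation constants are lumped into k; all numbers illustrative):

```python
def apparent_rate(f_landslide, conc_landslide, conc_steady, k=1.0e4):
    """Apparent CRN-derived erosion rate from a sediment mixture; the rate
    is inversely proportional to the mixed nuclide concentration."""
    mixed = f_landslide * conc_landslide + (1.0 - f_landslide) * conc_steady
    return k / mixed

baseline = apparent_rate(0.0, 2.0e4, 1.0e5)   # no landslide input
perturbed = apparent_rate(0.5, 2.0e4, 1.0e5)  # half landslide-derived: higher
```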

  8. Clinical validity of prototype personality disorder ratings in adolescents.

    PubMed

    Defife, Jared A; Haggerty, Greg; Smith, Scott W; Betancourt, Luis; Ahmed, Zain; Ditkowsky, Keith

    2015-01-01

    A growing body of research shows that personality pathology in adolescents is clinically distinctive and frequently stable into adulthood. A reliable and useful method for rating personality pathology in adolescent patients has the potential to enhance conceptualization, dissemination, and treatment effectiveness. The aim of this study is to examine the clinical validity of a prototype matching approach (derived from the Shedler Westen Assessment Procedure-Adolescent Version) for quantifying personality pathology in an adolescent inpatient sample. Sixty-six adolescent inpatients and their parents or legal guardians completed forms of the Child Behavior Checklist (CBCL) assessing emotional and behavioral problems. Clinical criterion variables including suicide history, substance use, and fights with peers were also assessed. Patients' individual and group therapists on the inpatient unit completed personality prototype ratings. Prototype diagnoses demonstrated substantial reliability (median intraclass correlation coefficient = .75) across independent ratings from individual and group therapists. Personality prototype ratings correlated with the CBCL scales and clinical criterion variables in anticipated and meaningful ways. As seen in prior research with adult samples, prototype personality ratings show clinical validity across independent clinician raters previously unfamiliar with the approach, and they are meaningfully related to clinical symptoms, behavioral problems, and adaptive functioning.

  9. A Bayesian framework to estimate diversification rates and their variation through time and space

    PubMed Central

    2011-01-01

    Background Patterns of species diversity are the result of speciation and extinction processes, and molecular phylogenetic data can provide valuable information to derive their variability through time and across clades. Bayesian Markov chain Monte Carlo methods offer a promising framework to incorporate phylogenetic uncertainty when estimating rates of diversification. Results We introduce a new approach to estimate diversification rates in a Bayesian framework over a distribution of trees under various constant and variable rate birth-death and pure-birth models, and test it on simulated phylogenies. Furthermore, speciation and extinction rates and their posterior credibility intervals can be estimated while accounting for non-random taxon sampling. The framework is particularly suitable for hypothesis testing using Bayes factors, as we demonstrate analyzing dated phylogenies of Chondrostoma (Cyprinidae) and Lupinus (Fabaceae). In addition, we develop a model that extends the rate estimation to a meta-analysis framework in which different data sets are combined in a single analysis to detect general temporal and spatial trends in diversification. Conclusions Our approach provides a flexible framework for the estimation of diversification parameters and hypothesis testing while simultaneously accounting for uncertainties in the divergence times and incomplete taxon sampling. PMID:22013891
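    As a toy version of the likelihoods involved, a pure-birth (Yule) model has log-likelihood (n−1)·log λ − λ·T for n tips and total branch length T, ignoring conditioning constants, with MLE λ = (n−1)/T. A sketch with illustrative values (this is a simplification of the birth-death models in the paper):

```python
import math

def yule_loglik(rate, n_tips, total_branch_length):
    """Pure-birth log-likelihood up to constants: (n - 1) speciation events
    at `rate`, over `total_branch_length` lineage-time units."""
    return (n_tips - 1) * math.log(rate) - rate * total_branch_length

n, T = 20, 95.0
mle = (n - 1) / T                                            # 0.2 per unit time
log_ratio = yule_loglik(0.2, n, T) - yule_loglik(0.1, n, T)  # favors rate 0.2
```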

  10. Clinical Validity of Prototype Personality Disorder Ratings in Adolescents

    PubMed Central

    DeFife, Jared A.; Haggerty, Greg; Smith, Scott W.; Betancourt, Luis; Ahmed, Zain; Ditkowsky, Keith

    2015-01-01

    A growing body of research shows that personality pathology in adolescents is clinically distinctive and frequently stable into adulthood. A reliable and useful method for rating personality pathology in adolescent patients has the potential to enhance conceptualization, dissemination, and treatment effectiveness. The aim of this study is to examine the clinical validity of a prototype matching approach (derived from the Shedler Westen Assessment Procedure – Adolescent Version) for quantifying personality pathology in an adolescent inpatient sample. Sixty-six adolescent inpatients and their parents or legal guardians completed forms of the Child Behavior Checklist (CBCL) assessing emotional and behavioral problems. Clinical criterion variables including suicide history, substance use, and fights with peers were also assessed. Patients’ individual and group therapists on the inpatient unit completed personality prototype ratings. Prototype diagnoses demonstrated substantial reliability (median ICC = .75) across independent ratings from individual and group therapists. Personality prototype ratings correlated with the CBCL scales and clinical criterion variables in anticipated and meaningful ways. As seen in prior research with adult samples, prototype personality ratings show clinical validity across independent clinician raters previously unfamiliar with the approach, and they are meaningfully related to clinical symptoms, behavioral problems, and adaptive functioning. PMID:25457971

  11. Concentrations and Potential Health Risks of Metals in Lip Products

    PubMed Central

    Liu, Sa; Rojas-Cheatham, Ann

    2013-01-01

    Background: Metal content in lip products has been an issue of concern. Objectives: We measured lead and eight other metals in a convenience sample of 32 lip products used by young Asian women in Oakland, California, and assessed potential health risks related to estimated intakes of these metals. Methods: We analyzed lip products by inductively coupled plasma optical emission spectrometry and used previous estimates of lip product usage rates to determine daily oral intakes. We derived acceptable daily intakes (ADIs) based on information used to determine public health goals for exposure, and compared ADIs with estimated intakes to assess potential risks. Results: Most of the tested lip products contained high concentrations of titanium and aluminum. All examined products had detectable manganese. Lead was detected in 24 products (75%), with an average concentration of 0.36 ± 0.39 ppm, including one sample with 1.32 ppm. When used at the estimated average daily rate, estimated intakes were > 20% of ADIs derived for aluminum, cadmium, chromium, and manganese. In addition, average daily use of 10 products tested would result in chromium intake exceeding our estimated ADI for chromium. For high rates of product use (above the 95th percentile), the percentages of samples with estimated metal intakes exceeding ADIs were 3% for aluminum, 68% for chromium, and 22% for manganese. Estimated intakes of lead were < 20% of ADIs for average and high use. Conclusions: Cosmetics safety should be assessed not only by the presence of hazardous contents, but also by comparing estimated exposures with health-based standards. In addition to lead, metals such as aluminum, cadmium, chromium, and manganese require further investigation. PMID:23674482
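    The risk screening described above is, at its core, a comparison of estimated daily intake (concentration × usage rate) against an acceptable daily intake. A minimal sketch: the 1.32 ppm lead concentration comes from the abstract, but the usage rate and ADI below are hypothetical values chosen only to show the arithmetic.

```python
# ppm in a product equals micrograms of metal per gram of product.
def daily_intake_ug(conc_ppm, usage_g_per_day):
    """Estimated daily oral intake in micrograms per day."""
    return conc_ppm * usage_g_per_day

def fraction_of_adi(conc_ppm, usage_g_per_day, adi_ug_per_day):
    """Screening ratio: estimated intake relative to an acceptable daily intake."""
    return daily_intake_ug(conc_ppm, usage_g_per_day) / adi_ug_per_day

# The study's highest lead concentration (1.32 ppm) at a hypothetical
# usage of 0.024 g/day, against a hypothetical lead ADI of 0.5 ug/day:
frac = fraction_of_adi(1.32, 0.024, adi_ug_per_day=0.5)
```

With these illustrative numbers the ratio lands well under 20% of the ADI, consistent in spirit with the abstract's conclusion for lead.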

  12. In vitro fermentation of alginate and its derivatives by human gut microbiota.

    PubMed

    Li, Miaomiao; Li, Guangsheng; Shang, Qingsen; Chen, Xiuxia; Liu, Wei; Pi, Xiong'e; Zhu, Liying; Yin, Yeshi; Yu, Guangli; Wang, Xin

    2016-06-01

    Alginate (Alg) has a long history as a food ingredient in East Asia. However, the human gut microbes responsible for the degradation of alginate and its derivatives have not been fully understood yet. Here, we report that alginate and the low molecular polymer derivatives of mannuronic acid oligosaccharides (MO) and guluronic acid oligosaccharides (GO) can be completely degraded and utilized at various rates by fecal microbiota obtained from six Chinese individuals. However, the derivative of propylene glycol alginate sodium sulfate (PSS) was not hydrolyzed. The bacteria having a pronounced ability to degrade Alg, MO and GO were isolated from human fecal samples and were identified as Bacteroides ovatus, Bacteroides xylanisolvens, and Bacteroides thetaiotaomicron. Alg, MO and GO can increase the production level of short chain fatty acids (SCFA), but GO generates the highest level of SCFA. Our data suggest that alginate and its derivatives could be degraded by specific bacteria in the human gut, providing the basis for the impacts of alginate and its derivates as special food additives on human health. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Estimating implied rates of discount in healthcare decision-making.

    PubMed

    West, R R; McNabb, R; Thompson, A G H; Sheldon, T A; Grimley Evans, J

    2003-01-01

    To consider whether implied rates of discounting from the perspectives of individual and society differ, and whether implied rates of discounting in health differ from those implied in choices involving finance or "goods". The study comprised first a review of economics, health economics and social science literature and then an empirical estimate of implied rates of discounting in four fields: personal financial, personal health, public financial and public health, in representative samples of the public and of healthcare professionals. Samples were drawn in the former county and health authority district of South Glamorgan, Wales. The public sample was a representative random sample of men and women, aged over 18 years and drawn from electoral registers. The health professional sample was drawn at random with the cooperation of professional leads to include doctors, nurses, professions allied to medicine, public health, planners and administrators. The literature review revealed few empirical studies in representative samples of the population, few direct comparisons of public with private decision-making and few direct comparisons of health with financial discounting. Implied rates of discounting varied widely and studies suggested that discount rates are higher the smaller the value of the outcome and the shorter the period considered. The relationship between implied discount rates and personal attributes was mixed, possibly reflecting the limited nature of the samples. Although there were few direct comparisons, some studies found that individuals apply different rates of discount to social compared with private comparisons and health compared with financial. The present study also found a wide range of implied discount rates, with little systematic effect of age, gender, educational level or long-term illness. 
There was evidence, in both samples, that people chose a lower rate of discount in comparisons made on behalf of society than in comparisons made for themselves. Both public and health professional samples tended to choose lower discount rates in health-related comparisons than in finance-related comparisons. It was also suggested that implied rates of discount, derived from responses to hypothetical questions, can be influenced by detail of question framing. The study suggested that both the lay public and healthcare professionals consider that the discount rate appropriate for public decisions is lower than that for private decisions. This finding suggests that lay people as well as healthcare professionals, used to making decisions on behalf of others, recognise that society is not simply an aggregate of individuals. It also implies a general appreciation that society is more stable and has a more predictable future than does the individual. There is fairly general support for this view in the theoretical literature and limited support in the few previous direct comparisons. Further research is indicated, possibly involving more in-depth interviewing and drawing inference on real, rather than hypothetical choices.
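    The implied discount rate behind such hypothetical choices can be recovered from an indifference point. A minimal sketch, assuming simple annual compounding (a common elicitation model, not necessarily the one used in the study):

```python
# If a respondent is indifferent between receiving X now and Y in t years,
# the implied annual discount rate r satisfies X = Y / (1 + r)**t.
def implied_discount_rate(present_value, future_value, years):
    """Annual rate making the respondent indifferent between the two amounts."""
    return (future_value / present_value) ** (1.0 / years) - 1.0

# Hypothetical choice: indifferent between 100 now and 150 in 5 years.
r = implied_discount_rate(100.0, 150.0, 5)
```

Discounting 150 back at the recovered rate reproduces the 100 exactly, which is the defining property of the indifference point.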

  14. General solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging.

    PubMed

    Nakata, Toshihiko; Ninomiya, Takanori

    2006-10-10

    A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a sampling rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which include photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution of frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal, and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating its high selectivity over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography, and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 x 256 pixel area.
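    The downconversion above relies on the standard aliasing identity: a tone at frequency f sampled at rate fs below the Nyquist limit is indistinguishable from its folded alias. A minimal sketch of generic aliasing (not the paper's full carrier/sideband optimization), with the 70 Hz tone and 100 Hz sampling rate chosen for illustration:

```python
import math

def alias_frequency(f, fs):
    """Frequency to which a tone at f folds when sampled at rate fs."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

# A 70 Hz cosine sampled at fs = 100 Hz is indistinguishable from its
# 30 Hz alias at the sample instants:
fs, f = 100.0, 70.0
fa = alias_frequency(f, fs)
x  = [math.cos(2 * math.pi * f  * n / fs) for n in range(64)]
xa = [math.cos(2 * math.pi * fa * n / fs) for n in range(64)]
```

The two sample sequences agree to floating-point precision, which is exactly the property the undersampling scheme exploits to shift the sideband and carrier into the sensor's band.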

  15. Distribution of petroleum hydrocarbons and toluene biodegradation, Knox Street fire pits, Fort Bragg, North Carolina

    USGS Publications Warehouse

    Harden, S.L.; Landmeyer, J.E.

    1996-01-01

    An investigation was conducted at the Knox Street fire pits, Fort Bragg, North Carolina, to monitor the distribution of toluene, ethylbenzene, and xylene (TEX) in soil vapor, ground water, and ground-water/vapor; to evaluate whether total concentrations of TEX at the site are decreasing with time; and to quantify biodegradation rates of toluene in the unsaturated and saturated zones. Soil-vapor and ground-water samples were collected around the fire pits, and ground-water/vapor samples were collected along the ground-water discharge zone, Beaver Creek, on a monthly basis from June 1994 through June 1995. Concentrations of TEX compounds in these samples were determined with a field gas chromatograph. Laboratory experiments were performed on aquifer sediment samples to measure rates of toluene biodegradation by in situ micro-organisms. Based on field gas chromatographic analytical results, contamination levels of TEX compounds in both soil vapor and ground water appear to decrease downgradient of the fire-pit source area. During the 1-year study period, the observed temporal and spatial trends in soil-vapor TEX concentrations appear to reflect differences in the distribution of TEX among solid, aqueous, and gaseous phases within fuel-contaminated soils in the unsaturated zone. Soil temperature and soil moisture are two important factors that influence the distribution of TEX compounds among the different phases. Because of the short period of data collection, it was not possible to distinguish between seasonal fluctuations in soil-vapor TEX concentrations and an overall net decrease in TEX concentrations at the study site. No seasonal trend was observed in total TEX concentrations for ground-water samples collected at the study site. Although the analytical results could not be used to determine whether ground-water TEX concentrations decreased during the study at a specific location, the data were used to examine rate constants of toluene biodegradation. Based on ground-water toluene concentration data, a maximum rate constant for anaerobic biodegradation of toluene in the saturated zone was estimated to be as low as 0.002 d-1 or as high as 0.026 d-1. Based on analyses of ground-water/vapor samples, toluene was the principal TEX compound identified in ground water discharging to Beaver Creek. Observed decreases in ground-water/vapor toluene concentrations during the study period may reflect a decrease in source inputs, an increase in dilution caused by higher ground-water flow, and/or removal by biological or other physical processes. Rate constants of toluene anaerobic biodegradation determined by laboratory measurements illustrate a typical acclimation response of micro-organisms to hydrocarbon contamination in sediments collected from the site. Toluene biodegradation rate constants derived from laboratory microcosm studies ranged from 0.001 to 0.027 d-1, which is similar to the range of 0.002 to 0.026 d-1 for rate constants derived from ground-water analytical data. The close agreement of toluene biodegradation rate constants obtained by the two approaches offers strong evidence that toluene can be degraded at environmentally significant rates at the study site.
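    First-order rate constants like those reported translate directly into half-lives via t_half = ln(2) / k. Using the 0.002-0.026 d-1 range from the abstract:

```python
import math

# First-order decay: C(t) = C0 * exp(-k * t); half-life t_half = ln(2) / k.
def half_life_days(k_per_day):
    """Half-life in days for a first-order rate constant in d-1."""
    return math.log(2) / k_per_day

# The study's range of 0.002-0.026 d-1 corresponds to toluene half-lives
# of roughly 347 days (slowest) down to about 27 days (fastest):
slow = half_life_days(0.002)
fast = half_life_days(0.026)
```

So even the slowest reported constant implies roughly half the toluene degrading within a year, which is what "environmentally significant rates" amounts to here.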

  16. A Theoretical Basis for Entropy-Scaling Effects in Human Mobility Patterns.

    PubMed

    Osgood, Nathaniel D; Paul, Tuhin; Stanley, Kevin G; Qian, Weicheng

    2016-01-01

    Characterizing how people move through space has been an important component of many disciplines. With the advent of automated data collection through GPS and other location sensing systems, researchers have the opportunity to examine human mobility at spatio-temporal resolution heretofore impossible. However, the copious and complex data collected through these logging systems can be difficult for humans to fully exploit, leading many researchers to propose novel metrics for encapsulating movement patterns in succinct and useful ways. A particularly salient proposed metric is the mobility entropy rate of the string representing the sequence of locations visited by an individual. However, mobility entropy rate is not scale invariant: entropy rate calculations based on measurements of the same trajectory at varying spatial or temporal granularity do not yield the same value, limiting the utility of mobility entropy rate as a metric by confounding inter-experimental comparisons. In this paper, we derive a scaling relationship for mobility entropy rate of non-repeating straight line paths from the definition of Lempel-Ziv compression. We show that the resulting formulation predicts the scaling behavior of simulated mobility traces, and provides an upper bound on mobility entropy rate under certain assumptions. We further show that this formulation has a maximum value for a particular sampling rate, implying that optimal sampling rates for particular movement patterns exist.
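    Mobility entropy rate in this literature is usually computed with a Lempel-Ziv match-length estimator, which is also the definition the scaling derivation above starts from. A simplified sketch of such an estimator (the exact variant is an assumption here; the specific location strings are invented for illustration):

```python
import math
import random

def lz_entropy_rate(seq):
    """Lempel-Ziv (Kontoyiannis-style) entropy-rate estimate, bits/symbol:
    S ~ n*log2(n) / sum(L_i), where L_i is the length of the shortest
    substring starting at position i that does not appear earlier."""
    s = ''.join(map(str, seq))
    n = len(s)
    total = 0
    for i in range(n):
        k = 1
        # Grow the substring until it is no longer found in the history s[:i].
        while i + k <= n and s[i:i + k] in s[:i]:
            k += 1
        total += k
    return n * math.log2(n) / total

rng = random.Random(1)
regular = '01' * 100                                      # highly predictable
noisy = ''.join(rng.choice('01') for _ in range(300))     # unpredictable
h_reg, h_noisy = lz_entropy_rate(regular), lz_entropy_rate(noisy)
```

The predictable trace scores far below the random one, and because the match lengths depend on how finely locations are discretized, the estimate changes with spatial or temporal granularity, which is precisely the scale-dependence the paper analyzes.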

  17. Improving tree age estimates derived from increment cores: a case study of red pine

    Treesearch

    Shawn Fraver; John B. Bradford; Brian J. Palik

    2011-01-01

    Accurate tree ages are critical to a range of forestry and ecological studies. However, ring counts from increment cores, if not corrected for the years between the root collar and coring height, can produce sizeable age errors. The magnitude of errors is influenced by both the height at which the core is extracted and the growth rate. We destructively sampled saplings...
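    The correction referred to amounts to adding the years a sapling needed to grow from the root collar to coring height. A hypothetical sketch, with the ring count, coring height, and sapling growth rate all invented for illustration (the study measured these destructively rather than assuming them):

```python
# Hypothetical coring-height correction: ring counts at coring height miss
# the years spent growing from the root collar up to that height.
def corrected_age(ring_count, coring_height_cm, sapling_growth_cm_per_yr):
    """Ring count plus estimated years to reach coring height."""
    missed_years = coring_height_cm / sapling_growth_cm_per_yr
    return ring_count + missed_years

# e.g. a core at 30 cm on a tree whose saplings gained ~6 cm of height a year:
age = corrected_age(ring_count=95, coring_height_cm=30.0,
                    sapling_growth_cm_per_yr=6.0)
```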

  18. The Derivation of Job Compensation Index Values from the Position Analysis Questionnaire (PAQ). Report No. 6.

    ERIC Educational Resources Information Center

    McCormick, Ernest J.; And Others

    The study deals with the job component method of establishing compensation rates. The basic job analysis questionnaire used in the study was the Position Analysis Questionnaire (PAQ) (Form B). On the basis of a principal components analysis of PAQ data for a large sample (2,688) of jobs, a number of principal components (job dimensions) were…

  19. Deformation of Olivine at Subduction Zone Conditions Determined from In situ Measurements with Synchrotron Radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    H Long; D Weidner; L Li

    2011-12-31

    We report measurements of the deformation stress for San Carlos olivine at pressures of 3-5 GPa, temperatures of 25-1150 C, and strain rates of 10^-7-10^-5 s^-1. We determine a deformation stress of approximately 2.5 GPa that is relatively temperature and strain rate independent in the temperature range of 400-900 C. The deformation experiments have been carried out on a deformation DIA (D-DIA) apparatus, Sam85, at X17B2, NSLS. Powder samples are used in these experiments. Enstatite (MgSiO3) (3-5% of the total quantity of sample) is used as the buffer to control the activity of silica. Ni foil is used in some experiments to buffer the oxygen fugacity. Water content is confirmed by IR spectra of the recovered samples. Samples are compressed at room temperature and are then annealed at 1200 C for at least 2 h before deformation. The total (plastic and elastic) macroscopic strains are derived from direct measurements of the images taken by the X-ray radiograph technique. The differential stresses are derived from the diffraction-determined elastic strains. In the regime of 25-400 C, there is a small decrease of stress at steady state as temperature increases; in the regime of 400 C to the 'transition temperature', the differential stress at steady state (~2.5 GPa) is relatively insensitive to changes of temperature and strain rate; however, it drastically decreases to about 1 GPa and becomes temperature-dependent above the transition temperature. The transition temperature is near 900 C. Above the transition temperature, the flow agrees with power-law creep measurements of previous investigations. The anisotropy of differential stress on individual planes indicates that the deformation of olivine at low temperature is dominated by [0 0 1](1 0 0). Accounting for the slower strain rate in the natural system, the transition temperature for olivine in the slab is most likely in the range of 570-660 C.

  20. THE ARECIBO LEGACY FAST ALFA SURVEY: THE GALAXY POPULATION DETECTED BY ALFALFA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Shan; Haynes, Martha P.; Giovanelli, Riccardo

    Making use of H I 21 cm line measurements from the ALFALFA survey (α.40) and photometry from the Sloan Digital Sky Survey (SDSS) and Galaxy Evolution Explorer (GALEX), we investigate the global scaling relations and fundamental planes linking stars and gas for a sample of 9417 common galaxies: the α.40-SDSS-GALEX sample. In addition to their H I properties derived from the ALFALFA data set, stellar masses (M*) and star formation rates (SFRs) are derived from fitting the UV-optical spectral energy distributions. 96% of the α.40-SDSS-GALEX galaxies belong to the blue cloud, with the average gas fraction f_HI ≡ M_HI/M* ~ 1.5. A transition in star formation (SF) properties is found whereby below M* ~ 10^9.5 M_sun, the slope of the star-forming sequence changes, the dispersion in the specific star formation rate (SSFR) distribution increases, and the star formation efficiency (SFE) mildly increases with M*. The evolutionary track in the SSFR-M* diagram, as well as that in the color-magnitude diagram, is linked to the H I content; below this transition mass, the SF is regulated strongly by the H I. Comparison of H I and optically selected samples over the same restricted volume shows that the H I-selected population is less evolved and has overall higher SFR and SSFR at a given stellar mass, but lower SFE and extinction, suggesting either that a bottleneck exists in the H I-to-H2 conversion or that the process of SF in the very H I-dominated galaxies obeys an unusual, low-efficiency SF law. A trend is found that, for a given stellar mass, high gas fraction galaxies reside preferentially in dark matter halos with high spin parameters. Because it represents a full census of H I-bearing galaxies at z ~ 0, the scaling relations and fundamental planes derived for the ALFALFA population can be used to assess the H I detection rate by future blind H I surveys and intensity mapping experiments at higher redshift.

  1. In-situ temperature-controllable shear flow device for neutron scattering measurement—An example of aligned bicellar mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yan; Li, Ming; Kučerka, Norbert

    We have designed and constructed a temperature-controllable shear flow cell for in-situ study of flow-alignable systems. The device has been tested in neutron diffraction and has the potential to be applied in the small-angle neutron scattering configuration to characterize the nanostructures of materials under flow. The required sample amount is as small as 1 ml. The shear rate on the sample is controlled by the flow rate produced by an external pump and can potentially vary from 0.11 to 3.8 × 10^5 s^-1. Both unidirectional and oscillational flows are achievable by the setting of the pump. The instrument is validated using a lipid bicellar mixture, which yields non-alignable nanodisc-like bicelles at low T and shear-alignable membranes at high T. Using the shear cell, the bicellar membranes can be aligned at 31 °C under flow with a shear rate of 11.11 s^-1. Multiple high-order Bragg peaks are observed, and the full width at half maximum of the "rocking curve" around the Bragg condition is found to be 3.5°-4.1°. It is noteworthy that a portion of the membranes remains aligned even after the flow stops. A detailed and comprehensive intensity correction for the rocking curve has been derived based on the finite rectangular sample geometry and the absorption of the neutrons as a function of sample angle [see supplementary material at http://dx.doi.org/10.1063/1.4908165 for the detailed derivation of the absorption correction]. The device offers a new capability to study the conformational or orientational anisotropy of solvated macromolecules or aggregates induced by hydrodynamic interaction in a flow field.

  2. The Xenon record of Earth's early differentiation

    NASA Astrophysics Data System (ADS)

    Peto, M. K.; Mukhopadhyay, S.; Kelley, K. A.

    2011-12-01

    Xenon isotopes in mantle-derived rocks provide information on the early differentiation of the silicate mantle of our planet. 131,132,134,136Xe isotopes are produced by the spontaneous fission of two different elements: the now-extinct radionuclide 244Pu and the long-lived 238U. These two parent nuclides, however, yield rather different proportions of fissiogenic xenon isotopes. Hence, the proportion of Pu- to U-derived fission xenon is indicative of the degree and rate of outgassing of a mantle reservoir. Recent data obtained from Iceland in our lab confirm that the xenon isotopic composition of the plume source(s) is characterized by lower 136Xe/130Xe ratios than the MORB source and that the Iceland plume is more enriched in the Pu-derived xenon component. These features are interpreted as reflecting different degrees of outgassing and appear not to be the result of preferential recycling of xenon to the deep mantle. To further investigate how representative the Icelandic measurements might be of other mantle plumes, we measured noble gases (He, Ne, Ar, Xe) in gas-rich basalt glasses from the Rochambeau Ridge (RR) in the Northern Lau Basin. Recent work suggests the presence of a "Samoan-like" OIB source in the northern Lau Basin, and our measurements were performed on samples with plume-like 3He/4He ratios (15-28 RA) [1]. The xenon isotopic measurements indicate that the maximum measured 136Xe/130Xe ratios in the Rochambeau samples are similar to Iceland. In particular, for one of the gas-rich samples we were able to obtain 77 different isotopic measurements through step-crushing. Preliminary investigation of this sample suggests higher Pu- to U-derived fission xenon than in MORBs. To quantitatively evaluate the degree and rate of outgassing of the plume and MORB reservoirs, particularly during the first few hundred million years of Earth's history, we have modified a geochemical reservoir model that was previously developed to investigate mantle overturn and mixing from He, Ar and lithophile isotopes [2]. We will present the results from this geochemical reservoir model, which is constrained by our high-precision dataset from the Rochambeau Rift (Northern Lau Basin) and Iceland along with the xenon dataset from popping rock [3]. [1] Lupton et al., GRL, 2009. [2] Gonnermann and Mukhopadhyay, Nature, 2009. [3] Kunz et al., Science, 1998.

  3. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    PubMed

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample-size-adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample-size-adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
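    Conditional power at an interim analysis, the quantity these criteria are expressed in, has a closed form for a normal test statistic. A sketch under the common "current trend" assumption (the observed interim effect continues), with a one-sided alpha of 0.025 assumed; this illustrates the quantity, not the article's specific criterion:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z_t, t, z_alpha=1.959964):
    """Conditional power under the 'current trend' assumption.

    z_t: interim test statistic at information fraction t (0 < t < 1).
    The final statistic decomposes as Z_1 = sqrt(t)*z_t + sqrt(1-t)*Z_inc;
    projecting the interim trend forward gives the expected increment drift."""
    drift = z_t * math.sqrt(1.0 - t) / math.sqrt(t)
    return 1.0 - phi((z_alpha - math.sqrt(t) * z_t) / math.sqrt(1.0 - t) - drift)

cp_promising = conditional_power(z_t=1.5, t=0.5)  # trend above threshold -> CP > 50%
cp_null = conditional_power(z_t=0.0, t=0.5)       # no interim signal -> CP tiny
```

At t = 0.5, an interim z of 1.5 projects to a final z of about 2.12, so conditional power just clears the 50% bound discussed in the abstract.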

  4. Genotypic Characterization of Human Immunodeficiency Virus Type 1 Derived from Antiretroviral Therapy-Naive Individuals Residing in Sorong, West Papua.

    PubMed

    Witaningrum, Adiana Mutamsari; Kotaki, Tomohiro; Khairunisa, Siti Qamariyah; Yunifiar M, Muhammad Qushai; Indriati, Dwi Wahyu; Bramanthi, Rendra; Nasronudin; Kameoka, Masanori

    2016-08-01

    Papua and West Papua provinces have the highest prevalence rate of human immunodeficiency virus type 1 (HIV-1) infection in Indonesia; however, data on the molecular epidemiology of HIV-1 are limited. We conducted a genotypic study on HIV-1 genes derived from antiretroviral therapy-naive individuals residing in Sorong, West Papua. HIV-1 genomic fragments were amplified from 43 peripheral blood samples, and sequencing analysis of the genes was carried out. Of the 43 samples, 41 protease (PR), 31 reverse transcriptase (RT), 26 gag, and 25 env genes were sequenced. HIV-1 subtyping revealed that CRF01_AE (48.8%, 21/43) and subtype B (41.9%, 18/43) were the major subtypes prevalent in the region, whereas other recombinant forms were also detected. Major drug resistance-associated mutations for PR inhibitors were not detected; however, mutations for the RT inhibitors, A62V and E138A, appeared in a few samples, indicating the possible emergence of transmitted HIV-1 drug resistance in Sorong, West Papua.

  5. Capacitance-level/density monitor for fluidized-bed combustor

    DOEpatents

    Fasching, George E.; Utt, Carroll E.

    1982-01-01

    A multiple segment three-terminal type capacitance probe with segment selection, capacitance detection and compensation circuitry and read-out control for level/density measurements in a fluidized-bed vessel is provided. The probe is driven at a high excitation frequency of up to 50 kHz to sense quadrature (capacitive) current related to probe/vessel capacitance while being relatively insensitive to the resistance current component. Compensation circuitry is provided for generating a negative current of equal magnitude to cancel out only the resistive component current. Clock-operated control circuitry separately selects the probe segments in a predetermined order for detecting and storing this capacitance measurement. The selected segment acts as a guarded electrode and is connected to the read-out circuitry while all unselected segments are connected to the probe body, which together form the probe guard electrode. The selected probe segment capacitance component signal is directed to a corresponding segment channel sample and hold circuit dedicated to that segment to store the signal derived from that segment. This provides parallel outputs for display, computer input, etc., for the detected capacitance values. The rate of segment sampling may be varied to either monitor the dynamic density profile of the bed (high sampling rate) or monitor average bed characteristics (slower sampling rate).

  6. Evaluation of procedures for estimating ruminal particle turnover and diet digestibility in ruminant animals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, R.C.

    1985-01-01

    Procedures used in estimating ruminal particle turnover and diet digestibility were evaluated in a series of independent experiments. Experiments 1 and 2 evaluated the influence of sampling site, mathematical model, and intraruminal mixing on estimates of ruminal particle turnover in beef steers grazing crested wheatgrass or offered ad libitum levels of prairie hay once daily, respectively. Particle turnover rate constants were estimated by intraruminal administration (via rumen cannula) of ytterbium (Yb)-labeled forage, followed by serial collection of rumen digesta or fecal samples. Rumen Yb concentrations were transformed to natural logarithms and regressed on time. The influence of sampling site (rectum versus rumen) on turnover estimates was modified by the model used to fit fecal marker excretion curves in the grazing study. In contrast, estimated turnover rate constants from rumen sampling were smaller (P < 0.05) than rectally derived rate constants, regardless of the fecal model used, when steers were fed once daily. In Experiment 3, in vitro residues subjected to acid or neutral detergent fiber extraction (IVADF and IVNDF), acid detergent fiber incubated in cellulase (ADFIC), and acid detergent lignin (ADL) were evaluated as internal markers for predicting diet digestibility. Both IVADF and IVNDF displayed variable accuracy for prediction of in vivo digestibility, whereas ADL and ADFIC inaccurately predicted digestibility of all diets.
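    The rate-constant estimation described (regressing natural-log marker concentration on time) reduces to an ordinary least-squares slope. A sketch with synthetic Yb concentrations generated from an assumed turnover constant of 0.05 h-1, not study data:

```python
import math

def turnover_rate_constant(times_h, marker_conc):
    """Negative least-squares slope of ln(concentration) on time, i.e. the
    first-order particle turnover rate constant (fraction per hour)."""
    y = [math.log(c) for c in marker_conc]
    n = len(times_h)
    tbar = sum(times_h) / n
    ybar = sum(y) / n
    slope = (sum((t - tbar) * (v - ybar) for t, v in zip(times_h, y))
             / sum((t - tbar) ** 2 for t in times_h))
    return -slope

# Illustrative Yb decline after a single pulse dose, true k = 0.05 h-1:
times = [0, 6, 12, 24, 36, 48]
conc = [100.0 * math.exp(-0.05 * t) for t in times]
k_hat = turnover_rate_constant(times, conc)
```

On noise-free exponential data the regression recovers the generating constant exactly; with real sampling noise the choice of sampling site and model affects the fitted slope, which is what the experiments compare.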

  7. Eigenvalue sensitivity of sampled time systems operating in closed loop

    NASA Astrophysics Data System (ADS)

    Bernal, Dionisio

    2018-05-01

    The use of feedback to create closed-loop eigenstructures with high sensitivity has received some attention in the Structural Health Monitoring field. Although practical implementation is necessarily digital, and thus in sampled time, work thus far has centered on the continuous time framework, both in design and in checking performance. It is shown in this paper that the performance in discrete time, at typical sampling rates, can differ notably from that anticipated in the continuous time formulation, and that discrepancies can be particularly large in the real part of the eigenvalue sensitivities; a consequence is significant error in the (linear) estimate of the level of damage at which closed-loop stability is lost. As one anticipates, explicit consideration of the sampling rate poses no special difficulties in the closed-loop eigenstructure design, and the relevant expressions are developed in the paper, including a formula for the efficient evaluation of the derivative of the matrix exponential based on the theory of complex perturbations. The paper presents an easily reproduced numerical example showing the level of error that can result when the discrete time implementation of the controller is not considered.
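
    The complex-perturbation idea mentioned above (a complex-step derivative of the sampled-time system matrix) can be illustrated with a minimal sketch. The one-degree-of-freedom plant, 100 Hz sampling rate, and stiffness parameter below are assumptions for demonstration, not the paper's example; a small Taylor-series matrix exponential stands in for a production routine:

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential by truncated Taylor series; adequate here because
    ||M|| is small (use scipy.linalg.expm for general matrices)."""
    out = np.eye(M.shape[0], dtype=M.dtype)
    term = np.eye(M.shape[0], dtype=M.dtype)
    for j in range(1, terms):
        term = term @ M / j
        out = out + term
    return out

def A(k):
    """Continuous-time state matrix of a 1-DOF oscillator with stiffness k
    (an illustrative plant, not the paper's example)."""
    m, c = 1.0, 0.4
    return np.array([[0.0, 1.0], [-k / m, -c / m]], dtype=complex)

dt = 0.01        # sampling interval: 100 Hz, an assumed rate
h = 1e-20        # complex-step size; free of subtractive cancellation
k0 = 100.0

Ad = expm_taylor(A(k0) * dt).real                   # sampled-time system matrix
dAd_dk = expm_taylor(A(k0 + 1j * h) * dt).imag / h  # complex-step derivative

# Central-difference check of the derivative:
eps = 1e-6
fd = (expm_taylor(A(k0 + eps) * dt).real
      - expm_taylor(A(k0 - eps) * dt).real) / (2 * eps)
```

    Because the matrix exponential is analytic, the imaginary part of the perturbed exponential divided by the step size recovers the derivative to machine precision, without the cancellation error of finite differences.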

  8. Ion-probe U–Pb dating of authigenic and detrital opal from Neogene-Quaternary alluvium

    USGS Publications Warehouse

    Neymark, Leonid; Paces, James B.

    2013-01-01

    Knowing depositional ages of alluvial fans is essential for many tectonic, paleoclimatic, and geomorphic studies in arid environments. The use of U–Pb dating of secondary silica to establish the age of Neogene-Quaternary clastic sediments was tested on samples of authigenic and detrital opal and chalcedony from depths of ∼25 to 53 m in boreholes at Midway Valley, Nevada. Dating of authigenic opal present as rinds on rock clasts and in calcite/silica cements establishes minimum ages of alluvium deposition; dating of detrital opal or chalcedony derived from the source volcanic rocks gives the maximum age of sediment deposition. Materials analyzed included 12 samples of authigenic opal, one sample of fracture-coating opal from bedrock, one sample of detrital opal, and two samples of detrital chalcedony. Uranium–lead isotope data were obtained by both thermal ionization mass spectrometry and ion microprobe. Uranium concentrations ranged from tens to hundreds of μg/g. Relatively large U/Pb ratios allowed calculation of 206Pb/238U ages that ranged from 1.64±0.36 (2σ) to 6.16±0.50 Ma for authigenic opal and from 8.34±0.28 to 11.2±1.3 Ma for detrital opal/chalcedony. Three samples with the most radiogenic Pb isotope compositions also allowed calculation of 207Pb/235U ages, which were concordant with the 206Pb/238U ages from the same samples. These results indicate that basin development at Midway Valley was initiated between about 8 and 6 Ma, and that the basin was filled at long-term average deposition rates of less than 1 cm/ka. Because alluvium in Midway Valley was derived from adjacent highlands at Yucca Mountain, the low rates of deposition determined in this study may imply a slow rate of erosion of Yucca Mountain. Volcanic strata underlying the basin are offset by a number of buried faults to a greater degree than the relatively smooth-sloping bedrock/alluvium contact. These geologic relations indicate that movement on most faults ceased prior to erosional planation and burial. Therefore, ages of the authigenic opal from basal alluvium indicate that the last movement on buried faults was older than about 6 Ma.
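
    The 206Pb/238U ages quoted above follow from the standard radiogenic-ingrowth relation t = ln(1 + 206Pb*/238U) / λ238. A minimal sketch (the decay constant is the accepted value; the ratio below is back-computed purely for illustration):

```python
import math

LAMBDA_238 = 1.55125e-10   # 238U decay constant, 1/yr

def pb206_u238_age(ratio):
    """Age (years) from the radiogenic 206Pb*/238U atomic ratio."""
    return math.log(1.0 + ratio) / LAMBDA_238

# Round-trip check against the 6.16 Ma authigenic opal age quoted above:
ratio_6_16_Ma = math.exp(LAMBDA_238 * 6.16e6) - 1.0
age = pb206_u238_age(ratio_6_16_Ma)
```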

  9. Measurement of backbone hydrogen-deuterium exchange in the type III secretion system needle protein PrgI by solid-state NMR

    NASA Astrophysics Data System (ADS)

    Chevelkov, Veniamin; Giller, Karin; Becker, Stefan; Lange, Adam

    2017-10-01

    In this report we present site-specific measurements of amide hydrogen-deuterium exchange rates in a protein in the solid state by MAS NMR. Employing perdeuteration, proton detection, and a high external magnetic field, we could adopt the highly efficient Relax-EXSY protocol previously developed for liquid-state NMR. Following this method, we measured the contribution of hydrogen exchange to apparent 15N longitudinal relaxation rates in samples with differing D2O buffer content. Differences in the apparent T1 times allowed us to derive exchange rates for multiple residues in the type III secretion system needle protein.

  10. Star Formation Rates in Cooling Flow Clusters: A UV Pilot Study with Archival XMM-Newton Optical Monitor Data

    NASA Technical Reports Server (NTRS)

    Hicks, A. K.; Mushotzky, R.

    2006-01-01

    We have analyzed XMM-Newton Optical Monitor (OM) UV (180-400 nm) data for a sample of 33 galaxies, 30 of which are cluster members; nine of these are central cluster galaxies (CCGs) in cooling flow clusters with mass deposition rates spanning 8-525 solar masses per year. By comparing the ratio of UV to 2MASS J-band fluxes, we find a significant UV excess in many, but not all, cooling flow CCGs, a finding consistent with the outcome of previous studies based on optical imaging data (McNamara & O'Connell 1989; Cardiel, Gorgas, & Aragon-Salamanca 1998; Crawford et al. 1999). This UV excess is a direct indication of the presence of young massive stars, and therefore recent star formation, in these galaxies. Using the Starburst99 spectral energy distribution (SED) model of continuous star formation over a 900 Myr period, we derive star formation rates of 0.2-219 solar masses per year for the cooling flow sample. For two-thirds of this sample it is possible to equate Chandra/XMM cooling flow mass deposition rates with UV-inferred star formation rates for some combination of starburst lifetime and IMF slope. This is a pilot study of the well-populated XMM UV cluster archive, and a more extensive follow-up study is currently underway.

  11. The red and blue galaxy populations in the GOODS field: evidence for an excess of red dwarfs

    NASA Astrophysics Data System (ADS)

    Salimbeni, S.; Giallongo, E.; Menci, N.; Castellano, M.; Fontana, A.; Grazian, A.; Pentericci, L.; Trevese, D.; Cristiani, S.; Nonino, M.; Vanzella, E.

    2008-01-01

    Aims: We study the evolution of the galaxy population up to z ~ 3 as a function of its colour properties. In particular, luminosity functions and luminosity densities were derived as a function of redshift for the blue/late and red/early populations. Methods: We use data from the GOODS-MUSIC catalogue, which has typical magnitude limits z850 ≤ 26 and Ks ≤ 23.5 for most of the sample. About 8% of the galaxies have spectroscopic redshifts; the remainder have well-calibrated photometric redshifts derived from the extremely wide multi-wavelength coverage in 14 bands (from the U band to the Spitzer 8 μm band). We have derived a catalogue of galaxies complete in the rest-frame B band, which has been divided into two subsamples according to their rest-frame U-V colour (or derived specific star formation rate) properties. Results: We confirm a bimodality in the U-V colour and specific star formation rate of the galaxy sample up to z ~ 3. This bimodality is used to compute the luminosity functions of the blue/late and red/early subsamples. The luminosity functions of the blue/late and total samples are well represented by steep Schechter functions evolving in luminosity with increasing redshift. The volume density of the luminosity functions of the red/early populations decreases with increasing redshift. The shape of the red/early luminosity functions shows an excess of faint red dwarfs with respect to the extrapolation of a flat Schechter function and can be represented by the sum of two Schechter functions. Our model for galaxy formation in the hierarchical clustering scenario, which also includes external feedback due to a diffuse UV background, shows broad agreement with the luminosity functions of both populations, with the largest discrepancies at the faint end for the red population. Hints on the nature of the red dwarf population are given on the basis of their stellar mass and spatial distributions.
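
    The Schechter parameterization referred to above, and the two-component sum used to capture the faint-end dwarf excess of the red/early luminosity function, look like this in code; all parameter values are placeholders, not the fitted values from the paper:

```python
import numpy as np

def schechter(L, phi_star, L_star, alpha):
    """Schechter luminosity function: number density per unit luminosity,
    phi(L) = (phi*/L*) (L/L*)^alpha exp(-L/L*)."""
    x = np.asarray(L, dtype=float) / L_star
    return (phi_star / L_star) * x ** alpha * np.exp(-x)

def double_schechter(L, p1, L1, a1, p2, L2, a2):
    """Sum of two Schechter components, as used for a red/early LF with a
    faint-end excess (parameters here are illustrative placeholders)."""
    return schechter(L, p1, L1, a1) + schechter(L, p2, L2, a2)

# Value at the characteristic luminosity, with placeholder parameters:
phi_at_Lstar = schechter(1e10, phi_star=1e-3, L_star=1e10, alpha=-1.3)
```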

  12. Crustal subsidence rate off Hawaii determined from 234U/238U ages of drowned coral reefs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ludwig, K.R.; Szabo, B.J.; Simmons, K.R.

    1991-02-01

    A series of submerged coral reefs off northwestern Hawaii was formed during (largely glacial) intervals when the rate of local sea-level rise was less than the maximum upward growth rate of the reefs. Mass-spectrometric 234U/238U ages for samples from six such reefs range from 17 to 475 ka and indicate that this part of the Hawaiian Ridge has been subsiding at a roughly uniform rate of 2.6 mm/yr for the past 475 ka. The 234U/238U ages are in general agreement with model ages of reef drowning (based on estimates of paleo-sea-level stands derived from oxygen-isotope ratios of deep-sea sediments), but there are disagreements in detail. The high attainable precision (±10 ka or better on samples younger than ~800 ka), large applicable age range, relative robustness against open-system behavior, and ease of analysis for this technique hold great promise for future applications of dating of 50-1,000 ka coral.
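
    The roughly uniform subsidence implied by the reef ages can be recovered with a simple age-depth linear fit. The (age, depth) pairs below are illustrative values constructed around a 2.6 mm/yr trend with a constant sea-level offset, not the measured reef depths from the study:

```python
import numpy as np

# Illustrative age-depth pairs (ka, m); constructed, not the study's data.
ages_ka = np.array([17.0, 133.0, 236.0, 360.0, 475.0])
depths_m = np.array([164.0, 466.0, 734.0, 1056.0, 1355.0])

# The slope in m/ka is numerically equal to the subsidence rate in mm/yr;
# the intercept absorbs the (assumed constant) drowning depth offset.
rate_mm_per_yr, offset_m = np.polyfit(ages_ka, depths_m, 1)
```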

  13. A method for the measurement of atmospheric HONO based on DNPH derivatization and HPLC analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, X.; Qiao, H.; Deng, G.

    1999-10-15

    A simple measurement technique was developed for atmospheric HONO based on aqueous scrubbing using a coil sampler followed by 2,4-dinitrophenylhydrazine (DNPH) derivatization and high-performance liquid chromatographic (HPLC) analysis. Quantitative sampling efficiency was obtained using a 1 mM phosphate buffer, pH 7.0, as the scrubbing solution at a gas sampling flow rate of 2 L min⁻¹ and a liquid flow rate of 0.24 mL min⁻¹. Derivatization of the scrubbed nitrous acid by DNPH was fast and was completed within 5 min in a derivatization medium containing 300 µM DNPH and 8 mM HCl at 45 °C. The azide derivative was separated from the DNPH reagent and carbonyl derivatives by reverse-phase HPLC and was detected with a UV detector at 309 nm. The detection limit is ≤5 pptv and may be lowered to 1 pptv with further DNPH purification. Interferences from NO, NO2, PAN, O3, HNO3, and HCHO were studied and found to be negligible. Ambient HONO concentration was measured simultaneously in downtown Albany, NY, by this method and by an ion chromatographic technique after sampling using a fritted bubbler. The results, from 70 pptv during the day to 1.7 ppbv in the early morning, were in very good agreement between the two techniques, within ±20%.

  14. The healing effect of bone marrow-derived stem cells in acute radiation syndrome.

    PubMed

    Mortazavi, Seyed Mohammad Javad; Shekoohi-Shooli, Fatemeh; Aghamir, Seyed Mahmood Reza; Mehrabani, Davood; Dehghanian, Amirreza; Zare, Shahrokh; Mosleh-Shirazi, Mohammad Amin

    2016-01-01

    To determine the effect of bone marrow-derived mesenchymal stem cells (BMSCs) on regeneration of bone marrow and intestinal tissue and on survival rate in experimental mice with acute radiation syndrome (ARS). Forty mice were randomly divided into two equal groups: group A receiving no BMSC transplantation and group B receiving BMSCs. BMSCs were isolated from the bone marrow and cultured in DMEM media. Both groups were irradiated with 10 Gy (dose rate 0.28 Gy/min) of (60)Co over 35 minutes with a 35×35 field covering the whole body. Twenty-four hours after γ irradiation, 150×10(3) cells of passage 5 in 150 µl medium were injected intravenously into the tail. Animals were euthanized one and two weeks after cell transplantation and evaluated histologically for changes in bone marrow and intestinal tissues. The survival rate in mice was also determined. A significant increase in bone marrow cell count and survival rate was observed in group B in comparison to group A. Histological findings indicated healing in the sampled tissues. BMSCs could significantly reduce the side effects of ARS and increase survival and healing in injured tissue. As such, their transplantation may open a window in the treatment of patients with ARS.

  15. Abundant carbon in the mantle beneath Hawai`i

    NASA Astrophysics Data System (ADS)

    Anderson, Kyle R.; Poland, Michael P.

    2017-09-01

    Estimates of carbon concentrations in Earth's mantle vary over more than an order of magnitude, hindering our ability to understand mantle structure and mineralogy, partial melting, and the carbon cycle. CO2 concentrations in mantle-derived magmas supplying hotspot ocean island volcanoes yield our most direct constraints on mantle carbon, but are extensively modified by degassing during ascent. Here we show that undegassed magmatic and mantle carbon concentrations may be estimated in a Bayesian framework using diverse geologic information at an ocean island volcano. Our CO2 concentration estimates do not rely upon complex degassing models, geochemical tracer elements, assumed magma supply rates, or rare undegassed rock samples. Rather, we couple volcanic CO2 emission rates with probabilistic magma supply rates, which are obtained indirectly from magma storage and eruption rates. We estimate that the CO2 content of mantle-derived magma supplying Hawai`i's active volcanoes is 0.97 (+0.25/-0.19) wt% (roughly 40% higher than previously believed) and is supplied from a mantle source region with a carbon concentration of 263 (+81/-62) ppm. Our results suggest that mantle plumes and ocean island basalts are carbon-rich. Our data also shed light on helium isotope abundances and CO2/Nb ratios, and may imply higher CO2 emission rates from ocean island volcanoes.
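
    The core of the approach (coupling a CO2 emission rate with a probabilistic magma supply rate) can be caricatured with a simple Monte Carlo propagation. This is a sketch only: the input distributions below are invented placeholders, not the paper's posteriors, and the full Bayesian treatment is far richer:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Illustrative input distributions (NOT the paper's posteriors):
co2_t_per_day = rng.normal(8000.0, 800.0, N)   # volcanic CO2 emission rate
supply_km3_yr = rng.normal(0.21, 0.03, N)      # magma supply rate (DRE)
RHO_MAGMA = 2700.0                             # magma density, kg/m^3

# Magma mass supplied per day, in tonnes:
magma_t_per_day = supply_km3_yr * 1e9 * RHO_MAGMA / 1000.0 / 365.25

# Undegassed CO2 content of the supplied magma, wt%, with its 16-84% band:
co2_wt_pct = 100.0 * co2_t_per_day / magma_t_per_day
lo, med, hi = np.percentile(co2_wt_pct, [16, 50, 84])
```

    Dividing two uncertain quantities this way yields the asymmetric error bars of the kind quoted in the abstract.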

  16. Investigation of polar organic chemical integrative sampler (POCIS) flow rate dependence for munition constituents in underwater environments.

    PubMed

    Lotufo, Guilherme R; George, Robert D; Belden, Jason B; Woodley, Christa M; Smith, David L; Rosen, Gunther

    2018-02-24

    Munition constituents (MC) are present in aquatic environments throughout the world. Potential for fluctuating release with low residence times may cause concentrations of MC to vary widely over time at contaminated sites. Recently, polar organic chemical integrative samplers (POCIS) have been demonstrated to be valuable tools for the environmental exposure assessment of MC in water. Flow rate is known to influence sampling by POCIS. Because POCIS sampling rates (Rs) for MC have only been determined under quasi-static conditions, the present study evaluated the uptake of 2,4,6-trinitrotoluene (TNT), RDX (hexahydro-1,3,5-trinitro-1,3,5-triazine), and 2,4- and 2,6-dinitrotoluene (DNT) by POCIS in a controlled water flume at 7, 15, and 30 cm/s in 10-day experiments, using samplers both with and without a protective cage. Sampling rate increased with flow rate for all MC investigated, but flow rate had the strongest impact on TNT and the weakest impact on RDX. For uncaged POCIS, mean Rs at 30 cm/s was significantly higher than at 7 cm/s, by factors of 2.7, 1.9, 1.9, and 1.3 for TNT, 2,4-DNT, 2,6-DNT, and RDX, respectively. For all MC except RDX, mean Rs for caged POCIS at 7 cm/s were significantly lower than for uncaged samplers and similar to those measured under quasi-static conditions; except for 2,6-DNT, however, no caging effect was measured at the highest flow rate, indicating that the impact of caging on Rs is flow rate-dependent. When flow rates are known, flow rate-specific Rs values should be used for generating POCIS-derived time-averaged concentrations of MC at contaminated sites.
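
    The quantity that Rs ultimately feeds into is the standard integrative-sampler relation Cw = M / (Rs · t). A minimal sketch with assumed numbers (the Rs value and accumulated mass below are hypothetical, not the study's measured rates):

```python
def pocis_twa_ug_per_L(mass_ng, rs_L_per_day, days):
    """Time-weighted average water concentration (ug/L) from the analyte
    mass accumulated on the sorbent, assuming integrative (linear) uptake:
    Cw = M / (Rs * t)."""
    return (mass_ng / 1000.0) / (rs_L_per_day * days)

# Assumed numbers: 120 ng of TNT accumulated over a 10-day deployment,
# with a hypothetical flow-specific Rs of 0.20 L/day.
c_ug_per_L = pocis_twa_ug_per_L(120.0, 0.20, 10.0)   # -> 0.06 ug/L
```

    Using a quasi-static Rs when the true flow is 30 cm/s would, per the factors above, overestimate the TNT concentration by roughly the same factor.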

  17. The star-forming history of the young cluster NGC 2264

    NASA Technical Reports Server (NTRS)

    Adams, M. T.; Strom, K. M.; Strom, S. E.

    1983-01-01

    UBVRI and H-alpha photographic photometry was obtained for a sample of low-mass stars in the young open cluster NGC 2264 in order to investigate the star-forming history of this region. A theoretical H-R diagram was constructed for the sample of probable cluster members, with isochrones and evolutionary tracks adopted from Cohen and Kuhi (1979). Evidence was found for a significant age spread in the cluster, amounting to over ten million years. In addition, the derived star formation rate as a function of stellar mass suggests that star formation in NGC 2264 has proceeded sequentially in time from the lowest to the highest masses. The low-mass cluster stars were the first cluster members to form in significant numbers, although their present birth rate is much lower than it was about ten million years ago. The star formation rate has risen to a peak at successively higher masses and then declined.

  18. Potential of on-line visible and near infrared spectroscopy for measurement of pH for deriving variable rate lime recommendations.

    PubMed

    Tekin, Yücel; Kuang, Boyan; Mouazen, Abdul M

    2013-08-08

    This paper aims at exploring the potential of visible and near infrared (vis-NIR) spectroscopy for on-line measurement of soil pH, with the intention to produce variable rate lime recommendation maps. An on-line vis-NIR soil sensor mounted on a frame was used in this study. Lime application maps based on pH predicted by vis-NIR techniques were compared with maps based on traditional lab-measured pH. The validation of the calibration model using off-line spectra provided excellent prediction accuracy of pH (R2 = 0.85, RMSEP = 0.18 and RPD = 2.52), as compared to the very good accuracy obtained with the on-line measured spectra (R2 = 0.81, RMSEP = 0.20 and RPD = 2.14). On-line predicted pH of all points (2,160 in total) resulted in the largest overall field virtual lime requirement (1.404 t), compared with those obtained from the 16 validation points by off-line prediction (0.28 t), on-line prediction (0.14 t), and laboratory reference measurement (0.48 t). The conclusion is that vis-NIR spectroscopy can be successfully used for the prediction of soil pH and for deriving lime recommendations. The advantage of the on-line sensor over sampling with a limited number of samples is that more detailed information about pH can be obtained, which explains the higher but more precisely calculated lime recommendation rate.

  19. Potential of On-Line Visible and Near Infrared Spectroscopy for Measurement of pH for Deriving Variable Rate Lime Recommendations

    PubMed Central

    Tekin, Yücel; Kuang, Boyan; Mouazen, Abdul M.

    2013-01-01

    This paper aims at exploring the potential of visible and near infrared (vis-NIR) spectroscopy for on-line measurement of soil pH, with the intention to produce variable rate lime recommendation maps. An on-line vis-NIR soil sensor mounted on a frame was used in this study. Lime application maps based on pH predicted by vis-NIR techniques were compared with maps based on traditional lab-measured pH. The validation of the calibration model using off-line spectra provided excellent prediction accuracy of pH (R2 = 0.85, RMSEP = 0.18 and RPD = 2.52), as compared to the very good accuracy obtained with the on-line measured spectra (R2 = 0.81, RMSEP = 0.20 and RPD = 2.14). On-line predicted pH of all points (2,160 in total) resulted in the largest overall field virtual lime requirement (1.404 t), compared with those obtained from the 16 validation points by off-line prediction (0.28 t), on-line prediction (0.14 t), and laboratory reference measurement (0.48 t). The conclusion is that vis-NIR spectroscopy can be successfully used for the prediction of soil pH and for deriving lime recommendations. The advantage of the on-line sensor over sampling with a limited number of samples is that more detailed information about pH can be obtained, which explains the higher but more precisely calculated lime recommendation rate. PMID:23966186

  20. High Temperature Carbonized Grass as a High Performance Sodium Ion Battery Anode.

    PubMed

    Zhang, Fang; Yao, Yonggang; Wan, Jiayu; Henderson, Doug; Zhang, Xiaogang; Hu, Liangbing

    2017-01-11

    Hard carbon is currently considered the most promising anode candidate for room-temperature sodium ion batteries because of its relatively high capacity, low cost, and good scalability. In this work, switchgrass, as an example biomass, was carbonized at an ultrahigh temperature of 2050 °C, induced by Joule heating, to create hard carbon anodes for sodium ion batteries. The switchgrass-derived carbon materials intrinsically inherit the plant's three-dimensional porous hierarchical architecture, with an average interlayer spacing of 0.376 nm. This interlayer spacing, larger than that of graphite, enables significant Na ion storage. Compared to a sample carbonized at 1000 °C, the switchgrass-derived carbon produced at 2050 °C exhibited improved initial Coulombic efficiency. Additionally, excellent rate capability and superior cycling performance are demonstrated for the switchgrass-derived carbon, owing to the unique high-temperature treatment.

  1. Measuring Clinical Decision Support Influence on Evidence-Based Nursing Practice.

    PubMed

    Cortez, Susan; Dietrich, Mary S; Wells, Nancy

    2016-07-01

    Objectives: To measure the effect of clinical decision support (CDS) on oncology nurse evidence-based practice (EBP).
    Design: Longitudinal cluster-randomized design.
    Setting: Four distinctly separate oncology clinics associated with an academic medical center.
    Sample: The study sample comprised randomly selected data elements from the nursing documentation software: patient-reported symptoms and the associated nurse interventions. The total number of observations was 600, derived from baseline, posteducation, and postintervention samples of 200 each (100 in the intervention group and 100 in the control group for each sample).
    Methods: The cluster design was used to support randomization of the study intervention at the clinic level rather than the individual participant level, reducing possible diffusion of the study intervention. An elongated data collection cycle (11 weeks) controlled for temporary increases in nurse EBP related to the education or CDS intervention.
    Main research variables: The dependent variable was the nurse evidence-based documentation rate, calculated from the nurse-documented interventions. The independent variable was the CDS added to the nursing documentation software.
    Findings: The average EBP rate at baseline for the control and intervention groups was 27%. After education, the average EBP rate increased to 37% and then decreased to 26% in the postintervention sample. Mixed-model linear statistical analysis revealed no significant interaction of group by sample. The CDS intervention did not result in an increase in nurse EBP.
    Conclusions: EBP education increased nurse EBP documentation rates significantly, but only temporarily. Nurses may have used evidence in practice but may not have documented their interventions.
    Implications for nursing: More research is needed to understand the complex relationship between CDS, nursing practice, and nursing EBP intervention documentation. CDS may have a different effect on nurse EBP, physician EBP, and other medical professional EBP.

  2. Optimization and validation of CEDIA drugs of abuse immunoassay tests in serum on Hitachi 912.

    PubMed

    Kirschbaum, Katrin M; Musshoff, Frank; Schmithausen, Ricarda; Stockhausen, Sarah; Madea, Burkhard

    2011-10-10

    Due to the sensitive limits of detection of chromatographic methods and the low limit values applying to the screening of drugs under the terms of impairment in safe driving (§ 24a StVG, Street Traffic Law in Germany), preliminary immunoassay (IA) tests should be able to detect even low concentrations of legal and illegal drugs in serum in forensic cases. False-negatives should be avoided, and the rate of false-positive samples should be low for reasons of cost and time. An optimization of IA cutoff values and a validation of the assay are required for each laboratory. In a retrospective study, results for serum samples containing amphetamine, methylenedioxy derivatives, cannabinoids, benzodiazepines, cocaine (metabolites), methadone, and opiates obtained with CEDIA drugs of abuse reagents on a Hitachi 912 autoanalyzer were compared with quantitative results of chromatographic methods (gas or liquid chromatography coupled with mass spectrometry, GC/MS or LC/MS). First, sensitivity, specificity, positive and negative predictive values, and overall misclassification rates were evaluated by contingency tables and compared to ROC analyses and Youden indices. Second, ideal cutoffs were statistically calculated on the basis of sensitivity and specificity as the decisive statistical criteria, with a focus on high sensitivity (low rates of false-negatives), i.e., using the Youden index. Immunoassay (IA) and confirmatory results were available for 3014 blood samples. Sensitivity was 90% or more for nearly all analytes: amphetamines (IA cutoff 9.5 ng/ml), methylenedioxy derivatives (IA cutoff 5.5 ng/ml), cannabinoids (IA cutoff 14.5 ng/ml), and benzodiazepines (IA cutoff >0 ng/ml). The test for opiates showed a sensitivity of 86% for an IA cutoff value of >0 ng/ml. Values for specificity ranged between 33% (methadone, IA cutoff 10 ng/ml) and 90% (cocaine, IA cutoff 20 ng/ml). Cutoff values lower than those recommended by ROC analyses were chosen for most tests to decrease the rate of false-negatives.
    The analyses enabled the definition of cutoff values with good sensitivity. Small rates of false-positives can be accepted in forensic cases. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
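
    The Youden-index cutoff optimization described above can be sketched as follows; the synthetic responses and the limit value are illustrative placeholders, not the study's serum data:

```python
import numpy as np

def best_cutoff(conc_confirmed, ia_response, limit):
    """Choose the IA cutoff maximizing the Youden index J = sens + spec - 1,
    where truth is defined by the confirmatory (GC/MS or LC/MS) result
    relative to the legal limit value."""
    truth = conc_confirmed >= limit
    best_j, best_cut = -1.0, None
    for cut in np.unique(ia_response):
        called = ia_response >= cut
        sens = called[truth].mean()      # true-positive rate
        spec = (~called[~truth]).mean()  # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_j, best_cut

# Synthetic illustration (not the study's data): IA responses track the
# true concentrations with multiplicative and additive noise.
rng = np.random.default_rng(1)
conc = rng.uniform(0.0, 50.0, 400)
ia = conc * rng.normal(1.0, 0.15, 400) + rng.normal(0.0, 1.0, 400)
j, cut = best_cutoff(conc, ia, limit=10.0)
```

    In forensic practice one would then bias the chosen cutoff downward from this optimum, as the study does, to trade specificity for fewer false-negatives.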

  3. Continuous Cooling Transformation in Cast Duplex Stainless Steels CD3MN and CD3MWCuN

    NASA Astrophysics Data System (ADS)

    Kim, Yoon-Jun; Chumbley, L. Scott; Gleeson, Brian

    2008-04-01

    The kinetics of brittle phase transformation in cast duplex stainless steels CD3MN and CD3MWCuN was investigated under continuous cooling conditions. Cooling rates slower than 5 °C/min were obtained using a conventional tube furnace with a programmable controller. In order to obtain controlled high cooling rates, a furnace equipped to grow crystals by means of the Bridgman method was used. Samples were soaked at 1100 °C for 30 min and cooled at different rates by changing the furnace position at various velocities. The velocity of the furnace movement was correlated to a continuous-cooling temperature profile for the samples. Continuous-cooling-transformation (CCT) diagrams were constructed based on experimental observations through metallographic sample preparation and optical microscopy. These were compared to calculated diagrams derived from previously determined isothermal transformation diagrams. The theoretical calculations employed a modified Johnson-Mehl-Avrami (JMA) equation (or Avrami equation) under the assumption of the additivity rule. Rockwell hardness tests were performed to correlate hardness changes with the amount of brittle phases (determined by tint-etching to most likely be a combination of sigma + chi) after cooling.
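
    The additivity-rule calculation behind computed CCT curves can be sketched as follows. The Avrami exponent and the bell-shaped isothermal rate constant below are invented placeholders, not parameters fitted to CD3MN or CD3MWCuN:

```python
import numpy as np

N_AVRAMI = 2.0   # Avrami exponent (assumed constant with temperature)

def k_rate(T_celsius):
    """Hypothetical bell-shaped isothermal rate constant (1/s) peaking near
    the nose of a sigma-phase C-curve; NOT fitted to the paper's alloys."""
    return 0.02 * np.exp(-((T_celsius - 850.0) / 120.0) ** 2)

def transformed_fraction(cooling_rate_C_per_min, T_start=1100.0, T_end=500.0):
    """Fraction transformed on continuous cooling via the additivity rule:
    at each step, find the fictitious isothermal time t* that reproduces
    the current X at temperature T, advance it by dt, and update X with
    the JMA equation X = 1 - exp(-(k t)^n)."""
    dt = 1.0                              # time step, s
    rate = cooling_rate_C_per_min / 60.0  # deg C per s
    T, X = T_start, 0.0
    while T > T_end:
        kT = k_rate(T)
        if kT > 1e-12:
            X = min(X, 1.0 - 1e-12)       # keep the log argument finite
            t_star = (-np.log(1.0 - X)) ** (1.0 / N_AVRAMI) / kT
            X = 1.0 - np.exp(-(kT * (t_star + dt)) ** N_AVRAMI)
        T -= rate * dt
    return X

x_slow = transformed_fraction(2.0)      # slow cooling through the C-curve nose
x_fast = transformed_fraction(2000.0)   # fast cooling largely suppresses sigma
```

    Sweeping the cooling rate and recording where X crosses a detection threshold traces out a CCT start curve from the isothermal parameters.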

  4. Consequences of sludge composition on combustion performance derived from thermogravimetry analysis.

    PubMed

    Li, Meiyan; Xiao, Benyi; Wang, Xu; Liu, Junxin

    2015-01-01

    Wastewater treatment plants produce millions of tons of sewage sludge. Sewage sludge is recognized as a promising feedstock for power generation via combustion and can be used to help address the energy crisis. We aimed to investigate the quantitative effects of various sludge characteristics on overall sludge combustion performance. Different types of sewage sludge were obtained from numerous wastewater treatment plants in Beijing for thermogravimetric analysis. Thermogravimetric-differential thermogravimetric curves were used to compare the performance of the studied samples. Proximate analytical data, organic composition, elemental composition, and calorific value of the samples were determined. The relationship between combustion performance and sludge composition was also investigated. Results showed that combustion performance was significantly affected by the concentration of protein, which is the main component of the volatiles. Carbohydrates and lipids, unlike protein, were not correlated with combustion performance. Overall, combustion performance varied with sludge organic composition. The combustion rate of carbohydrates was higher than those of protein and lipid, and carbohydrate weight loss occurred mainly during the second stage (175-300°C). Given this combustion feature, carbohydrates have a substantial effect on the system combustion rate during the second stage. Additionally, the combustion performance of digested sewage sludge is poorer than that of the other samples. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Incorporating Human Interindividual Biotransformation ...

    EPA Pesticide Factsheets

    The protection of sensitive individuals within a population dictates that measures other than central tendencies be employed to estimate risk. The refinement of human health risk assessments for chemicals metabolized by the liver to reflect data on human variability can be accomplished through (1) the characterization of enzyme expression in large banks of human liver samples, (2) the employment of appropriate techniques for the quantification and extrapolation of metabolic rates derived in vitro, and (3) the judicious application of physiologically based pharmacokinetic (PBPK) modeling. While in vitro measurements of specific biochemical reactions from multiple human samples can yield qualitatively valuable data on human variance, such measures must be put into the perspective of the intact human to yield the most valuable predictions of metabolic differences among humans. For quantitative metabolism data to be the most valuable in risk assessment, they must be tied to human anatomy and physiology, and the impact of their variance evaluated under real exposure scenarios. For chemicals metabolized in the liver, the concentration of parent chemical in the liver represents the substrate concentration in the Michaelis-Menten description of metabolism. Metabolic constants derived in vitro may be extrapolated to the intact liver, when appropriate conditions are met. Metabolic capacity (Vmax; the maximal rate of the reaction) can be scaled directly to the concentration

  6. Small scale temporal distribution of radiocesium in undisturbed coniferous forest soil: Radiocesium depth distribution profiles.

    PubMed

    Teramage, Mengistu T; Onda, Yuichi; Kato, Hiroaki

    2016-04-01

    The depth distribution of pre-Fukushima and Fukushima-derived (137)Cs in undisturbed coniferous forest soil was investigated at four sampling dates from nine months to 18 months after the Fukushima nuclear power plant accident. The migration rate and short-term temporal variability among the sampling profiles were evaluated. Based on the time elapsed since the peak deposition of pre-Fukushima (137)Cs and the median depth of the peaks, its downward displacement rates ranged from 0.15 to 0.67 mm yr(-1), with a mean of 0.46 ± 0.25 mm yr(-1). On the other hand, in each examined profile a considerable amount of the Fukushima-derived (137)Cs was found in the organic layer (51%-92%). At this point, the effect of elapsed time on the downward distribution of Fukushima-derived (137)Cs is not yet apparent, as a large portion is still found in the layers where organic matter is maximal. This indicates that organic matter is the primary and preferential sorbent of radiocesium, which could be associated with physical blockage of the exchange sites by organic-rich dusts acting as a buffer against downward propagation of radiocesium, implying that radiocesium will remain in the root zone for a considerable time period. As a result, this soil section can be a potential source of radiation dose, largely due to its high radiocesium concentration coupled with its low density. Such information will be useful for establishing a dynamic, safety-focused decision support system to ease and assist management actions. Copyright © 2016 Elsevier Ltd. All rights reserved.
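
    The displacement-rate arithmetic above is simply the median peak depth divided by the time since peak deposition. In the sketch below, the 1963 global-fallout maximum is taken as the deposition date of the pre-Fukushima peak; the sampling year and peak depth are illustrative assumptions, not the study's measurements:

```python
# Assumptions: the pre-Fukushima 137Cs peak records the 1963 global-fallout
# maximum; the sampling year and median peak depth are illustrative only.
sampling_year = 2012
years_elapsed = sampling_year - 1963          # 49 yr
median_peak_depth_mm = 22.5                   # hypothetical median peak depth
rate_mm_per_yr = median_peak_depth_mm / years_elapsed
```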

  7. Dusty Lyman-alpha Emitters As Seen By Spitzer

    NASA Astrophysics Data System (ADS)

    Dolan, Kyle; Scarlata, C.; Colbert, J. W.; Teplitz, H. I.; Hayes, M.

    2013-01-01

    We have used the IRAC and MIPS Spitzer archive to derive the full mid-IR SED for the largest sample of local Lyman-alpha emitters, probing the internal activities of these sources as well as analyzing the role that dust properties play in the Lyman-alpha escape fraction. We utilized all available IRAC and MIPS data for a sample of about 100 local Lyman-alpha emitters at redshift 0.2 ≤ z ≤ 0.4, originally discovered by Deharveng et al. (2008) and Cowie et al. (2011), to quantify the level of star formation (SF) and AGN activity in these sources, probing into dust-enshrouded regions that block UV and optical photons from escaping. In order to derive the total bolometric IR luminosity from 8μm to 1000μm, we fit the IR data to the template SEDs derived by Chary and Elbaz (2001). Using this information, we quantified the total star formation rate (SFR) of these galaxies and how much SF is missed by optical and UV surveys. We also identified any AGN activity and produced new estimates for AGN contamination within the population of Lyman-alpha emitters. This work has been supported by NASA's Astrophysics Data Analysis Program, Award # NNX11AH84G.

  8. Climate-driven unsteady denudation and sediment flux in a high-relief unglaciated catchment-fan using 26Al and 10Be: Panamint Valley, California

    NASA Astrophysics Data System (ADS)

    Mason, Cody C.; Romans, Brian W.

    2018-06-01

    Environmental changes within erosional catchments of sediment routing systems are predicted to modulate sediment transfer dynamics. However, empirical and numerical models that predict such phenomena are difficult to test in natural systems over multi-millennial timescales. Tectonic boundary conditions and climate history in the Panamint Range, California, are relatively well-constrained by existing low-temperature thermochronology and regional multi-proxy paleoclimate studies, respectively. Catchment-fan systems present there minimize sediment storage and recycling, offering an excellent natural laboratory to test models of climate-sedimentary dynamics. We used stratigraphic characterization and cosmogenic radionuclides (CRNs; 26Al and 10Be) in the Pleasant Canyon complex (PCC), a linked catchment-fan system, to examine the effects of Pleistocene high-magnitude, high-frequency climate change on CRN-derived denudation rates and sediment flux in a high-relief, unglaciated catchment-fan system. Calculated 26Al/10Be burial ages from 13 samples collected in an ∼180 m thick outcropping stratigraphic succession range from ca. 1.55 ± 0.22 Ma in basal strata, to ca. 0.36 ± 0.18-0.52 ± 0.20 Ma within the uppermost part of the succession. The mean long-term CRN-derived paleodenudation rate, 36 ± 8 mm/kyr (1σ), is higher than the modern rate of 24 ± 0.6 mm/kyr from Pleasant Canyon, and paleodenudation rates during the middle Pleistocene display some high-frequency variability in the high end (up to 54 ± 10 mm/kyr). The highest CRN-derived denudation rates are associated with stratigraphic evidence for increased precipitation during glacial-pluvial events after the middle Pleistocene transition (post ca. 0.75 Ma), suggesting 100 kyr Milankovitch periodicity could drive the observed variability. We investigated the potential for non-equilibrium sedimentary processes, i.e. 
increased landslides or sediment storage/recycling, to influence apparent paleodenudation rates; end-member mixing models suggest that a mixture of >50% low-CRN-concentration sediment from landslides is required to produce the largest observed increase in paleodenudation rate. The overall pattern of CRN-derived burial ages, paleodenudation rates, and stratigraphic facies suggests Milankovitch timescale climate transitions drive variability in catchment denudation rates and sediment flux, or alternatively that climate transitions affect sedimentary process regimes that result in measurable variability of CRN concentrations in unglaciated catchment-fan systems.

  9. Sample distribution in peak mode isotachophoresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubin, Shimon; Schwartz, Ortal; Bercovici, Moran, E-mail: mberco@technion.ac.il

    We present an analytical study of peak mode isotachophoresis (ITP), and provide closed form solutions for sample distribution and electric field, as well as for leading-, trailing-, and counter-ion concentration profiles. Importantly, the solution we present is valid not only for the case of fully ionized species, but also for systems of weak electrolytes which better represent real buffer systems and for multivalent analytes such as proteins and DNA. The model reveals two major scales which govern the electric field and buffer distributions, and an additional length scale governing analyte distribution. Using well-controlled experiments, and numerical simulations, we verify and validate the model and highlight its key merits as well as its limitations. We demonstrate the use of the model for determining the peak concentration of focused sample based on known buffer and analyte properties, and show it differs significantly from commonly used approximations based on the interface width alone. We further apply our model for studying reactions between multiple species having different effective mobilities yet co-focused at a single ITP interface. We find a closed form expression for an effective-on rate which depends on reactants distributions, and derive the conditions for optimizing such reactions. Interestingly, the model reveals that maximum reaction rate is not necessarily obtained when the concentration profiles of the reacting species perfectly overlap. In addition to the exact solutions, we derive throughout several closed form engineering approximations which are based on elementary functions and are simple to implement, yet maintain the interplay between the important scales. Both the exact and approximate solutions provide insight into sample focusing and can be used to design and optimize ITP-based assays.

  10. CLAAS: the CM SAF cloud property dataset using SEVIRI

    NASA Astrophysics Data System (ADS)

    Stengel, M.; Kniffka, A.; Meirink, J. F.; Lockhoff, M.; Tan, J.; Hollmann, R.

    2013-10-01

    An 8 yr record of satellite-based cloud properties named CLAAS (CLoud property dAtAset using SEVIRI) is presented, which was derived within the EUMETSAT Satellite Application Facility on Climate Monitoring. The dataset is based on SEVIRI measurements of the Meteosat Second Generation satellites, of which the visible and near-infrared channels were intercalibrated with MODIS. Including the latest development components of the two applied state-of-the-art retrieval schemes ensures high accuracy in cloud detection, cloud vertical placement and microphysical cloud properties. These properties were further processed to provide daily to monthly averaged quantities, mean diurnal cycles and monthly histograms. In particular, the collected histogram information enhances the insight into spatio-temporal variability of clouds and their properties. Due to the underlying intercalibrated measurement record, the stability of the derived cloud properties is ensured, which is exemplarily demonstrated for three selected cloud variables for the entire SEVIRI disk and a European subregion. All data products and processing levels are introduced and validation results indicated. The sampling uncertainty of the averaged products in CLAAS is minimized due to the high temporal resolution of SEVIRI. This is emphasized by studying the impact of reduced temporal sampling rates taken at typical overpass times of polar-orbiting instruments. In particular, cloud optical thickness and cloud water path are very sensitive to the sampling rate, which in our study amounted to systematic deviations of over 10% if only sampled once a day. The CLAAS dataset facilitates many cloud-related applications at small spatial scales of a few kilometres and short temporal scales of a few hours. Beyond this, the spatiotemporal characteristics of clouds on diurnal to seasonal, but also on multi-annual scales, can be studied.

  11. CLAAS: the CM SAF cloud property data set using SEVIRI

    NASA Astrophysics Data System (ADS)

    Stengel, M. S.; Kniffka, A. K.; Meirink, J. F. M.; Lockhoff, M. L.; Tan, J. T.; Hollmann, R. H.

    2014-04-01

    An 8-year record of satellite-based cloud properties named CLAAS (CLoud property dAtAset using SEVIRI) is presented, which was derived within the EUMETSAT Satellite Application Facility on Climate Monitoring. The data set is based on SEVIRI measurements of the Meteosat Second Generation satellites, of which the visible and near-infrared channels were intercalibrated with MODIS. Applying two state-of-the-art retrieval schemes ensures high accuracy in cloud detection, cloud vertical placement and microphysical cloud properties. These properties were further processed to provide daily to monthly averaged quantities, mean diurnal cycles and monthly histograms. In particular, the per-month histogram information enhances the insight in spatio-temporal variability of clouds and their properties. Due to the underlying intercalibrated measurement record, the stability of the derived cloud properties is ensured, which is exemplarily demonstrated for three selected cloud variables for the entire SEVIRI disc and a European subregion. All data products and processing levels are introduced and validation results indicated. The sampling uncertainty of the averaged products in CLAAS is minimized due to the high temporal resolution of SEVIRI. This is emphasized by studying the impact of reduced temporal sampling rates taken at typical overpass times of polar-orbiting instruments. In particular, cloud optical thickness and cloud water path are very sensitive to the sampling rate, which in our study amounted to systematic deviations of over 10% if only sampled once a day. The CLAAS data set facilitates many cloud-related applications at small spatial scales of a few kilometres and short temporal scales of a few hours. Beyond this, the spatiotemporal characteristics of clouds on diurnal to seasonal, but also on multi-annual scales, can be studied.
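The sampling-rate sensitivity described in both CLAAS records can be illustrated with a toy calculation. The code below is a synthetic sketch, not CLAAS data: it assumes a sinusoidal diurnal cycle of cloud optical thickness (amplitude 4 around a mean of 10, all numbers invented) and compares the full 15-minute-resolution daily mean with a single sample taken at an assumed fixed "overpass" time.

```python
import math

# Synthetic illustration (not CLAAS data): cloud optical thickness with an
# assumed sinusoidal diurnal cycle, sampled every 15 minutes versus once
# per day at a fixed "overpass" time, as for a polar-orbiting instrument.
hours = [k / 4 for k in range(24 * 4)]                        # 15-min sampling
tau = [10 + 4 * math.sin(2 * math.pi * (h - 14) / 24) for h in hours]

full_mean = sum(tau) / len(tau)          # uniform sampling recovers the true mean
overpass_mean = tau[hours.index(13.5)]   # single 13:30 sample per day

bias_pct = 100 * (overpass_mean - full_mean) / full_mean
```

With these assumed numbers the single daily sample misstates the daily mean by roughly 5%; a less favourable overpass time or a stronger diurnal cycle pushes the deviation toward the >10% reported above.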

  12. An X-ray/SDSS sample. II. AGN-driven outflowing gas plasma properties

    NASA Astrophysics Data System (ADS)

    Perna, M.; Lanzuisi, G.; Brusa, M.; Cresci, G.; Mignoli, M.

    2017-10-01

    Aims: Galaxy-scale outflows are currently observed in many active galactic nuclei (AGNs); however, characterisation of them in terms of their (multi-) phase nature, amount of flowing material, and effects on their host galaxy is still unresolved. In particular, ionised gas mass outflow rate and related energetics are still affected by many sources of uncertainty. In this respect, outflowing gas plasma conditions, being largely unknown, play a crucial role. Methods: We have analysed stacked spectra and sub-samples of sources with high signal-to-noise temperature- and density-sensitive emission lines to derive the plasma properties of the outflowing ionised gas component. We did this by taking advantage of the spectroscopic analysis results we obtained while studying the X-ray/SDSS sample of 563 AGNs at z < 0.8 presented in our companion paper. For these sources, we also studied in detail various diagnostic diagrams to infer information about outflowing gas ionisation mechanisms. Results: We derive, for the first time, median values for electron temperature and density of outflowing gas from medium-size samples (∼30 targets) and stacked spectra of AGNs. Evidence of shock excitation is found for the outflowing gas. Conclusions: We measure electron temperatures of the order of 1.7 × 10⁴ K and densities of 1200 cm⁻³ for faint and moderately luminous AGNs (intrinsic X-ray luminosity 40.5 < log(LX) < 44 in the 2-10 keV band). We note that the electron density that is usually assumed (Ne = 100 cm⁻³) in ejected material might result in relevant overestimates of flow mass rates and energetics and, as a consequence, of the effects of AGN-driven outflows on the host galaxy.
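The density caveat in the conclusions can be quantified directly: ionised-gas masses derived from line luminosities scale inversely with the assumed electron density, so adopting Ne = 100 cm⁻³ when the measured density is 1200 cm⁻³ inflates the inferred mass (and hence mass outflow rate and energetics) by a factor of 12. A one-function sketch of that scaling:

```python
def mass_correction(ne_assumed, ne_measured):
    """Ionised-gas masses derived from line luminosities scale as 1/Ne,
    so the true mass equals the assumed-density mass multiplied by
    ne_assumed / ne_measured.  Used here only to quantify the
    overestimate noted in the abstract."""
    return ne_assumed / ne_measured

# Commonly assumed Ne = 100 cm^-3 versus the 1200 cm^-3 measured here:
overestimate_factor = 1.0 / mass_correction(100.0, 1200.0)
```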

  13. Variation in foliar respiration and wood CO2 efflux rates among species and canopy layers in a wet tropical forest.

    PubMed

    Asao, Shinichi; Bedoya-Arrieta, Ricardo; Ryan, Michael G

    2015-02-01

    As tropical forests respond to environmental change, autotrophic respiration may consume a greater proportion of carbon fixed in photosynthesis at the expense of growth, potentially turning the forests into a carbon source. Predicting such a response requires that we measure and place autotrophic respiration in a complete carbon budget, but extrapolating measurements of autotrophic respiration from chambers to ecosystem remains a challenge. High plant species diversity and complex canopy structure may cause respiration rates to vary and measurements that do not account for this complexity may introduce bias in extrapolation more detrimental than uncertainty. Using experimental plantations of four native tree species with two canopy layers, we examined whether species and canopy layers vary in foliar respiration and wood CO2 efflux and whether the variation relates to commonly used scalars of mass, nitrogen (N), photosynthetic capacity and wood size. Foliar respiration rate varied threefold between canopy layers, ∼0.74 μmol m(-2) s(-1) in the overstory and ∼0.25 μmol m(-2) s(-1) in the understory, but little among species. Leaf mass per area, N and photosynthetic capacity explained some of the variation, but height explained more. Chamber measurements of foliar respiration thus can be extrapolated to the canopy with rates and leaf area specific to each canopy layer or height class. If area-based rates are sampled across canopy layers, the area-based rate may be regressed against leaf mass per area to derive the slope (per mass rate) to extrapolate to the canopy using the total leaf mass. Wood CO2 efflux varied 1.0-1.6 μmol m(-2) s(-1) for overstory trees and 0.6-0.9 μmol m(-2) s(-1) for understory species. The variation in wood CO2 efflux rate was mostly related to wood size, and little to species, canopy layer or height. 
Mean wood CO2 efflux rate per surface area, derived by regressing CO2 efflux per mass against the ratio of surface area to mass, can be extrapolated to the stand using total wood surface area. The temperature response of foliar respiration was similar for three of the four species, and wood CO2 efflux was similar between wet and dry seasons. For these species and this forest, vertical sampling may yield more accurate estimates than would temporal sampling. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Understanding environmental DNA detection probabilities: A case study using a stream-dwelling char Salvelinus fontinalis

    USGS Publications Warehouse

    Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.

    2016-01-01

    Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that using the protocols in this study eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
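The detection probabilities reported above can be turned into a simple sampling-design calculation if one is willing to assume independent samples, a simplification the paper's models do not necessarily make. The sketch below uses the record's low-density figure of 0.18 as an assumed per-sample detection probability; the function names are illustrative, not the authors' code.

```python
import math

def detection_probability(p_single, n_samples):
    """Probability that at least one of n_samples independent eDNA samples
    detects the target, given per-sample detection probability p_single.
    Independence between samples is a simplifying assumption."""
    return 1.0 - (1.0 - p_single) ** n_samples

def samples_needed(p_single, target=0.95):
    """Smallest n with cumulative detection probability >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_single))

# Using the paper's low-density figure (0.18) as an assumed per-sample probability:
n95 = samples_needed(0.18, target=0.95)
```

Under these assumptions, on the order of sixteen samples would be needed for 95% cumulative detection at the lowest density, while at population-level densities (per-sample probability > 0.99) a single sample suffices.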

  15. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.; hide

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5°-resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.

  16. Radiation damage in single-particle cryo-electron microscopy: effects of dose and dose rate.

    PubMed

    Karuppasamy, Manikandan; Karimi Nejadasl, Fatemeh; Vulovic, Milos; Koster, Abraham J; Ravelli, Raimond B G

    2011-05-01

    Radiation damage is an important resolution limiting factor both in macromolecular X-ray crystallography and cryo-electron microscopy. Systematic studies in macromolecular X-ray crystallography greatly benefited from the use of dose, expressed as energy deposited per mass unit, which is derived from parameters including incident flux, beam energy, beam size, sample composition and sample size. Here, the use of dose is reintroduced for electron microscopy, accounting for the electron energy, incident flux and measured sample thickness and composition. Knowledge of the amount of energy deposited allowed us to compare doses with experimental limits in macromolecular X-ray crystallography, to obtain an upper estimate of radical concentrations that build up in the vitreous sample, and to translate heat-transfer simulations carried out for macromolecular X-ray crystallography to cryo-electron microscopy. Stroboscopic exposure series of 50-250 images were collected for different incident flux densities and integration times from Lumbricus terrestris extracellular hemoglobin. The images within each series were computationally aligned and analyzed with similarity metrics such as Fourier ring correlation, Fourier ring phase residual and figure of merit. Prior to gas bubble formation, the images become linearly brighter with dose, at a rate of approximately 0.1% per 10 MGy. The gradual decomposition of a vitrified hemoglobin sample could be visualized at a series of doses up to 5500 MGy, by which dose the sample was sublimed. Comparison of equal-dose series collected with different incident flux densities showed a dose-rate effect favoring lower flux densities. Heat simulations predict that sample heating will only become an issue for very large dose rates (50 e(-)Å(-2) s(-1) or higher) combined with poor thermal contact between the grid and cryo-holder. Secondary radiolytic effects are likely to play a role in dose-rate effects. 
Stroboscopic data collection combined with an improved understanding of the effects of dose and dose rate will aid single-particle cryo-electron microscopists to have better control of the outcome of their experiments.

  17. Radiation damage in single-particle cryo-electron microscopy: effects of dose and dose rate

    PubMed Central

    Karuppasamy, Manikandan; Karimi Nejadasl, Fatemeh; Vulovic, Milos; Koster, Abraham J.; Ravelli, Raimond B. G.

    2011-01-01

    Radiation damage is an important resolution limiting factor both in macromolecular X-ray crystallography and cryo-electron microscopy. Systematic studies in macromolecular X-ray crystallography greatly benefited from the use of dose, expressed as energy deposited per mass unit, which is derived from parameters including incident flux, beam energy, beam size, sample composition and sample size. Here, the use of dose is reintroduced for electron microscopy, accounting for the electron energy, incident flux and measured sample thickness and composition. Knowledge of the amount of energy deposited allowed us to compare doses with experimental limits in macromolecular X-ray crystallography, to obtain an upper estimate of radical concentrations that build up in the vitreous sample, and to translate heat-transfer simulations carried out for macromolecular X-ray crystallography to cryo-electron microscopy. Stroboscopic exposure series of 50–250 images were collected for different incident flux densities and integration times from Lumbricus terrestris extracellular hemoglobin. The images within each series were computationally aligned and analyzed with similarity metrics such as Fourier ring correlation, Fourier ring phase residual and figure of merit. Prior to gas bubble formation, the images become linearly brighter with dose, at a rate of approximately 0.1% per 10 MGy. The gradual decomposition of a vitrified hemoglobin sample could be visualized at a series of doses up to 5500 MGy, by which dose the sample was sublimed. Comparison of equal-dose series collected with different incident flux densities showed a dose-rate effect favoring lower flux densities. Heat simulations predict that sample heating will only become an issue for very large dose rates (50 e−Å−2 s−1 or higher) combined with poor thermal contact between the grid and cryo-holder. Secondary radiolytic effects are likely to play a role in dose-rate effects. 
Stroboscopic data collection combined with an improved understanding of the effects of dose and dose rate will aid single-particle cryo-electron microscopists to have better control of the outcome of their experiments. PMID:21525648

  18. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    NASA Astrophysics Data System (ADS)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for the scalar wave modeling is advantageous in terms of the storage saving for the system of linear equations and the flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly for both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BI-CGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for the convergence. Furthermore, we find that for the computation with unequal directional sampling interval, the anisotropic smoothing in the multigrid precondition may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver for the homogenous and heterogeneous models in 2D and 3D are presented where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that the unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in the computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of directional sampling interval in the discretization.
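The solver architecture described above, a Krylov method wrapped around a preconditioner, can be sketched with SciPy. The toy system below is a plain 2D Laplacian rather than the paper's average-derivative optimal frequency-domain operator, and an incomplete-LU factorization stands in for the multigrid preconditioner; only the BiCGSTAB-plus-preconditioner wiring is illustrated.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in for the discretized frequency-domain system: a 2D Laplacian
# on an n x n grid (NOT the average-derivative optimal scheme itself).
n = 50
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete-LU factorization used as the preconditioner M ~= A^-1,
# standing in here for the multigrid preconditioner of the paper.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.bicgstab(A, b, M=M)   # info == 0 signals convergence
```

Compared with an LU-based direct solve of the same system, only the factors of the (much sparser, approximately inverted) preconditioner are stored, which mirrors the memory argument made in the abstract.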

  19. The Application of the Preschool Child Behavior Checklist and the Caregiver–Teacher Report Form to Mainland Chinese Children: Syndrome Structure, Gender Differences, Country Effects, and Inter-Informant Agreement

    PubMed Central

    Cheng, Halina

    2010-01-01

    Preschool children have long been a neglected population in the study of psychopathology. The Achenbach System of Empirically Based Assessment (ASEBA), which includes the Child Behavior Checklist/1.5-5 (CBCL/1.5-5) and the Caregiver-Teacher Report Form (C-TRF), constitutes one of the few available measures to assess preschoolers with an empirically derived taxonomy of preschool psychopathology. However, the utility of the measures and their taxonomy of preschool psychopathology for the Chinese is largely unknown and has not been studied. The present study aimed at testing the cross-cultural factorial validity of the CBCL/1.5-5 and C-TRF, as well as the applicability of the taxonomy of preschool psychopathology they embody, to Mainland Chinese preschoolers. Country effects between our Chinese sample and the original U.S. sample, gender differences, and cross-informant agreement between teachers and parents were also examined. A Chinese version of the CBCL/1.5-5 and C-TRF was completed by parents and teachers respectively on 876 preschoolers in Mainland China. Confirmatory factor analysis (CFA) confirmed that the original U.S.-derived second-order, multi-factor model best fit the Chinese preschool data of the CBCL/1.5-5 and C-TRF. Rates of total behavior problems in Chinese preschoolers were largely similar to those in American preschoolers. Specifically, Chinese preschoolers scored higher on internalizing problems while American preschoolers scored higher on externalizing problems. Chinese preschool boys had significantly higher rates of externalizing problems than Chinese preschool girls. Cross-informant agreement between Chinese teachers and parents was relatively low compared to agreement in the original U.S. sample. Results support the generalizability of the taxonomic structure of preschool psychopathology derived in the U.S. to the Chinese, as well as the applicability of the Chinese version of the CBCL/1.5-5 and C-TRF. PMID:20821258

  20. Evaluation of Pump Pulsation in Respirable Size-Selective Sampling: Part III. Investigation of European Standard Methods

    PubMed Central

    Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A.; Kashon, Michael L.; Harper, Martin

    2015-01-01

    Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60–73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN) (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric flow rate pumps), back pressures (six levels for the medium- and seven for the high-flow rate pumps), resistors (two types), tubing lengths between a pump and resistor (60 and 90 cm), and different flow rates (2 and 2.5 l min−1 for the medium- and 4.4, 10, and 11.2 l min−1 for the high-flow rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among six medium-flow rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, the average PP (10.9%) was close to the criterion. One high-flow rate pump, the Legacy (PP = 8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP = 18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. 
For the selected test conditions, a linear regression model [PPEN = 0.014 + 0.375 × PPNIOSH (adjusted R2 = 0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods. The 25% PP criterion recommended by Lee et al. (2014a), average value derived from repetitive measurements, corresponds to 11% PPEN. The 10% pass/fail criterion in the EN Standards is not based on extensive laboratory evaluation and would unreasonably exclude at least one pump (i.e. AirChek XR5000 in this study) and, therefore, the more accurate criterion of average 11% from repetitive measurements should be substituted. This study suggests that users can measure PP using either a real-world sampling train or a resistor setup and obtain equivalent findings by applying the model herein derived. The findings of this study will be delivered to the consensus committees to be considered when those standards, including the EN 1232-1997, EN 12919-1999, and ISO 13137-2013, are revised. PMID:25053700
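The reported regression can be applied directly to translate pulsation criteria between the two measurement setups. The function below simply encodes PP_EN = 0.014 + 0.375 × PP_NIOSH from the abstract (with pulsations expressed as fractions), and reproduces the stated mapping of the 25% criterion to about 11% on the EN scale.

```python
def pp_en_from_pp_niosh(pp_niosh):
    """Convert pump pulsation measured with the real-world sampling train
    (Lee et al., 2014a) to the EN resistor-based scale, using the
    regression reported above: PP_EN = 0.014 + 0.375 * PP_NIOSH."""
    return 0.014 + 0.375 * pp_niosh

# The recommended 25% criterion on the NIOSH scale maps to ~11% on the EN scale:
pp_en = pp_en_from_pp_niosh(0.25)
```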

  1. Evaluating steady-state soil thickness by coupling uranium series and 10Be cosmogenic radionuclides

    NASA Astrophysics Data System (ADS)

    Vanacker, Veerle; Schoonejans, Jerome; Opfergelt, Sophie; Granet, Matthieu; Christl, Marcus; Chabaux, Francois

    2017-04-01

    Within the Critical Zone, the development of the regolith mantle is controlled by the downwards propagation of the weathering front into the bedrock and denudation at the surface of the regolith by mass movements, water and wind erosion. When the removal of surface material is approximately balanced by the soil production, the soil system is assumed to be in steady-state. The steady-state soil thickness (the so-called SSST) can be considered as a dynamic equilibrium of the system, where the thickness of the soil mantle stays relatively constant over time. In this study, we present and compare analytical data from two independent isotopic techniques: in-situ produced cosmogenic nuclides and U-series disequilibria to constrain soil development under semi-arid climatic conditions. The Spanish Betic Cordillera (Southeast Spain) was selected for this study, as it offers a unique opportunity to analyze soil thickness steady-state conditions for thin soils of semiarid environments. Three soil profiles were sampled across the Betic Ranges, at the ridge crest of zero-order catchments with distinct topographic relief, hillslope gradient and 10Be-derived denudation rate. The magnitude of soil production rates determined based on U-series isotopes (238U, 234U, 230Th and 226Ra) is in the same order of magnitude as the 10Be-derived denudation rates, suggesting steady-state soil thickness at two of the three sampling sites. The results suggest that coupling U-series isotopes with in-situ produced radionuclides can provide new insights into the rates of soil development, and also illustrate the potential frontiers in applying U-series disequilibria to track soil production in rapidly eroding landscapes characterized by thin weathering depths.

  2. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
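The contrast between the two sampling schemes in this record can be illustrated with a small simulation. This is a minimal sketch with hypothetical parameter values, not the paper's test procedure: before the first success the parameter is estimated by geometric (negative binomial with r = 1) sampling, whose MLE 1/N is biased upward in small samples, while the binomial MLE s/m from trials after the first success is unbiased.

```python
import random

def simulate_sequence(p, m_after, rng):
    # Negative binomial (geometric) phase: count trials until the first success
    n_trials = 1
    while rng.random() >= p:
        n_trials += 1
    # Binomial phase: fixed number of further trials after the first success
    successes = sum(rng.random() < p for _ in range(m_after))
    return n_trials, successes

rng = random.Random(42)
p_true, m_after, reps = 0.3, 50, 20000
est_neg, est_bin = [], []
for _ in range(reps):
    n, s = simulate_sequence(p_true, m_after, rng)
    est_neg.append(1.0 / n)       # geometric MLE of p (biased upward)
    est_bin.append(s / m_after)   # binomial MLE of p (unbiased)

mean_neg = sum(est_neg) / reps
mean_bin = sum(est_bin) / reps
print(round(mean_neg, 3), round(mean_bin, 3))
```

The geometric-phase estimator is noticeably biased upward (its expectation is -p ln p / (1 - p)), which is one reason inference combining the two phases needs dedicated tests rather than a naive two-proportion comparison.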

  3. A Theoretical Basis for Entropy-Scaling Effects in Human Mobility Patterns

    PubMed Central

    2016-01-01

    Characterizing how people move through space has been an important component of many disciplines. With the advent of automated data collection through GPS and other location sensing systems, researchers have the opportunity to examine human mobility at spatio-temporal resolution heretofore impossible. However, the copious and complex data collected through these logging systems can be difficult for humans to fully exploit, leading many researchers to propose novel metrics for encapsulating movement patterns in succinct and useful ways. A particularly salient proposed metric is the mobility entropy rate of the string representing the sequence of locations visited by an individual. However, mobility entropy rate is not scale invariant: entropy rate calculations based on measurements of the same trajectory at varying spatial or temporal granularity do not yield the same value, limiting the utility of mobility entropy rate as a metric by confounding inter-experimental comparisons. In this paper, we derive a scaling relationship for mobility entropy rate of non-repeating straight line paths from the definition of Lempel-Ziv compression. We show that the resulting formulation predicts the scaling behavior of simulated mobility traces, and provides an upper bound on mobility entropy rate under certain assumptions. We further show that this formulation has a maximum value for a particular sampling rate, implying that optimal sampling rates for particular movement patterns exist. PMID:27571423
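The scale dependence described in this record can be reproduced with a toy estimator. The following sketch (not the paper's exact formulation) applies a Kontoyiannis-style Lempel-Ziv entropy rate estimate to a non-repeating straight-line path sampled at two temporal granularities; the two estimates differ, illustrating that mobility entropy rate is not scale invariant.

```python
import math

def lz_entropy_rate(seq):
    """Kontoyiannis-type Lempel-Ziv entropy rate estimate (bits/symbol).
    Lambda_i is the length of the shortest substring starting at i that
    has not appeared starting at any earlier position (naive O(n^2) scan)."""
    n = len(seq)
    lambdas = []
    for i in range(n):
        L = 1
        while i + L <= n and _seen_before(seq, seq[i:i + L], i):
            L += 1
        lambdas.append(L)
    return n * math.log2(n) / sum(lambdas)

def _seen_before(seq, sub, end):
    # does sub occur starting at any position j < end?
    k = len(sub)
    return any(seq[j:j + k] == sub for j in range(end))

# Non-repeating straight-line path sampled at two granularities:
coarse = list(range(50))                        # 50 distinct cells, visited once
fine = [c for c in range(50) for _ in (0, 1)]   # same path at twice the rate

print(lz_entropy_rate(coarse), lz_entropy_rate(fine))
```

For the coarse trace every symbol is novel, so the estimate reduces to log2(n); doubling the sampling rate on the same physical path yields a different (lower) value, exactly the confound the paper's scaling relationship addresses.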

  4. High sensitive analysis of steroids in doping control using gas chromatography/time-of-flight mass-spectrometry.

    PubMed

    Revelsky, A I; Samokhin, A S; Virus, E D; Rodchenkov, G M; Revelsky, I A

    2011-04-01

    A highly sensitive gas chromatography/time-of-flight mass spectrometry (GC/TOF-MS) method for steroid analysis was developed, using a low-resolution TOF-MS instrument with a fast spectral acquisition rate. The method is based on the formation of silyl derivatives of the steroids; exchange of the reagent mixture (pyridine and N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA)) for tert-butylmethylether; offline large-sample-volume injection of this solution, based on sorption concentration of the derivatives from the vapour-gas mixture formed from the solution and inert gas flows; and solvent-free transfer of the entire analyte concentrate into the injector of the gas chromatograph. Detection limits for a 100 µl sample solution volume were 0.5-2 pg/µl, depending on the component. The TOF-MS model 'TruTOF' (Leco, St Joseph, MO, USA) coupled with a gas chromatograph and ChromaTOF software (Leco, St Joseph, MO, USA) allowed extraction of full mass spectra and resolution of coeluted peaks. Using the proposed method (10 µl sample aliquot) and GC/TOF-MS, twice as many steroid-like compounds were registered in the urine extract as with injection of 1 µl of the same sample solution. Copyright © 2010 John Wiley & Sons, Ltd.

  5. Microbiota of Cow’s Milk; Distinguishing Healthy, Sub-Clinically and Clinically Diseased Quarters

    PubMed Central

    Oikonomou, Georgios; Bicalho, Marcela Lucas; Meira, Enoch; Rossi, Rodolfo Elke; Foditsch, Carla; Machado, Vinicius Silva; Teixeira, Andre Gustavo Vieira; Santisteban, Carlos; Schukken, Ynte Hein; Bicalho, Rodrigo Carvalho

    2014-01-01

    The objective of this study was to use pyrosequencing of the 16S rRNA genes to describe the microbial diversity of bovine milk samples derived from clinically unaffected quarters across a range of somatic cell counts (SCC) values or from clinical mastitis, culture negative quarters. The obtained microbiota profiles were used to distinguish healthy, subclinically and clinically affected quarters. Two dairy farms were used for the collection of milk samples. A total of 177 samples were used. Fifty samples derived from healthy, culture negative quarters with a SCC of less than 20,000 cells/ml (group 1); 34 samples derived from healthy, culture negative quarters, with a SCC ranging from 21,000 to 50,000 cells/ml (group 2); 26 samples derived from healthy, culture negative quarters with a SCC greater than 50,000 cells/ml (group 3); 34 samples derived from healthy, culture positive quarters, with a SCC greater than 400,000 (group 4, subclinical); and 33 samples derived from clinical mastitis, culture negative quarters (group 5, clinical). Bacterial DNA was isolated from these samples and the 16S rRNA genes were individually amplified and pyrosequenced. All samples analyzed revealed great microbial diversity. Four bacterial genera were present in every sample obtained from healthy quarters (Faecalibacterium spp., unclassified Lachnospiraceae, Propionibacterium spp. and Aeribacillus spp.). Discriminant analysis models showed that samples derived from healthy quarters were easily discriminated based on their microbiota profiles from samples derived from clinical mastitis, culture negative quarters; that was also the case for samples obtained from different farms. Staphylococcus spp. and Streptococcus spp. were among the most prevalent genera in all groups while a general multivariable linear model revealed that Sphingobacterium and Streptococcus prevalences were associated with increased 10 log SCC. 
Conversely, Nocardioides and Paenibacillus prevalences were negatively correlated: a higher percentage of these genera was associated with a lower 10 log SCC. PMID:24465777

  6. Comparison of some aspects of the in situ and in vitro methods in evaluation of neutral detergent fiber digestion.

    PubMed

    Krizsan, S J; Jančík, F; Ramin, M; Huhtanen, P

    2013-02-01

    The objective of the present study was to compare digestion rates (kd) of NDF for different feeds estimated with the in situ method or derived from an automated gas in vitro system. A meta-analysis was conducted to evaluate how in situ derived kd of NDF related to in vivo digestibility of NDF. Furthermore, in vitro true digestibility of the feed samples incubated within filter bags or dispersed in the medium was compared, and kd for insoluble and soluble components of those feeds were estimated. Four different concentrates and 4 forages were used in this study. Two lactating Swedish Red cows fed a diet of 60% grass silage and 40% concentrate on a DM basis were used for in situ incubations and for collection of rumen fluid. The feed samples were ground through a 2.0-mm screen before the in situ incubations and a 1.0-mm screen before the in vitro gas incubations. In situ nylon bags were introduced into the rumen for determination of kd of NDF. Additional kinetic data were produced from isolated NDF and intact samples subjected to in vitro incubations in which gas production was recorded for 72 h. Samples were weighed into the bottles or into filter bags (for fiber and in vitro studies) that were placed in the bottles. The interaction between feed and method was significant (P < 0.01); kd of NDF for grass hay tended (P = 0.06) to be lower, whereas kd of NDF for alfalfa, barley grain, canola meal, and dried sugar beet pulp were greater (P < 0.01), when estimated with the in situ method than from gas production recordings. The meta-analysis suggested that in situ derived kd of NDF were biased and underestimated in vivo digestibility of NDF. Digestion rates of the intact samples were lower for all feeds, except the hay, when incubated within the bags than when dispersed in the medium (P < 0.01). Less OM and NDF were digested for all feeds when incubated within bags than when dispersed in the medium (P < 0.01). It is concluded from the in vitro study that microbial activity within the bags is lower than in the medium. The significant interactions between method (in situ vs. in vitro) and feed suggest that one or both methods yield biased estimates of digestion kinetics.

  7. Liver fibrosis alleviation after co-transplantation of hematopoietic stem cells with mesenchymal stem cells in patients with thalassemia major.

    PubMed

    Ghavamzadeh, Ardeshir; Sotoudeh, Masoud; Hashemi Taheri, Amir Pejman; Alimoghaddam, Kamran; Pashaiefar, Hossein; Jalili, Mahdi; Shahi, Farhad; Jahani, Mohammad; Yaghmaie, Marjan

    2018-02-01

    The aims of this study were to determine the replacement rate of damaged hepatocytes by donor-derived cells in sex-mismatched recipient patients with thalassemia major and to determine whether co-transplantation of mesenchymal stem cells and hematopoietic stem cells (HSCs) can alleviate liver fibrosis. Ten sex-mismatched donor-recipient pairs who received co-transplantation of HSCs with mesenchymal stem cells were included in our study. Liver biopsy was performed before transplantation, and two further liver biopsies were performed between 2 and 5 years after transplantation. The specimens were studied for the presence of donor-derived epithelial cells or hepatocytes using fluorescence in situ hybridization with X- and Y-centromeric probes and immunohistochemical staining for pancytokeratin, CD45, and a hepatocyte-specific antigen. All sex-mismatched tissue samples demonstrated donor-derived hepatocytes, independent of donor sex. XY-positive epithelial cells or hepatocytes accounted for 11 to 25% of the cells in histologic sections of female recipients at the first follow-up, rising to 47-95% at the second follow-up. Although not statistically significant, four out of ten patients showed signs of improvement in liver fibrosis. Our results show that co-transplantation of HSCs with mesenchymal stem cells increases the rate of replacement of recipient hepatocytes by donor-derived cells and may improve liver fibrosis.

  8. Measuring Carbon-based Contaminant Mineralization Using Combined CO2 Flux and Radiocarbon Analyses.

    PubMed

    Boyd, Thomas J; Montgomery, Michael T; Cuenca, Richard H; Hagimoto, Yutaka

    2016-10-21

    A method is described which uses the absence of radiocarbon in industrial chemicals and fuels made from petroleum feedstocks which frequently contaminate the environment. This radiocarbon signal - or rather the absence of signal - is evenly distributed throughout a contaminant source pool (unlike an added tracer) and is not impacted by biological, chemical or physical processes (e.g., the 14C radioactive decay rate is immutable). If the fossil-derived contaminant is fully degraded to CO2, a harmless end-product, that CO2 will contain no radiocarbon. CO2 derived from natural organic matter (NOM) degradation will reflect the NOM radiocarbon content (usually <30,000 years old). Given a known radiocarbon content for NOM (a site background), a two end-member mixing model can be used to determine the CO2 derived from a fossil source in a given soil gas or groundwater sample. Coupling the percent CO2 derived from the contaminant with the CO2 respiration rate provides an estimate for the total amount of contaminant degraded per unit time. Finally, determining a zone of influence (ZOI) representing the volume from which site CO2 is collected allows determining the contaminant degradation per unit time and volume. Along with estimates for total contaminant mass, this can ultimately be used to calculate time-to-remediate or otherwise used by site managers for decision-making.
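Because the fossil end-member contains no radiocarbon, the two end-member mixing model described in this record reduces to simple arithmetic on fraction-modern values. A minimal sketch with hypothetical numbers (background fraction modern, sample fraction modern, and CO2 flux are all invented for illustration):

```python
def fossil_fraction(f_modern_sample, f_modern_background):
    """Two end-member mixing: the fossil end-member has fraction modern = 0,
    so the fossil share is the sample's depletion relative to background."""
    return (f_modern_background - f_modern_sample) / f_modern_background

def contaminant_mineralization_rate(co2_flux, f_sample, f_background):
    # contaminant-derived CO2 per unit time = total respiration * fossil fraction
    return co2_flux * fossil_fraction(f_sample, f_background)

# Hypothetical site: NOM background F = 0.90, soil-gas sample F = 0.60,
# CO2 respiration rate of 12 g C per m^2 per day over the zone of influence
rate = contaminant_mineralization_rate(12.0, 0.60, 0.90)
print(round(rate, 3))  # one third of the flux is contaminant-derived
```

Dividing this rate by the ZOI volume then gives degradation per unit time and volume, as the record describes.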

  9. Measuring Carbon-based Contaminant Mineralization Using Combined CO2 Flux and Radiocarbon Analyses

    PubMed Central

    Boyd, Thomas J.; Montgomery, Michael T.; Cuenca, Richard H.; Hagimoto, Yutaka

    2016-01-01

    A method is described which uses the absence of radiocarbon in industrial chemicals and fuels made from petroleum feedstocks which frequently contaminate the environment. This radiocarbon signal — or rather the absence of signal — is evenly distributed throughout a contaminant source pool (unlike an added tracer) and is not impacted by biological, chemical or physical processes (e.g., the 14C radioactive decay rate is immutable). If the fossil-derived contaminant is fully degraded to CO2, a harmless end-product, that CO2 will contain no radiocarbon. CO2 derived from natural organic matter (NOM) degradation will reflect the NOM radiocarbon content (usually <30,000 years old). Given a known radiocarbon content for NOM (a site background), a two end-member mixing model can be used to determine the CO2 derived from a fossil source in a given soil gas or groundwater sample. Coupling the percent CO2 derived from the contaminant with the CO2 respiration rate provides an estimate for the total amount of contaminant degraded per unit time. Finally, determining a zone of influence (ZOI) representing the volume from which site CO2 is collected allows determining the contaminant degradation per unit time and volume. Along with estimates for total contaminant mass, this can ultimately be used to calculate time-to-remediate or otherwise used by site managers for decision-making. PMID:27805601

  10. Black and White women's perspectives on femininity.

    PubMed

    Cole, Elizabeth R; Zucker, Alyssa N

    2007-01-01

    The authors explore how Black and White women view three aspects of normative femininity, and whether self-rated femininity is related to feminism. Through telephone surveys, a nationally representative sample of women (N=1130) rated themselves on feminism and items derived from Collins' (2004) benchmarks of femininity: feminine appearance, traits, and traditional gender role ideology. Confirmatory factor analysis revealed both groups conceptualized femininity as including the same dimensions, although Black women rated themselves higher on items related to feminine appearance. Among White women, traditional gender ideology was negatively related to feminism; among Black women, wearing feminine clothes was positively related to feminism. Results are discussed in terms of possibilities for resistance to the hegemonic concept of femininity that both groups share. (c) 2007 APA, all rights reserved.

  11. Domain Adaptation for Pedestrian Detection Based on Prediction Consistency

    PubMed Central

    Huan-ling, Tang; Zhi-yong, An

    2014-01-01

    Pedestrian detection is an active area of research in computer vision. It remains a challenging problem in many applications where various factors cause a mismatch between the source dataset used to train the pedestrian detector and samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source-domain samples with scarce target-domain samples to create a scene-specific pedestrian detector that performs as well as if rich target-domain samples were present. Our approach combines a boosting-based learning algorithm with an entropy-based transferability measure, derived from prediction consistency with the source classifiers, to selectively choose the source-domain samples showing positive transferability to the target domain. Experimental results show that our approach can improve the detection rate, especially when labeled data in the target scene are insufficient. PMID:25013850

  12. Demonstration of the use of ADAPT to derive predictive maintenance algorithms for the KSC central heat plant

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.

    1972-01-01

    The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of the algorithms yielded a detection probability of 90% with false-alarm rates on the order of one per year, for a sampling rate of one per day with each detection followed by three hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which offer the unique possibility of a relatively low-cost updating procedure. The algorithms were implemented on general-purpose computers at the Kennedy Space Center and tested against current data.

  13. Identification and classification of chemicals using terahertz reflective spectroscopic focal-plane imaging system.

    PubMed

    Zhong, Hua; Redo-Sanchez, Albert; Zhang, X-C

    2006-10-02

    We present terahertz (THz) reflective spectroscopic focal-plane imaging of four explosive and bio-chemical materials (2,4-DNT, Theophylline, RDX and Glutamic Acid) at a standoff imaging distance of 0.4 m. The two-dimensional (2-D) nature of this technique enables a fast acquisition time and nearly camera-like operation, compared with the most commonly used point emission-detection, raster-scanning configuration. The samples are identified by their absorption peaks, extracted from the negative derivative of the reflection coefficient with respect to frequency (-dr/dv) at each pixel. Classification of the samples is achieved using minimum-distance classifier and neural network methods, with an accuracy above 80% and a false-alarm rate below 8%. This result supports the future application of THz time-domain spectroscopy (TDS) in standoff sensing, imaging, and identification.
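The minimum-distance classification step in this record can be sketched as a nearest-centroid rule. The feature vectors below are invented stand-ins for per-pixel -dr/dv absorption features; only the classifier logic reflects the record.

```python
import math

def centroid(vectors):
    # component-wise mean of a list of equal-length feature vectors
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify_min_distance(x, centroids):
    """Assign x to the class whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Hypothetical -dr/dv features sampled at a few THz frequencies per material
train = {
    "RDX":          [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "Theophylline": [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]],
}
cents = {label: centroid(vecs) for label, vecs in train.items()}
print(classify_min_distance([0.85, 0.15, 0.05], cents))
```

A neural-network classifier, as also used in the paper, would replace the distance rule while consuming the same per-pixel spectral features.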

  14. Characterizing acoustic shocks in high-performance jet aircraft flyover noise.

    PubMed

    Reichman, Brent O; Gee, Kent L; Neilsen, Tracianne B; Downing, J Micah; James, Michael M; Wall, Alan T; McInerny, Sally Anne

    2018-03-01

    Acoustic shocks have been previously documented in high-amplitude jet noise, including both the near and far fields of military jet aircraft. However, previous investigations into the nature and formation of shocks have historically concentrated on stationary, ground run-up measurements, and previous attempts to connect full-scale ground run-up and flyover measurements have omitted the effect of nonlinear propagation. This paper shows evidence for nonlinear propagation and the presence of acoustic shocks in acoustical measurements of F-35 flyover operations. Pressure waveforms, derivatives, and statistics indicate nonlinear propagation, and the resulting shock formation is significant at high engine powers. Variations due to microphone size, microphone height, and sampling rate are considered, and recommendations for future measurements are made. Metrics indicating nonlinear propagation are shown to be influenced by changes in sampling rate and microphone size, and exhibit less variation due to microphone height.

  15. Real-Time Associations Between Engaging in Leisure and Daily Health and Well-Being.

    PubMed

    Zawadzki, Matthew J; Smyth, Joshua M; Costigan, Heather J

    2015-08-01

    Engagement in leisure has a wide range of beneficial health effects. Yet, this evidence is derived from between-person methods that do not examine the momentary within-person processes theorized to explain leisure's benefits. This study examined momentary relationships between leisure and health and well-being in daily life. A community sample (n = 115) completed ecological momentary assessments six times a day for three consecutive days. At each measurement, participants indicated if they were engaging in leisure and reported on their mood, interest/boredom, and stress levels. Next, participants collected a saliva sample for cortisol analyses. Heart rate was assessed throughout the study. Multilevel models revealed that participants had more positive and less negative mood, more interest, less stress, and lower heart rate when engaging in leisure than when not. Results suggest multiple mechanisms explaining leisure's effectiveness, which can inform leisure-based interventions to improve health and well-being.

  16. Multivariate statistical analysis of the polyphenolic constituents in kiwifruit juices to trace fruit varieties and geographical origins.

    PubMed

    Guo, Jing; Yuan, Yahong; Dou, Pei; Yue, Tianli

    2017-10-01

    Fifty-one kiwifruit juice samples of seven kiwifruit varieties from five regions in China were analyzed to determine their polyphenol contents and to trace fruit varieties and geographical origins by multivariate statistical analysis. Twenty-one polyphenols belonging to four compound classes were determined by ultra-high-performance liquid chromatography coupled with ultra-high-resolution TOF mass spectrometry. (-)-Epicatechin, (+)-catechin, procyanidin B1 and caffeic acid derivatives were the predominant phenolic compounds in the juices. Principal component analysis (PCA) allowed a clear separation of the juices according to kiwifruit variety. Stepwise linear discriminant analysis (SLDA) yielded satisfactory categorization of the samples, with a 100% success rate by kiwifruit variety and a 92.2% success rate by geographical origin. The results show that the polyphenolic profiles of kiwifruit juices contain enough information to trace fruit varieties and geographical origins. Copyright © 2017 Elsevier Ltd. All rights reserved.
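The PCA step used here to separate varieties can be sketched with a covariance-eigendecomposition implementation. All polyphenol values below are hypothetical; the sketch only shows how score plots separate groups whose mean concentration profiles differ.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project rows of X onto the top principal components
    (eigenvectors of the covariance matrix of the mean-centered data)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order]

rng = np.random.default_rng(0)
# Hypothetical concentrations (epicatechin, catechin, procyanidin B1)
# for two kiwifruit varieties with distinct mean profiles
variety_a = rng.normal([5.0, 2.0, 1.0], 0.3, size=(10, 3))
variety_b = rng.normal([2.0, 5.0, 3.0], 0.3, size=(10, 3))
scores = pca_project(np.vstack([variety_a, variety_b]))

# The two varieties separate along the first principal component
print(scores[:10, 0].mean(), scores[10:, 0].mean())
```

A discriminant analysis such as the paper's SLDA would then be trained on these (or the raw) features with variety labels.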

  17. Digital image classification approach for estimating forest clearing and regrowth rates and trends

    NASA Technical Reports Server (NTRS)

    Sader, Steven A.

    1987-01-01

    A technique is presented to monitor vegetation changes for a selected study area in Costa Rica. A normalized difference vegetation index was computed for three dates of Landsat satellite data, and a modified parallelepiped classifier was employed to generate a multitemporal greenness image representing all three dates. A second-generation image was created by partitioning the intensity levels at each date into high, medium, and low, thereby reducing the number of classes to 21. A sampling technique was applied to describe forest and other land-cover change occurring between time periods, based on interpretation of aerial photography that closely matched the dates of satellite acquisition. Comparison of the Landsat-derived classes with the photo-interpreted sample areas can provide a basis for evaluating the satellite monitoring technique and the accuracy of estimating forest clearing and regrowth rates and trends.

  18. EFFECT OF MASSIVE NEUTRON EXPOSURE ON THE DISTORTION OF REACTOR GRAPHITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helm, J.W.; Davidson, J.M.

    1963-05-28

    Distortion of reactor-grade graphites was studied at neutron exposures ranging up to 14 × 10^21 neutrons per cm^2 (nvt) at irradiation temperatures ranging from 425 to 800 °C. This exposure level corresponds to approximately 100,000 megawatt days per adjacent ton of fuel (Mwd/At) in a graphite-moderated reactor. A conventional-coke graphite, CSF, and two needle-coke graphites, NC-7 and NC-8, were studied. At all irradiation temperatures the contraction rate of the samples cut parallel to the extrusion axis increased with increasing neutron exposure. For parallel samples the needle-coke graphites and the CSF graphite contracted by approximately the same amount. In the transverse direction the rate of contraction at the higher irradiation temperatures appeared to be decreasing. Volume contractions derived from the linear contractions are discussed. (auth)

  19. Relationship of glomerular filtration rate based on serum iodixanol clearance to IRIS staging in cats with chronic kidney disease.

    PubMed

    Iwama, Ryosuke; Sato, Tsubasa; Katayama, Masaaki; Shimamura, Shunsuke; Satoh, Hiroshi; Ichijo, Toshihiro; Furuhama, Kazuhisa

    2015-08-01

    We examined the correlation between the glomerular filtration rate (GFR) estimated from an equation based on the serum iodixanol clearance technique and International Renal Interest Society (IRIS) stages of chronic kidney disease (CKD) in cats. The equation included the injection dose, sampling time, serum concentration and estimated volume of distribution (Vd) of the isotonic, nonionic, contrast medium iodixanol as a test tracer. The percent changes in the median basal GFR values calculated from the equation in CKD cats resembled those of IRIS stages 1-3. These data validate the association between the GFR derived from the simplified equation and IRIS stages based on the serum creatinine concentration in cats with CKD. They describe the GFR ranges determined using single-sample iodixanol clearance for healthy cats and cats with various IRIS stages of CKD.
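The abstract does not reproduce the study's actual equation, so the following is only a generic one-compartment sketch of single-sample clearance: if the tracer distributes in a volume Vd and is cleared at rate GFR, then C(t) = (dose/Vd)·exp(-(GFR/Vd)·t), which can be inverted for GFR. All numerical values are hypothetical, and the published method includes terms (such as the estimated Vd) not modeled here.

```python
import math

def gfr_single_sample(dose_mg, vd_ml, conc_mg_per_ml, t_min):
    """Invert the one-compartment decay C(t) = (dose/Vd) * exp(-(GFR/Vd)*t):
    GFR = (Vd / t) * ln(dose / (Vd * C(t)))."""
    return (vd_ml / t_min) * math.log(dose_mg / (vd_ml * conc_mg_per_ml))

# Hypothetical cat: 3000 mg iodixanol, Vd ~ 1000 ml, serum sampled at 180 min
gfr = gfr_single_sample(3000.0, 1000.0, 0.496, 180.0)
print(round(gfr, 1), "ml/min")
```

The appeal of such single-sample formulas is that one injection, one timed blood draw, and an estimated Vd suffice, which is what makes the IRIS-stage comparison in the record practical.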

  20. Dynamic strain and rotation ground motions of the 2011 Tohoku earthquake from dense high-rate GPS observations in Taiwan

    NASA Astrophysics Data System (ADS)

    Huang, B. S.; Rau, R. J.; Lin, C. J.; Kuo, L. C.

    2017-12-01

    Seismic waves generated by the 2011 Mw 9.0 Tohoku, Japan, earthquake were well recorded by continuous GPS in Taiwan. These stations operated at a one-hertz sampling rate and were densely distributed across Taiwan Island. The continuous GPS observations, combined with the precise point positioning technique, provide an opportunity to estimate spatial derivatives from the absolute ground motions of this giant teleseismic event. In this study, we process and investigate more than 150 high-rate GPS displacement records and their spatial derivatives (strain and rotation) and compare them with broadband seismic and rotational-sensor observations. The continuous GPS observations are highly consistent with the broadband seismic observations as the surface waves crossed Taiwan Island. Several standard geodetic and seismic-array analysis techniques for spatial gradients were applied to the continuous GPS time series to determine dynamic strain and rotation time histories. The results show that the GPS-derived vertical-axis ground rotations are consistent with rotations determined from seismic arrays. However, the vertical rotation-rate observations from the R1 rotational sensors have low resolution and could not be compared with the GPS observations for this event. Because of the dense spatial distribution of GPS stations on Taiwan Island, not only were wavefield gradient time histories obtained at individual sites, but 2-D spatial ground-motion fields were also determined. We report the analyzed spatial gradient wavefields of the 2011 Tohoku earthquake across Taiwan Island and discuss their geological implications.
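The spatial-gradient step this record describes can be sketched as a least-squares plane fit of displacements over station coordinates, from which the rotation about the vertical axis and the horizontal strains follow. The station geometry and displacement field below are synthetic, not data from the study.

```python
import numpy as np

def gradients_rotation_strain(xy, ux, uy):
    """Fit u = a + b*x + c*y by least squares over the station array to get
    the horizontal displacement-gradient tensor, then form the vertical-axis
    rotation and horizontal strain components."""
    G = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
    cx = np.linalg.lstsq(G, ux, rcond=None)[0]   # [a, du_x/dx, du_x/dy]
    cy = np.linalg.lstsq(G, uy, rcond=None)[0]   # [a, du_y/dx, du_y/dy]
    dux_dx, dux_dy = cx[1], cx[2]
    duy_dx, duy_dy = cy[1], cy[2]
    rotation_z = 0.5 * (duy_dx - dux_dy)         # rigid rotation about vertical
    strain = {"exx": dux_dx, "eyy": duy_dy,
              "exy": 0.5 * (dux_dy + duy_dx)}
    return rotation_z, strain

# Hypothetical 4-station array (km) with a synthetic linear displacement field
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true = {"exx": 1e-6, "eyy": -2e-6, "exy": 0.5e-6, "rot": 3e-6}
ux = true["exx"] * xy[:, 0] + (true["exy"] - true["rot"]) * xy[:, 1]
uy = (true["exy"] + true["rot"]) * xy[:, 0] + true["eyy"] * xy[:, 1]

rot, strain = gradients_rotation_strain(xy, ux, uy)
print(rot, strain["exx"])
```

Applying the same fit epoch by epoch to 1-Hz displacement time series yields the dynamic strain and rotation histories discussed in the record.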

  1. Motor speech signature of behavioral variant frontotemporal dementia: Refining the phenotype.

    PubMed

    Vogel, Adam P; Poole, Matthew L; Pemberton, Hugh; Caverlé, Marja W J; Boonstra, Frederique M C; Low, Essie; Darby, David; Brodtmann, Amy

    2017-08-22

    To provide a comprehensive description of motor speech function in behavioral variant frontotemporal dementia (bvFTD). Forty-eight individuals (24 bvFTD and 24 age- and sex-matched healthy controls) provided speech samples. These varied in complexity and thus cognitive demand. Their language was assessed using the Progressive Aphasia Language Scale and verbal fluency tasks. Speech was analyzed perceptually to describe the nature of deficits and acoustically to quantify differences between patients with bvFTD and healthy controls. Cortical thickness and subcortical volume derived from MRI scans were correlated with speech outcomes in patients with bvFTD. Speech of affected individuals was significantly different from that of healthy controls. The speech signature of patients with bvFTD is characterized by a reduced rate (75%) and accuracy (65%) on alternating syllable production tasks, and prosodic deficits including reduced speech rate (45%), prolonged intervals (54%), and use of short phrases (41%). Groups differed on acoustic measures derived from the reading, unprepared monologue, and diadochokinetic tasks but not the days of the week or sustained vowel tasks. Variability of silence length was associated with cortical thickness of the inferior frontal gyrus and insula and speech rate with the precentral gyrus. One in 8 patients presented with moderate speech timing deficits with a further two-thirds rated as mild or subclinical. Subtle but measurable deficits in prosody are common in bvFTD and should be considered during disease management. Language function correlated with speech timing measures derived from the unprepared monologue only. © 2017 American Academy of Neurology.

  2. Reconstruction of the modified discrete Langevin equation from persistent time series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czechowski, Zbigniew

    A discrete Langevin-type equation, capable of describing persistent processes, was introduced. A procedure for reconstructing the equation from time series was proposed and tested on synthetic data, with short- and long-tail distributions, generated by different Langevin equations. Corrections due to finite sampling rates were derived. For an exemplary meteorological time series, an appropriate Langevin equation, constituting a stochastic macroscopic model of the phenomenon, was reconstructed.
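The flavor of such a reconstruction (not the paper's exact algorithm) can be sketched via binned conditional moments of the increments: the drift term is estimated as the mean increment given the current state. The synthetic series below has a linear drift f(x) = -θx, which the procedure recovers.

```python
import random

def reconstruct_drift(series, n_bins=20):
    """Estimate the drift of a discrete Langevin equation
    X_{n+1} = X_n + f(X_n) + noise from the binned conditional means
    of the increments: f(x) ~ E[X_{n+1} - X_n | X_n = x]."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for x, x_next in zip(series, series[1:]):
        b = min(int((x - lo) / width), n_bins - 1)
        sums[b] += x_next - x
        counts[b] += 1
    centers = [lo + (b + 0.5) * width for b in range(n_bins)]
    drifts = [s / c if c else None for s, c in zip(sums, counts)]
    return centers, drifts

# Synthetic series with linear drift f(x) = -theta * x and Gaussian noise
rng = random.Random(1)
theta, sigma, x = 0.1, 0.2, 0.0
series = []
for _ in range(200000):
    series.append(x)
    x = x - theta * x + sigma * rng.gauss(0.0, 1.0)

centers, drifts = reconstruct_drift(series)
# Recover the drift slope from well-populated bins near the distribution center
mid = len(centers) // 2
slope = (drifts[mid + 2] - drifts[mid - 2]) / (centers[mid + 2] - centers[mid - 2])
print(round(slope, 2))  # should be close to -theta
```

An analogous binned second moment of the increments estimates the diffusion term; the paper's finite-sampling-rate corrections address the bias both estimates acquire when the series is sampled coarsely.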

  3. Cross-system nutrient transport: effects of locally-derived aeolian dust on oligotrophic lakes in West Greenland

    NASA Astrophysics Data System (ADS)

    Bullard, J. E.; Anderson, N. J.; McGowan, S.; Prater, C.; Watts, M.; Whitford, E.

    2017-12-01

    Terrestrially-derived nutrients can strongly affect production in aquatic environments. However, while some research has focused on nutrient delivery via hydrological inputs, the effects of atmospheric dry deposition are comparatively understudied. This paper examines the influence of aeolian-derived elements on water chemistry and microbial nutrient-limitation in oligotrophic lakes in West Greenland. Estimates of seasonal dust deposition and elemental leaching rates are combined with lake nutrient concentration measurements to establish the role of glacio-fluvial dust deposition in shaping nutrient stoichiometry of downwind lakes. The bioavailability of dust-associated elements is also explored using enzyme assays designed to indicate nutrient-limitation in microbial communities sampled across a dust deposition gradient. Together, these analyses demonstrate the importance of atmospheric dust inputs on hydrologically-isolated lakes found in arid high-latitude environments and demonstrate the need to better understand the role of aeolian deposition in cross-system nutrient transport.

  4. X-Shooter study of accretion in Chamaeleon I

    NASA Astrophysics Data System (ADS)

    Manara, C. F.; Fedele, D.; Herczeg, G. J.; Teixeira, P. S.

    2016-01-01

    We present the analysis of 34 new VLT/X-Shooter spectra of young stellar objects in the Chamaeleon I star-forming region, together with four more spectra of stars in Taurus and two in Chamaeleon II. The broad wavelength coverage and accurate flux calibration of our spectra allow us to estimate stellar and accretion parameters for our targets by fitting the photospheric and accretion continuum emission from the Balmer continuum down to ~700 nm. The dependence of accretion on stellar properties for this sample is consistent with previous results from the literature. The accretion rates for transitional disks are consistent with those of full disks in the same region. The spread of mass accretion rates at any given stellar mass is found to be smaller than in many studies, but is larger than that derived in the Lupus clouds using similar data and techniques. Differences in the stellar mass range and in the environmental conditions between our sample and that of Lupus may account for the discrepancy in scatter between Chamaeleon I and Lupus. Complete samples in Chamaeleon I and Lupus are needed to determine whether the difference in scatter of accretion rates and the lack of evolutionary trends are not influenced by sample selection. This work is based on observations made with ESO Telescopes at the Paranal Observatory under programme ID 084.C-1095 and 094.C-0913.

  5. The structured ancestral selection graph and the many-demes limit.

    PubMed

    Slade, Paul F; Wakeley, John

    2005-02-01

    We show that the unstructured ancestral selection graph applies to part of the history of a sample from a population structured by restricted migration among subpopulations, or demes. The result holds in the limit as the number of demes tends to infinity with proportionately weak selection, and we have also made the assumptions of island-type migration and that demes are equivalent in size. After an instantaneous sample-size adjustment, this structured ancestral selection graph converges to an unstructured ancestral selection graph with a mutation parameter that depends inversely on the migration rate. In contrast, the selection parameter for the population is independent of the migration rate and is identical to the selection parameter in an unstructured population. We show analytically that estimators of the migration rate, based on pairwise sequence differences, derived under the assumption of neutrality should perform equally well in the presence of weak selection. We also modify an algorithm for simulating genealogies conditional on the frequencies of two selected alleles in a sample. This permits efficient simulation of stronger selection than was previously possible. Using this new algorithm, we simulate gene genealogies under the many-demes ancestral selection graph and identify some situations in which migration has a strong effect on the time to the most recent common ancestor of the sample. We find that a similar effect also increases the sensitivity of the genealogy to selection.

  6. Poisson Statistics of Combinatorial Library Sampling Predict False Discovery Rates of Screening

    PubMed Central

    2017-01-01

    Microfluidic droplet-based screening of DNA-encoded one-bead-one-compound combinatorial libraries is a miniaturized, potentially widely distributable approach to small molecule discovery. In these screens, a microfluidic circuit distributes library beads into droplets of activity assay reagent, photochemically cleaves the compound from the bead, then incubates and sorts the droplets based on assay result for subsequent DNA sequencing-based hit compound structure elucidation. Pilot experimental studies revealed that Poisson statistics describe nearly all aspects of such screens, prompting the development of simulations to understand system behavior. Monte Carlo screening simulation data showed that increasing mean library sampling (ε), mean droplet occupancy, or library hit rate all increase the false discovery rate (FDR). Compounds identified as hits on k > 1 beads (the replicate k class) were much more likely to be authentic hits than singletons (k = 1), in agreement with previous findings. Here, we explain this observation by deriving an equation for authenticity, which reduces to the product of a library sampling bias term (exponential in k) and a sampling saturation term (exponential in ε) setting a threshold that the k-dependent bias must overcome. The equation thus quantitatively describes why each hit structure’s FDR is based on its k class, and further predicts the feasibility of intentionally populating droplets with multiple library beads, assaying the micromixtures for function, and identifying the active members by statistical deconvolution. PMID:28682059
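    The k-class intuition in this abstract can be reproduced with a toy Monte Carlo in the spirit of the simulations described. All parameters below (library size, mean sampling depth, hit rate, false-positive rate) are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed toy parameters, not values from the study
n_compounds = 20000   # library size
eps = 2.0             # mean library sampling: beads per compound (Poisson)
hit_rate = 0.01       # fraction of library members that are authentic hits
fp = 0.02             # per-droplet false-positive rate for inactive compounds

beads = rng.poisson(eps, n_compounds)          # beads carrying each compound
authentic = rng.random(n_compounds) < hit_rate
# Authentic hits score in every droplet; inactives score at the fp rate
positives = np.where(authentic, beads, rng.binomial(beads, fp))

fdr_by_k = {}
for k in (1, 2, 3):
    in_class = positives == k                  # "hit" structures seen on k beads
    if in_class.any():
        fdr_by_k[k] = 1.0 - authentic[in_class].mean()
        print(f"k={k}: n={in_class.sum()}, FDR={fdr_by_k[k]:.2f}")
```

    As the equation in the abstract predicts, the false discovery rate drops steeply with the replicate class k: singletons are dominated by false positives, while compounds recovered on two or three beads are overwhelmingly authentic.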

  7. The effect of interest rate derivative transactions on debt savings for not-for-profit health systems.

    PubMed

    Venkataramani, Prakash; Johnson, Tricia; O'Neil, Patricia; Poindexter, Victoria; Rooney, Jeffrey

    2006-01-01

    The utilization of interest rate derivative instruments by US for-profit companies has grown exponentially since the early 1980s. The International Swaps and Derivatives Association, Inc. (ISDA), reported that the amount of outstanding standard swaps grew by 25 percent during the first six months of 2003. All interest rate derivatives combined, including single-currency interest rate swaps, cross-currency interest rate swaps, and interest rate options, grew by 24 percent during the same period. The total outstanding amount of interest rate derivatives now totals $123.9 trillion, compared to $99.9 trillion at the end of 2002 (Dodd, 2003). This explosion in usage is a testament to the efficacy and flexibility of the instruments and the increased appreciation by financial managers of the importance of financial risk management in a volatile interest rate environment.

  8. The abundance of ammonia in Comet P/Halley derived from ultraviolet spectrophotometry of NH by ASTRON and IUE

    NASA Technical Reports Server (NTRS)

    Feldman, P. D.; Fournier, K. B.; Grinin, V. P.; Zvereva, A. M.

    1993-01-01

    From an analysis of the spatial profiles of both the NH and OH UV emissions observed by the ASTRON satellite, the ratio of ammonia-to-water production rates in Comet Halley on April 9, 1986 is derived and found to lie in the range of 0.44-0.94 percent. In order to compare this result with those based on both ground-based and in situ observations made on other dates during the 1985-1986 apparition of the comet, the IUE observational data base for December 1985 and March-April 1986 is used to evaluate the ratio of NH to OH column density in the IUE field of view and thus constrain the long-term behavior of this ratio. The IUE data base indicates that, to within a factor of 2, the ammonia-to-water production rate ratio is the same for a small sample of moderately bright comets observed recently.

  9. Do Stimulants Reduce the Risk for Alcohol and Substance Use in Youth With ADHD? A Secondary Analysis of a Prospective, 24-Month Open-Label Study of Osmotic-Release Methylphenidate.

    PubMed

    Hammerness, Paul; Petty, Carter; Faraone, Stephen V; Biederman, Joseph

    2017-01-01

    The purpose of this study was to examine the impact of stimulant treatment on risk for alcohol and illicit drug use in adolescents with ADHD. Data were analyzed from a prospective open-label treatment study of adolescent ADHD (n = 115, 76% male) and from a historical, naturalistic sample of ADHD (n = 44, 68% male) and non-ADHD youth (n = 52, 73% male) of similar age and sex. Treatment consisted of extended-release methylphenidate in the clinical trial or naturalistic stimulant treatment. Self-report of alcohol and drug use was derived from a modified version of the Drug Use Screening Inventory. Rates of alcohol and drug use in the past year were significantly lower in the clinical trial compared with untreated and treated naturalistic ADHD comparators, and similar to rates in non-ADHD comparators. Well-monitored stimulant treatment may reduce the risk for alcohol and substance use in adolescent ADHD.

  10. Next-Generation Sequencing Analysis of the Diversity of Human Noroviruses in Japanese Oysters.

    PubMed

    Imamura, Saiki; Kanezashi, Hiromi; Goshima, Tomoko; Haruna, Mika; Okada, Tsukasa; Inagaki, Nobuya; Uema, Masashi; Noda, Mamoru; Akimoto, Keiko

    2017-08-01

    To obtain detailed information on the diversity of infectious norovirus in oysters (Crassostrea gigas), oysters obtained from fish producers at six different sites (sites A, B, C, D, E, and F) in Japan were analyzed once a month during the period spanning October 2015-February 2016. To avoid false-positive polymerase chain reaction (PCR) results derived from noninfectious virus particles, samples were pretreated with RNase before reverse transcription-PCR (RT-PCR). RT-PCR products were subjected to next-generation sequencing to identify norovirus genotypes in oysters. All GI genotypes were detected during the investigation period. The detection rate and proportion of norovirus GI genotypes differed depending on the sampling site and month. GII.3, GII.4, GII.13, GII.16, and GII.17 were detected in this study. Both the detection rate and proportion of norovirus GII genotypes differed depending on the sampling site and month. In total, the detection rate and proportion of GII.3 were highest from October to December among all detected genotypes. In January, the detection rates of GII.4 and GII.17 reached the same level as that of GII.3. The proportion of GII.17 was relatively lower from October to December, whereas it was the highest in January. To our knowledge, this is the first investigation of noroviruses in oysters in Japan based on a method that can distinguish their infectivity.

  11. Sampling frequency for water quality variables in streams: Systems analysis to quantify minimum monitoring rates.

    PubMed

    Chappell, Nick A; Jones, Timothy D; Tych, Wlodek

    2017-10-15

    Insufficient temporal monitoring of water quality in streams or engineered drains alters the apparent shape of storm chemographs, resulting in shifted model parameterisations and changed interpretations of solute sources that have produced episodes of poor water quality. This so-called 'aliasing' phenomenon is poorly recognised in water research. Using advances in in-situ sensor technology it is now possible to monitor sufficiently frequently to avoid the onset of aliasing. A systems modelling procedure is presented allowing objective identification of sampling rates needed to avoid aliasing within strongly rainfall-driven chemical dynamics. In this study aliasing of storm chemograph shapes was quantified by changes in the time constant parameter (TC) of transfer functions. As a proportion of the original TC, the onset of aliasing varied between watersheds, ranging from 3.9-7.7 to 54-79 %TC (or 110-160 to 300-600 min). However, a minimum monitoring rate could be identified for all datasets if the modelling results were presented in the form of a new statistic, ΔTC. For the eight H+, DOC and NO3-N datasets examined from a range of watershed settings, an empirically-derived threshold of 1.3(ΔTC) could be used to quantify minimum monitoring rates within sampling protocols to avoid artefacts in subsequent data analysis. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
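    The aliasing phenomenon the authors quantify can be illustrated with a minimal toy signal, not their transfer-function procedure: a hypothetical 12 h solute oscillation sampled every 10 h (Nyquist would require < 6 h) acquires an entirely different apparent timescale:

```python
import numpy as np

# A 12 h solute oscillation sampled every 10 h; values are illustrative
true_period = 12.0          # hours
dt = 10.0                   # sampling interval in hours, below the Nyquist rate
t = np.arange(0.0, 2400.0, dt)
x = np.sin(2 * np.pi * t / true_period)

# Dominant apparent period recovered from the undersampled record
freqs = np.fft.rfftfreq(t.size, d=dt)
spec = np.abs(np.fft.rfft(x))
apparent_period = 1.0 / freqs[1:][np.argmax(spec[1:])]
print(apparent_period)      # ~60 h: the fast dynamic masquerades as a slow one
```

    The 12 h dynamic folds to an apparent 60 h period, analogous to a storm chemograph whose fitted time constant shifts when monitored too infrequently.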

  12. Comparison of heavy metal loads in stormwater runoff from major and minor urban roads using pollutant yield rating curves.

    PubMed

    Davis, Brett; Birch, Gavin

    2010-08-01

    Trace metal export by stormwater runoff from a major road and local street in urban Sydney, Australia, is compared using pollutant yield rating curves derived from intensive sampling data. The event loads of copper, lead and zinc are well approximated by logarithmic relationships with respect to total event discharge owing to the reliable appearance of a first flush in pollutant mass loading from urban roads. Comparisons of the yield rating curves for these three metals show that copper and zinc export rates from the local street are comparable with those of the major road, while lead export from the local street is much higher, despite a 45-fold difference in traffic volume. The yield rating curve approach allows problematic environmental data to be presented in a simple yet meaningful manner with less information loss. Copyright 2010 Elsevier Ltd. All rights reserved.
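    A yield rating curve of the logarithmic form described above can be fitted in a few lines; the event data here are invented for illustration, not taken from the study:

```python
import numpy as np

# Toy event data (assumed values): total event discharge Q and a metal load
Q = np.array([50.0, 120.0, 300.0, 800.0, 2000.0])     # m^3
load = np.array([12.0, 21.0, 30.0, 41.0, 52.0])       # g per event

# Logarithmic yield rating curve: load = a + b * ln(Q)
b, a = np.polyfit(np.log(Q), load, 1)
print(f"load = {a:.1f} + {b:.1f} ln(Q)")
```

    Once fitted, the curve lets event loads at unobserved discharges be read off directly, which is the compact presentation the abstract advocates.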

  13. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
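    The empirical optimization described, maximizing the correlation between the area-average rain rate and the fractional coverage above a threshold, can be sketched with synthetic lognormal snapshots. All distribution parameters below are assumptions, not GATE values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble of "snapshots": lognormal rain rates whose log-mean varies
# from snapshot to snapshot (sampling variation of a parent distribution)
n_snap, n_pix = 300, 500
mu = rng.normal(0.0, 0.3, n_snap)              # per-snapshot log-mean (assumed)
rates = np.exp(mu[:, None] + rng.normal(0.0, 1.0, (n_snap, n_pix)))

mean_rate = rates.mean(axis=1)                 # area-average rain rate
thresholds = np.linspace(0.1, 10.0, 100)
corr = [np.corrcoef(mean_rate, (rates > tau).mean(axis=1))[0, 1]
        for tau in thresholds]
best = thresholds[int(np.argmax(corr))]
print(best, max(corr))
```

    Scanning the threshold and keeping the value with the highest correlation mirrors the empirical procedure applied to the GATE radar snapshots.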

  14. Temporally and spatially uniform rates of erosion in the southern Appalachian Great Smoky Mountains

    USGS Publications Warehouse

    Matmon, A.; Bierman, P.R.; Larsen, J.; Southworth, S.; Pavich, M.; Caffee, M.

    2003-01-01

    We measured ¹⁰Be in fluvial sediment samples (n = 27) from eight Great Smoky Mountain drainages (1-330 km²). Results suggest spatially homogeneous sediment generation (on the 10⁴-10⁵ yr time scale and >100 km² spatial scale) at 73 ± 11 t km⁻² yr⁻¹, equivalent to 27 ± 4 m/m.y. of bedrock erosion. This rate is consistent with rates derived from fission-track, long-term sediment budget, and sediment yield data, all of which indicate that the Great Smoky Mountains and the southern Appalachians eroded during the Mesozoic and Cenozoic at ~30 m/m.y. In contrast, unroofing rates during the Paleozoic orogenic events that formed the Appalachian Mountains were higher (~10² m/m.y.). Erosion rates decreased after termination of tectonically driven uplift, enabling the survival of this ancient mountain belt with its deep crustal root as an isostatically maintained feature in the contemporary landscape.
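    The quoted equivalence between sediment flux and bedrock lowering follows from a unit conversion, assuming a typical bedrock density of about 2700 kg per cubic meter (an assumption, not a value stated in the record):

```python
# Convert a denudation flux in t km^-2 yr^-1 to bedrock lowering in m/m.y.
flux = 73.0            # t km^-2 yr^-1 (value quoted in the abstract)
rho = 2700.0           # kg m^-3, typical crustal rock density (assumption)

kg_per_m2_yr = flux * 1000.0 / 1.0e6       # t/km^2 -> kg/m^2
m_per_myr = kg_per_m2_yr / rho * 1.0e6     # thickness eroded per million years
print(round(m_per_myr))                    # ~27 m/m.y., matching the abstract
```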

  15. The mass function of black holes

    NASA Astrophysics Data System (ADS)

    Natarajan, Priyamvada; Volonteri, Marta

    2012-05-01

    In this paper, we compare the observationally derived black hole mass function (BHMF) of luminous (>10⁴⁵-10⁴⁶ erg s⁻¹) broad-line quasars (BLQSOs) at 1 < z < 4.5 drawn from the Sloan Digital Sky Survey (SDSS) presented by Kelly et al., with models of merger-driven black hole (BH) growth in the context of standard hierarchical structure formation models. In these models, we explore two distinct black hole seeding prescriptions at the highest redshifts: 'light seeds', remnants of Population III stars, and 'massive seeds' that form from the direct collapse of pre-galactic discs. The subsequent merger-triggered mass build-up of the black hole population is tracked over cosmic time under the assumption of a fixed accretion rate as well as rates drawn from the distribution derived by Merloni & Heinz. Four model snapshots at z = 1.25, 2, 3.25 and 4.25 are compared with the SDSS-derived BHMFs of BLQSOs. We find that the light seed models fall short of reproducing the observationally derived mass function of BLQSOs at MBH > 10⁹ M⊙ throughout the redshift range; the massive seed models with a fixed accretion rate of 0.3 Edd, or with accretion rates drawn from the Merloni & Heinz distribution, provide the best fit to the current observational data at z > 2, although they overestimate the high-mass end of the mass function at lower redshifts. At low redshifts, a drastic drop in the accretion rate is observed, which is explained as arising from the diminished gas supply due to consumption by star formation or changes in the geometry of the inner feeding regions. Therefore, the overestimate at the high-mass end of the black hole mass function for the massive seed models can be easily modified, as the accretion rate is likely significantly lower at these epochs than what we assume. For the Merloni & Heinz model, examining the Eddington ratio distributions fEdd, we find that they are almost uniformly sampled from fEdd = 10⁻² to 1 at z ≃ 1, while at high redshift current observations suggest accretion rates close to Eddington, if not mildly super-Eddington, at least for these extremely luminous quasars. Our key findings are that the duty cycle of super-massive black holes powering BLQSOs increases with increasing redshift for all models, and that models with Population III remnants as black hole seeds are unable to fit the observationally derived BHMFs for BLQSOs, lending strong support to the massive seeding model.

  16. Serum Brain-Derived Neurotrophic Factors in Taiwanese Patients with Drug-Naïve First-Episode Major Depressive Disorder: Effects of Antidepressants.

    PubMed

    Chiou, Yu-Jie; Huang, Tiao-Lai

    2017-03-01

    Brain-derived neurotrophic factors are known to be related to the psychopathology of major depressive disorder. However, studies focusing on drug-naïve first-episode patients are still rare. Over a 6-year period, we examined serum brain-derived neurotrophic factor levels in patients with first-episode drug-naïve major depressive disorder and compared them with sex-matched healthy controls. We also investigated the relationships between serum brain-derived neurotrophic factor levels, suicidal behavior, and Hamilton Depression Rating Scale scores before and after a 4-week antidepressant treatment. The baseline serum brain-derived neurotrophic factor levels of 71 patients were significantly lower than those of the controls (P=.017), and the Hamilton Depression Rating Scale scores in 71 patients did not correlate with brain-derived neurotrophic factor levels. Brain-derived neurotrophic factor levels were significantly lower in 13 suicidal major depressive disorder patients than in 58 nonsuicidal major depressive disorder patients (P=.038). Among 41 followed-up patients, there was no alteration in serum brain-derived neurotrophic factor levels after treatment with antidepressants (P=.126). In receiver operating characteristic curve analysis using pretreatment brain-derived neurotrophic factor levels to estimate the response to treatment, the area under the curve was 0.684. The most suitable cut-off point was 6.1 ng/mL (sensitivity = 78.6%, specificity = 53.8%). Our data indicate that serum brain-derived neurotrophic factor levels in patients with drug-naïve first-episode major depressive disorder were lower than those in the healthy controls, and that patients with pretreatment brain-derived neurotrophic factor levels >6.1 ng/mL were more likely to be responders. Although the relationship of our results to the mechanism of drug action and pathophysiology of depression remains unclear, the measure may have potential use as a predictor of response to treatment. 
Larger samples will be needed to confirm these results. © The Author 2016. Published by Oxford University Press on behalf of CINP.

  17. United States Forest Disturbance Trends Observed Using Landsat Time Series

    NASA Technical Reports Server (NTRS)

    Masek, Jeffrey G.; Goward, Samuel N.; Kennedy, Robert E.; Cohen, Warren B.; Moisen, Gretchen G.; Schleeweis, Karen; Huang, Chengquan

    2013-01-01

    Disturbance events strongly affect the composition, structure, and function of forest ecosystems; however, existing U.S. land management inventories were not designed to monitor disturbance. To begin addressing this gap, the North American Forest Dynamics (NAFD) project has examined a geographic sample of 50 Landsat satellite image time series to assess trends in forest disturbance across the conterminous United States for 1985-2005. The geographic sample design used a probability-based scheme to encompass major forest types and maximize geographic dispersion. For each sample location disturbance was identified in the Landsat series using the Vegetation Change Tracker (VCT) algorithm. The NAFD analysis indicates that, on average, 2.77 Mha of forest were disturbed annually, representing 1.09%/yr of US forestland. These satellite-based national disturbance rate estimates tend to be lower than those derived from land management inventories, reflecting both methodological and definitional differences. In particular the VCT approach used with a biennial time step has limited sensitivity to low-intensity disturbances. Unlike prior satellite studies, our biennial forest disturbance rates vary by nearly a factor of two between high and low years. High western US disturbance rates were associated with active fire years and insect activity, while variability in the east is more strongly related to harvest rates in managed forests. We note that generating a geographic sample based on representing forest type and variability may be problematic since the spatial pattern of disturbance does not necessarily correlate with forest type. We also find that the prevalence of diffuse, non-stand clearing disturbance in US forests makes the application of a biennial geographic sample problematic. Future satellite-based studies of disturbance at regional and national scales should focus on wall-to-wall analyses with an annual time step for improved accuracy.

  18. Sample Size Methods for Estimating HIV Incidence from Cross-Sectional Surveys

    PubMed Central

    Brookmeyer, Ron

    2015-01-01

    Summary Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this paper we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this paper at the Biometrics website on Wiley Online Library. PMID:26302040

  19. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    PubMed

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
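    A heavily simplified back-of-the-envelope version of such a sample size calculation, ignoring the duration uncertainty the authors explicitly model and using assumed design values throughout, might look like:

```python
import math

# Assumed design values for a hypothetical cross-sectional survey
incidence = 0.02    # new infections per person-year
prev = 0.15         # fraction of the population already infected
mu = 0.5            # mean years spent in the biomarker-defined early stage
target_cv = 0.25    # desired coefficient of variation of the estimate

# The early-stage count is roughly Poisson, so CV ~ 1/sqrt(expected count);
# an uninfected person is in the early stage with probability incidence * mu
needed_early = 1.0 / target_cv**2
n = math.ceil(needed_early / ((1.0 - prev) * incidence * mu))
print(n)
```

    This conveys the key scaling: precision is governed by the expected number of early-stage individuals, so low incidence or a short early-stage duration drives the required survey size up sharply. The paper's methods additionally propagate uncertainty in the stage durations, which this sketch omits.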

  20. Abundant carbon in the mantle beneath Hawai`i

    USGS Publications Warehouse

    Anderson, Kyle R.; Poland, Michael

    2017-01-01

    Estimates of carbon concentrations in Earth’s mantle vary over more than an order of magnitude, hindering our ability to understand mantle structure and mineralogy, partial melting, and the carbon cycle. CO2 concentrations in mantle-derived magmas supplying hotspot ocean island volcanoes yield our most direct constraints on mantle carbon, but are extensively modified by degassing during ascent. Here we show that undegassed magmatic and mantle carbon concentrations may be estimated in a Bayesian framework using diverse geologic information at an ocean island volcano. Our CO2 concentration estimates do not rely upon complex degassing models, geochemical tracer elements, assumed magma supply rates, or rare undegassed rock samples. Rather, we couple volcanic CO2 emission rates with probabilistic magma supply rates, which are obtained indirectly from magma storage and eruption rates. We estimate that the CO2 content of mantle-derived magma supplying Hawai‘i’s active volcanoes is 0.97 (+0.25/−0.19) wt%, roughly 40% higher than previously believed, and is supplied from a mantle source region with a carbon concentration of 263 (+81/−62) ppm. Our results suggest that mantle plumes and ocean island basalts are carbon-rich. Our data also shed light on helium isotope abundances and CO2/Nb ratios, and may imply higher CO2 emission rates from ocean island volcanoes.

  1. Measurements of Pu and Ra isotopes in soils and sediments by AMS

    NASA Astrophysics Data System (ADS)

    Tims, S. G.; Hancock, G. J.; Wacker, L.; Fifield, L. K.

    2004-08-01

    Plutonium fallout from atmospheric nuclear weapons testing in the 1950s and 1960s constitutes an artificial tracer suitable for the study of recent soil erosion and sediment accumulation rates. Long-lived Pu isotopes provide an alternative tracer to the more widely used 137Cs (t1/2=30 a), the concentration of which is decaying at a rate that will limit its long-term application to these studies. For 239,240Pu, the sensitivity of AMS is more than an order of magnitude better than that afforded by α-spectroscopy. Furthermore, AMS can provide a simple, direct measure of the 240Pu/239Pu ratio. Sample profiles from two sites along eastern Australia have been determined with both AMS and α-spectroscopy to provide comparative measurements of the sediment accumulation rate in water bodies and of the soil erosion rate. The two methods are in good agreement. The 228Ra/226Ra ratio potentially provides a probe for tracing the dispersion of uranium mining residues into the neighboring environment. Soil depth profiles of the ratio may provide information on the rate at which mining-derived radioactivity is spread by surface waters, and could be used to assess the effectiveness of remediation and rehabilitation technologies. AMS offers several advantages over the more usual α- and γ-spectroscopy techniques in that it can directly and quickly measure both isotopes in a sample of small size and with simple sample preparation. We show that AMS can be used to measure these isotopes of radium at the sensitivity required for environmental samples using RaC2- as the injected beam species.

  2. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    PubMed

    Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P

    2009-12-01

    For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematic function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MR(glc)) estimation, and therapy response monitoring (Delta MR(glc)). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of (18)F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic (18)F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MR(glc) estimation and with Delta MR(glc) for therapy effects using serial sampling as the gold standard. Excellent individual fits to the function were derived with significantly different decay constants (P < 0.001). 
Correlations between AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0.880, 0.994, and 0.856, respectively. For MR(glc), these correlations were 0.963, 0.994, and 0.966, respectively. In response monitoring, these correlations were 0.947, 0.982, and 0.949, respectively. Additional scaling by 1 late arterial sample showed a significant improvement (P < 0.001). The fitted input function calibrated for AA and iDV performed similarly to the IDIF, and performance improved significantly using 1 late arterial sample. The proposed model can be used when an IDIF is not available or when serial arterial sampling is not feasible.
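    The proposed APTAC shape, a linear rise joined continuously at the peak to a triexponential decay, can be sketched as follows; the amplitudes and decay constants are hypothetical, not the fitted population medians from the study:

```python
import numpy as np

def aptac(t, t_peak, amps, decays):
    """Linear rise to the peak, then tri-exponential decay (illustrative)."""
    t = np.asarray(t, dtype=float)
    peak = sum(amps)                       # ensures continuity at t = t_peak
    tail = sum(a * np.exp(-lam * (t - t_peak))
               for a, lam in zip(amps, decays))
    return np.where(t < t_peak, peak * t / t_peak, tail)

# Hypothetical amplitudes (kBq/mL) and decay constants (1/min)
amps, decays = (30.0, 5.0, 1.0), (2.0, 0.2, 0.01)
print(aptac([0.25, 0.5, 5.0], 0.5, amps, decays))
```

    Calibrating such a population-shape curve amounts to rescaling it by administered activity and initial distribution volume, or anchoring it to one late arterial sample, which is the step the study found most improved agreement with serial sampling.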

  3. Relative and absolute reliability of measures of linoleic acid-derived oxylipins in human plasma.

    PubMed

    Gouveia-Figueira, Sandra; Bosson, Jenny A; Unosson, Jon; Behndig, Annelie F; Nording, Malin L; Fowler, Christopher J

    2015-09-01

    Modern analytical techniques allow for the measurement of oxylipins derived from linoleic acid in biological samples. Most validatory work has concerned extraction techniques, repeated analysis of aliquots from the same biological sample, and the influence of external factors such as diet and heparin treatment upon their levels, whereas less is known about the relative and absolute reliability of measurements undertaken on different days. A cohort of nineteen healthy males was used, where samples were taken at the same time of day on two occasions, at least 7 days apart. Relative reliability was assessed using Lin's concordance correlation coefficients (CCC) and intraclass correlation coefficients (ICC). Absolute reliability was assessed by Bland-Altman analyses. Nine linoleic acid oxylipins were investigated. ICC and CCC values ranged from acceptable (0.56 [13-HODE]) to poor (near zero [9(10)- and 12(13)-EpOME]). Bland-Altman limits of agreement were in general quite wide, ranging from ±0.5 (12,13-DiHOME) to ±2 (9(10)-EpOME; log10 scale). It is concluded that the relative reliability of linoleic acid-derived oxylipins varies between lipids, with compounds such as the HODEs showing better relative reliability than compounds such as the EpOMEs. These differences should be kept in mind when designing and interpreting experiments correlating plasma levels of these lipids with factors such as age, body mass index, rating scales, etc. Copyright © 2015 Elsevier Inc. All rights reserved.
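    Lin's CCC, one of the relative-reliability measures used here, can be computed directly from its definition; the paired measurements below are hypothetical values for illustration:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for test-retest data."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical paired oxylipin levels on two visits (arbitrary units)
visit1 = [1.2, 0.8, 1.5, 2.0, 1.1]
visit2 = [1.0, 0.9, 1.4, 1.8, 1.3]
print(round(lins_ccc(visit1, visit2), 2))
```

    Unlike the Pearson coefficient, the CCC penalizes both location and scale shifts between visits, which is why it is preferred for between-day reliability.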

  4. Consequences of sludge composition on combustion performance derived from thermogravimetry analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Meiyan; Xiao, Benyi; Wang, Xu

    Highlights: • Volatiles, particularly proteins, play a key role in sludge combustion. • Sludge combustion performance varies with different sludge organic concentrations. • Carbohydrates significantly affect the combustion rate in the second stage. • Combustion performance of digested sludge is more negative compared with others. - Abstract: Wastewater treatment plants produce millions of tons of sewage sludge. Sewage sludge is recognized as a promising feedstock for power generation via combustion and can be used for energy crisis adaptation. We aimed to investigate the quantitative effects of various sludge characteristics on the overall sludge combustion process performance. Different types of sewage sludge were derived from numerous wastewater treatment plants in Beijing for further thermogravimetric analysis. Thermogravimetric–differential thermogravimetric curves were used to compare the performance of the studied samples. Proximate analytical data, organic compositions, elementary composition, and calorific value of the samples were determined. The relationship between combustion performance and sludge composition was also investigated. Results showed that the performance of sludge combustion was significantly affected by the concentration of protein, which is the main component of volatiles. Carbohydrates and lipids were not correlated with combustion performance, unlike protein. Overall, combustion performance varied with different sludge organic composition. The combustion rate of carbohydrates was higher than those of protein and lipid, and carbohydrate weight loss mainly occurred during the second stage (175–300 °C). Carbohydrates have a substantial effect on the rate of system combustion during the second stage considering the specific combustion feature. Additionally, the combustion performance of digested sewage sludge is more negative than that of the others.

  5. Net community production and dark community respiration in a Karenia brevis (Davis) bloom in West Florida coastal waters, USA

    PubMed Central

    Hitchcock, Gary L.; Kirkpatrick, Gary; Minnett, Peter; Palubok, Valeriy

    2013-01-01

Oxygen-based productivity and respiration rates were determined in West Florida coastal waters to evaluate the proportion of community respiration demands met by autotrophic production within a harmful algal bloom dominated by Karenia brevis. The field program was adaptive in that sampling during the 2006 bloom occurred where surveys by the Florida Wildlife Research Institute indicated locations with high cell abundances. Net community production (NCP) rates from light-dark bottle incubations during the bloom ranged from 10 to 42 µmole O2 L−1 day−1 with the highest rates in bloom waters where abundances exceeded 10⁵ cells L−1. Community dark respiration (R) rates in dark bottles ranged from <10 to 70 µmole O2 L−1 day−1 over 24 h. Gross primary production derived from the sum of NCP and R varied from ca. 20 to 120 µmole O2 L−1 day−1. The proportion of GPP attributed to NCP varied with the magnitude of R during day and night periods. Most surface communities exhibited net autotrophic production (NCP > R) over 24 h, although heterotrophy (NCP < R) characterized the densest sample, where K. brevis cell densities exceeded 10⁶ cells L−1. PMID:24179460
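
The oxygen bookkeeping in these light-dark bottle incubations reduces to GPP = NCP + R. A minimal sketch, with invented round numbers (the helper name is ours, not the authors'):

```python
# Gross primary production from light-dark bottle incubations:
# GPP = NCP + R, all in umol O2 L^-1 day^-1 over the same 24 h.
# Values below are illustrative, not the study's data.

def gross_primary_production(ncp, r):
    """GPP from net community production (light bottle) and
    dark community respiration (dark bottle)."""
    return ncp + r

ncp = 42.0  # light-bottle net O2 change
r = 70.0    # dark-bottle O2 consumption
gpp = gross_primary_production(ncp, r)
print(gpp)       # 112.0
print(ncp > r)   # False -> NCP < R, i.e. net heterotrophic
```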

7. Salivary Secretory Immunoglobulin A secretion increases after 4-weeks ingestion of chlorella-derived multicomponent supplement in humans: a randomized cross over study

    PubMed Central

    2011-01-01

Background Chlorella, a unicellular green alga that grows in fresh water, contains high levels of proteins, vitamins, minerals, and dietary fibers. Some studies have reported favorable immune function-related effects on biological secretions such as blood and breast milk in humans who have ingested a chlorella-derived multicomponent supplement. However, the effects of chlorella-derived supplement on mucosal immune functions remain unclear. The purpose of this study was to investigate whether chlorella ingestion increases the salivary secretory immunoglobulin A (SIgA) secretion in humans using a blind, randomized, crossover study design. Methods Fifteen men took 30 placebo and 30 chlorella tablets per day for 4 weeks separated by a 12-week washout period. Before and after each trial, saliva samples were collected from a sterile cotton ball that was chewed after overnight fasting. Salivary SIgA concentrations were measured using ELISA. Results Compliance rates for placebo and chlorella ingestions were 97.0 ± 1.0% and 95.3 ± 1.6%, respectively. No difference was observed in salivary SIgA concentrations before and after placebo ingestion (P = 0.38). However, salivary SIgA concentrations were significantly elevated after chlorella ingestion compared to baseline (P < 0.01). No trial × period interaction was identified for the saliva flow rates. Although the SIgA secretion rate was not affected by placebo ingestion (P = 0.36), it increased significantly after the 4-week chlorella ingestion compared with pre-intake values (P < 0.01). Conclusions These results suggest that 4-week ingestion of a chlorella-derived multicomponent supplement increases salivary SIgA secretion and possibly improves mucosal immune function in humans. PMID:21906314

  8. Comparison of star formation rates from Hα and infrared luminosity as seen by Herschel

    NASA Astrophysics Data System (ADS)

    Domínguez Sánchez, H.; Mignoli, M.; Pozzi, F.; Calura, F.; Cimatti, A.; Gruppioni, C.; Cepa, J.; Sánchez Portal, M.; Zamorani, G.; Berta, S.; Elbaz, D.; Le Floc'h, E.; Granato, G. L.; Lutz, D.; Maiolino, R.; Matteucci, F.; Nair, P.; Nordon, R.; Pozzetti, L.; Silva, L.; Silverman, J.; Wuyts, S.; Carollo, C. M.; Contini, T.; Kneib, J.-P.; Le Fèvre, O.; Lilly, S. J.; Mainieri, V.; Renzini, A.; Scodeggio, M.; Bardelli, S.; Bolzonella, M.; Bongiorno, A.; Caputi, K.; Coppa, G.; Cucciati, O.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Iovino, A.; Kampczyk, P.; Knobel, C.; Kovač, K.; Lamareille, F.; Le Borgne, J.-F.; Le Brun, V.; Maier, C.; Magnelli, B.; Pelló, R.; Peng, Y.; Perez-Montero, E.; Ricciardelli, E.; Riguccini, L.; Tanaka, M.; Tasca, L. A. M.; Tresse, L.; Vergani, D.; Zucca, E.

    2012-10-01

We empirically test the relation between the SFR(LIR) derived from the infrared luminosity, LIR, and the SFR(Hα) derived from the Hα emission line luminosity using simple conversion relations. We use a sample of 474 galaxies at z = 0.06-0.46 with both Hα detection [from the 20k redshift Cosmological Evolution (zCOSMOS) survey] and new far-IR Herschel data (100 and 160 μm). We derive SFR(Hα) from the Hα extinction corrected emission line luminosity. We find a very clear trend between E(B - V) and LIR that allows us to estimate extinction values for each galaxy even if the Hβ emission line measurement is not reliable. We calculate the LIR by integrating the best-fitting spectral energy distribution (SED) from 8 up to 1000 μm. We compare the SFR(Hα) with the SFR(LIR). We find a very good agreement between the two star formation rate (SFR) estimates, with a slope of m = 1.01 ± 0.03 in the log SFR(LIR) versus log SFR(Hα) diagram, a normalization constant of a = -0.08 ± 0.03 and a dispersion of σ = 0.28 dex. We study the effect of some intrinsic properties of the galaxies on the SFR(LIR)-SFR(Hα) relation, such as the redshift, the mass, the specific star formation rate (SSFR) or the metallicity. Metallicity is the parameter that most affects the SFR comparison. The mean ratio of the two SFR estimators log[SFR(LIR)/SFR(Hα)] varies by ˜0.6 dex from metal-poor to metal-rich galaxies [8.1 < log (O/H) + 12 < 9.2]. This effect is consistent with the prediction of a theoretical model for the dust evolution in spiral galaxies. Considering different morphological types, we find a very good agreement between the two SFR indicators for the Sa, Sb and Sc morphologically classified galaxies, both in slope and in normalization. For the Sd, irregular sample (Sd/Irr), the formal best-fitting slope becomes much steeper (m = 1.62 ± 0.43), but it is still consistent with 1 at the 1.5σ level, because of the reduced statistics of this sub-sample.
Herschel is a European Space Agency (ESA) space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
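
The slope, normalization, and dispersion quoted in this record are an ordinary least-squares fit in log-log space. A sketch on synthetic data — only the sample size (474) and the quoted m, a, σ come from the abstract; the mock SFRs are invented:

```python
# Compare two SFR estimators by fitting
#   log SFR(LIR) = m * log SFR(Ha) + a
# on synthetic data built to mimic the quoted relation
# (m = 1.01 +/- 0.03, a = -0.08, sigma = 0.28 dex).
import numpy as np

rng = np.random.default_rng(0)
log_sfr_ha = rng.uniform(-1.0, 2.0, 474)                       # mock Ha-based SFRs
log_sfr_ir = 1.0 * log_sfr_ha - 0.08 + rng.normal(0.0, 0.28, 474)

m, a = np.polyfit(log_sfr_ha, log_sfr_ir, 1)                   # least-squares slope/offset
sigma = np.std(log_sfr_ir - (m * log_sfr_ha + a))              # dispersion in dex
print(round(m, 2), round(a, 2), round(sigma, 2))
```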

  9. Gas, Stars, and Star Formation in Alfalfa Dwarf Galaxies

    NASA Technical Reports Server (NTRS)

    Huang, Shan; Haynes, Martha P.; Giovanelli, Riccardo; Brinchmann, Jarle; Stierwalt, Sabrina; Neff, Susan G.

    2012-01-01

We examine the global properties of the stellar and H I components of 229 low H I mass dwarf galaxies extracted from the ALFALFA survey, including a complete sample of 176 galaxies with H I masses < 10^7.7 M_☉ and H I line widths < 80 km s⁻¹. Sloan Digital Sky Survey (SDSS) data are combined with photometric properties derived from Galaxy Evolution Explorer to derive stellar masses (M*) and star formation rates (SFRs) by fitting their UV-optical spectral energy distributions (SEDs). In optical images, many of the ALFALFA dwarfs are faint and of low surface brightness; only 56% of those within the SDSS footprint have a counterpart in the SDSS spectroscopic survey. A large fraction of the dwarfs have high specific star formation rates (SSFRs), and estimates of their SFRs and M* obtained by SED fitting are systematically smaller than ones derived via standard formulae assuming a constant SFR. The increased dispersion of the SSFR distribution at M* ≲ 10^8 M_☉ is driven by a set of dwarf galaxies that have low gas fractions and SSFRs; some of these are dE/dSphs in the Virgo Cluster. The imposition of an upper H I mass limit yields the selection of a sample with lower gas fractions for their M* than found for the overall ALFALFA population. Many of the ALFALFA dwarfs, particularly the Virgo members, have H I depletion timescales shorter than a Hubble time. An examination of the dwarf galaxies within the full ALFALFA population in the context of global star formation (SF) laws is consistent with the general assumptions that gas-rich galaxies have lower SF efficiencies than do optically selected populations and that H I disks are more extended than stellar ones.

  10. Molecular cloud-scale star formation in NGC 300

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faesi, Christopher M.; Lada, Charles J.; Forbrich, Jan

    2014-07-01

We present the results of a galaxy-wide study of molecular gas and star formation in a sample of 76 H II regions in the nearby spiral galaxy NGC 300. We have measured the molecular gas at 250 pc scales using pointed CO(J = 2-1) observations with the Atacama Pathfinder Experiment telescope. We detect CO in 42 of our targets, deriving molecular gas masses ranging from our sensitivity limit of ∼10^5 M_☉ to 7 × 10^5 M_☉. We find a clear decline in the CO detection rate with galactocentric distance, which we attribute primarily to the decreasing radial metallicity gradient in NGC 300. We combine Galaxy Evolution Explorer far-ultraviolet, Spitzer 24 μm, and Hα narrowband imaging to measure the star formation activity in our sample. We have developed a new direct modeling approach for computing star formation rates (SFRs) that utilizes these data and population synthesis models to derive the masses and ages of the young stellar clusters associated with each of our H II region targets. We find a characteristic gas depletion time of 230 Myr at 250 pc scales in NGC 300, more similar to the results obtained for Milky Way giant molecular clouds than the longer (>2 Gyr) global depletion times derived for entire galaxies and kiloparsec-sized regions within them. This difference is partially due to the fact that our study accounts for only the gas and stars within the youngest star-forming regions. We also note a large scatter in the NGC 300 SFR-molecular gas mass scaling relation that is furthermore consistent with the Milky Way cloud results. This scatter likely represents real differences in giant molecular cloud physical properties such as the dense gas fraction.
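
The characteristic depletion time quoted in this record is simply the molecular gas mass divided by the star formation rate. A hedged sketch with invented round numbers chosen only to land near the quoted 230 Myr:

```python
# Gas depletion timescale: t_dep = M(gas) / SFR.
# The cloud mass and SFR below are illustrative, not NGC 300 fits.

def depletion_time_myr(gas_mass_msun, sfr_msun_per_yr):
    """Depletion timescale in Myr for a gas reservoir consumed
    at the current star formation rate."""
    return gas_mass_msun / sfr_msun_per_yr / 1e6

# e.g. a 4.6e5 Msun cloud forming stars at 2e-3 Msun/yr:
print(depletion_time_myr(4.6e5, 2e-3))  # -> roughly 230 Myr
```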

  11. Biases and systematics in the observational derivation of galaxy properties: comparing different techniques on synthetic observations of simulated galaxies

    NASA Astrophysics Data System (ADS)

    Guidi, Giovanni; Scannapieco, Cecilia; Walcher, C. Jakob

    2015-12-01

    We study the sources of biases and systematics in the derivation of galaxy properties from observational studies, focusing on stellar masses, star formation rates, gas and stellar metallicities, stellar ages, magnitudes and colours. We use hydrodynamical cosmological simulations of galaxy formation, for which the real quantities are known, and apply observational techniques to derive the observables. We also analyse biases that are relevant for a proper comparison between simulations and observations. For our study, we post-process the simulation outputs to calculate the galaxies' spectral energy distributions (SEDs) using stellar population synthesis models and also generate the fully consistent far-UV-submillimetre wavelength SEDs with the radiative transfer code SUNRISE. We compared the direct results of simulations with the observationally derived quantities obtained in various ways, and found that systematic differences in all studied galaxy properties appear, which are caused by: (1) purely observational biases, (2) the use of mass-weighted and luminosity-weighted quantities, with preferential sampling of more massive and luminous regions, (3) the different ways of constructing the template of models when a fit to the spectra is performed, and (4) variations due to different calibrations, most notably for gas metallicities and star formation rates. Our results show that large differences can appear depending on the technique used to derive galaxy properties. Understanding these differences is of primary importance both for simulators, to allow a better judgement of similarities and differences with observations, and for observers, to allow a proper interpretation of the data.

  12. Elevation-dependent changes in n-alkane δD and soil GDGTs across the South Central Andes

    NASA Astrophysics Data System (ADS)

    Nieto-Moreno, Vanesa; Rohrmann, Alexander; van der Meer, Marcel T. J.; Sinninghe Damsté, Jaap S.; Sachse, Dirk; Tofelde, Stefanie; Niedermeyer, Eva M.; Strecker, Manfred R.; Mulch, Andreas

    2016-11-01

Surface uplift of large plateaus may significantly influence regional climate and more specifically precipitation patterns and temperature, sometimes complicating paleoaltimetry interpretations. Thus, understanding the topographic evolution of tectonically active mountain belts benefits from continued development of reliable proxies to reduce uncertainties in paleoaltimetry reconstructions. Lipid biomarker-based proxies provide a novel approach to stable isotope paleoaltimetry and complement authigenic or pedogenic mineral proxy materials, in particular outside semi-arid climate zones where soil carbonates are not abundant but (soil) organic matter has a high preservation potential. Here we present δD values of soil-derived n-alkanes and mean annual air temperature (MAT) estimates based on branched glycerol dialkyl glycerol tetraether (brGDGT) distributions to assess their potential for paleoelevation reconstructions in the southern central Andes. We analyzed soil samples across two environmental and hydrological gradients that include a hillslope (26-28°S) and a valley (22-24°S) transect on the windward flanks of the Central Andean Eastern Cordillera in NW Argentina. Our results show that present-day n-alkane δD values and brGDGT-based MAT estimates are both linearly related with elevation and in good agreement with present-day climate conditions. Soil n-alkanes show a δD lapse rate (Δ (δD)) of - 1.64 ‰ / 100 m (R2 = 0.91, p < 0.01) at the hillslope transect, within the range of δD lapse rates from precipitation and surface waters in other tropical regions in the Andes like the Eastern Cordillera in Colombia and Bolivia and the Equatorial and Peruvian Andes.
BrGDGT-derived soil temperatures are similar to monitored winter temperatures in the region and show a lapse rate of ΔT = - 0.51 °C / 100 m (R2 = 0.91, p < 0.01), comparable with lapse rates from in situ soil temperature measurements, satellite-derived land-surface temperatures at this transect, and weather stations from the Eastern Cordillera at similar latitude. As a result of an increasing leeward sampling position along the valley transect, lapse rates are biased towards lower values and display higher scatter (Δ (δD) = - 0.95 ‰ / 100 m, R2 = 0.76, p < 0.01 and ΔT = - 0.19 °C / 100 m, R2 = 0.48, p < 0.05). Despite this higher complexity, they are in line with lapse rates from stream-water samples and in situ soil temperature measurements along the same transect. Our results demonstrate that both soil n-alkane δD values and MAT reconstructions based on brGDGT distributions from the hillslope transect (Δ (δD) = - 1.64 ‰ / 100 m, R2 = 0.91, p < 0.01 and ΔT = - 0.51 °C / 100 m, R2 = 0.91, p < 0.01) track the direct effects of orography on precipitation and temperature and hence the combined effects of local and regional hydrology as well as elevation.
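
The lapse rates quoted in this record are least-squares slopes of the proxy value against elevation, rescaled to per-100 m. A sketch on synthetic, perfectly linear data — only the -1.64 ‰ / 100 m value comes from the abstract; the elevations and δD values are invented to reproduce it:

```python
# Isotopic lapse rate as an OLS slope of dD against elevation.
# Synthetic data constructed to lie exactly on a -1.64 permil / 100 m line.
import numpy as np

elev_m = np.array([500.0, 1000.0, 1500.0, 2500.0, 3500.0, 4500.0])
dD = np.array([-60.0, -68.2, -76.4, -92.8, -109.2, -125.6])  # permil

slope, intercept = np.polyfit(elev_m, dD, 1)  # permil per metre
lapse_per_100m = slope * 100.0
print(round(lapse_per_100m, 2))  # -1.64
```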

  13. Patterns of risk for anxiety-depression amongst Vietnamese-immigrants: a comparison with source and host populations

    PubMed Central

    2013-01-01

    Background Studies suggest that immigrants have higher rates of anxiety-depression than compatriots in low-middle income countries and lower rates than populations in host high income countries. Elucidating the factors that underlie these stepwise variations in prevalence may throw new light on the pathogenesis of anxiety-depressive disorders globally. This study aimed to examine whether quantitative differences in exposure to, or the interaction between, risk factors account for these anxiety-depression prevalence differences amongst immigrant relative to source and host country populations. Methods Multistage population mental health surveys were conducted in three groups: 1) a Vietnamese-immigrant sample settled in Australia (n = 1161); 2) a Vietnamese source country sample residing in the Mekong Delta region (n = 3039); 3) an Australian-born host country sample (n = 7964). Multivariable logistic regression analyses compared risk factors between the Vietnamese-immigrant group and: 1) the Mekong Delta Vietnamese; and 2) the Australian-born group. Twelve month anxiety-depression diagnoses were the main outcome measures, derived from the Composite International Diagnostic Interview (CIDI), supplemented by an indigenously derived measure - the Phan Vietnamese Psychiatric Scale (PVPS) in both Vietnamese groups. Results The 12-month prevalence of anxiety-depression showed a stepwise increase across groups: Mekong Delta Vietnamese 4.8%; Vietnamese-immigrants 7.0%; Australian-born 10.2%. The two Vietnamese populations showed a similar risk profile with older age, exposure to potentially traumatic events (PTEs), multiple physical illnesses and substance use disorder (SUD) being associated with anxiety-depression, with the older Vietnamese-immigrants reporting greater exposure to these factors. The interaction between key risk factors differed fundamentally when comparing Vietnamese-immigrant and Australian-born samples. 
Age emerged as the major discriminator, with young Vietnamese-immigrants exhibiting particularly low rates of anxiety-depression. Conclusions The findings reported here suggest that core risk factors for anxiety-depression may be universal, but their patterning and interaction may differ according to country-of-origin. The study also highlights the importance of including both standard international and culturally-specific measures to index cross-cultural manifestations of common mental disorders. PMID:24294940

  14. Use of Mental Health–Related Services Among Immigrant and US-Born Asian Americans: Results From the National Latino and Asian American Study

    PubMed Central

    Abe-Kim, Jennifer; Takeuchi, David T.; Hong, Seunghye; Zane, Nolan; Sue, Stanley; Spencer, Michael S.; Appel, Hoa; Nicdao, Ethel; Alegría, Margarita

    2007-01-01

    Objectives. We examined rates of mental health–related service use (i.e., any, general medical, and specialty mental health services) as well as subjective satisfaction with and perceived helpfulness of care in a national sample of Asian Americans, with a particular focus on immigration-related factors. Methods. Data were derived from the National Latino and Asian American Study (2002–2003). Results. About 8.6% of the total sample (n=2095) sought any mental health–related services; 34.1% of individuals who had a probable diagnosis sought any services. Rates of mental health–related service use, subjective satisfaction, and perceived helpfulness varied by birthplace and by generation. US-born Asian Americans demonstrated higher rates of service use than did their immigrant counterparts. Third-generation or later individuals who had a probable diagnosis had high (62.6%) rates of service use in the previous 12 months. Conclusions. Asian Americans demonstrated lower rates of any type of mental health–related service use than did the general population, although there are important exceptions to this pattern according to nativity status and generation status. Our results underscore the importance of immigration-related factors in understanding service use among Asian Americans. PMID:17138905

  15. Probabilistic measurement of non-physical constructs during early childhood: Epistemological implications for advancing psychosocial science

    NASA Astrophysics Data System (ADS)

    Bezruczko, N.; Fatani, S. S.

    2010-07-01

    Social researchers commonly compute ordinal raw scores and ratings to quantify human aptitudes, attitudes, and abilities but without a clear understanding of their limitations for scientific knowledge. In this research, common ordinal measures were compared to higher order linear (equal interval) scale measures to clarify implications for objectivity, precision, ontological coherence, and meaningfulness. Raw score gains, residualized raw gains, and linear gains calculated with a Rasch model were compared between Time 1 and Time 2 for observations from two early childhood learning assessments. Comparisons show major inconsistencies between ratings and linear gains. When gain distribution was dense, relatively compact, and initial status near item mid-range, linear measures and ratings were indistinguishable. When Time 1 status was distributed more broadly and magnitude of change variable, ratings were unrelated to linear gain, which emphasizes problematic implications of ordinal measures. Surprisingly, residualized gain scores did not significantly improve ordinal measurement of change. In general, raw scores and ratings may be meaningful in specific samples to establish order and high/low rank, but raw score differences suffer from non-uniform units. Even meaningfulness of sample comparisons, as well as derived proportions and percentages, are seriously affected by rank order distortions and should be avoided.
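
The non-uniform-units point can be made concrete with the dichotomous Rasch model used in this record: P(correct) = exp(θ − b) / (1 + exp(θ − b)). In the sketch below the item difficulties are invented; it shows that equal gains on the linear logit scale do not map to equal raw-score gains:

```python
# Dichotomous Rasch model: the same 1-logit ability gain yields
# different expected raw-score gains at different points on the scale,
# which is why raw-score differences suffer from non-uniform units.
import math

def rasch_p(theta, b):
    """Probability of a correct response given ability theta and
    item difficulty b (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_raw_score(theta, items):
    """Expected raw score: sum of correct-response probabilities."""
    return sum(rasch_p(theta, b) for b in items)

items = [-2.0, -1.0, 0.0, 1.0, 2.0]  # invented item difficulties

mid_gain = expected_raw_score(1.0, items) - expected_raw_score(0.0, items)
high_gain = expected_raw_score(3.0, items) - expected_raw_score(2.0, items)
print(mid_gain > high_gain)  # True: raw-score units shrink near the extremes
```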

  16. Using a detailed uncertainty analysis to adjust mapped rates of forest disturbance derived from Landsat time series data (Invited)

    NASA Astrophysics Data System (ADS)

    Cohen, W. B.; Yang, Z.; Stehman, S.; Huang, C.; Healey, S. P.

    2013-12-01

    Forest ecosystem process models require spatially and temporally detailed disturbance data to accurately predict fluxes of carbon or changes in biodiversity over time. A variety of new mapping algorithms using dense Landsat time series show great promise for providing disturbance characterizations at an annual time step. These algorithms provide unprecedented detail with respect to timing, magnitude, and duration of individual disturbance events, and causal agent. But all maps have error and disturbance maps in particular can have significant omission error because many disturbances are relatively subtle. Because disturbance, although ubiquitous, can be a relatively rare event spatially in any given year, omission errors can have a great impact on mapped rates. Using a high quality reference disturbance dataset, it is possible to not only characterize map errors but also to adjust mapped disturbance rates to provide unbiased rate estimates with confidence intervals. We present results from a national-level disturbance mapping project (the North American Forest Dynamics project) based on the Vegetation Change Tracker (VCT) with annual Landsat time series and uncertainty analyses that consist of three basic components: response design, statistical design, and analyses. The response design describes the reference data collection, in terms of the tool used (TimeSync), a formal description of interpretations, and the approach for data collection. The statistical design defines the selection of plot samples to be interpreted, whether stratification is used, and the sample size. Analyses involve derivation of standard agreement matrices between the map and the reference data, and use of inclusion probabilities and post-stratification to adjust mapped disturbance rates. Because for NAFD we use annual time series, both mapped and adjusted rates are provided at an annual time step from ~1985-present. 
Preliminary evaluations indicate that VCT captures most of the higher intensity disturbances, but that many of the lower intensity disturbances (thinnings, stress related to insects and disease, etc.) are missed. Because lower intensity disturbances are a large proportion of the total set of disturbances, adjusting mapped disturbance rates to include these can be important for inclusion in ecosystem process models. The described statistical disturbance rate adjustments are aspatial in nature, such that the basic underlying map is unchanged. For spatially explicit ecosystem modeling, such adjustments, although important, can be difficult to directly incorporate. One approach for improving the basic underlying map is an ensemble modeling approach that uses several different complementary maps, each derived from a different algorithm and having their own strengths and weaknesses relative to disturbance magnitude and causal agent of disturbance. We will present results from a pilot study associated with the Landscape Change Monitoring System (LCMS), an emerging national-level program that builds upon NAFD and the well-established Monitoring Trends in Burn Severity (MTBS) program.
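
The aspatial rate adjustment described above can be sketched as an area-weighted sum of stratum-level reference proportions. All numbers below are invented, and the two-stratum layout is a deliberate simplification of the NAFD design:

```python
# Post-stratified adjustment of a mapped disturbance rate using
# reference (TimeSync-style) plot interpretations, weighted by the
# area fraction of each map stratum. Numbers are illustrative only.

# (mapped class, fraction of area, proportion of reference plots in
# that stratum interpreted as truly disturbed)
strata = [
    ("mapped disturbed",   0.03, 0.80),  # some commission error
    ("mapped undisturbed", 0.97, 0.02),  # omission of subtle disturbance
]

mapped_rate = 0.03  # raw map-based annual disturbance rate
adjusted_rate = sum(area * p_true for _, area, p_true in strata)
print(round(adjusted_rate, 4))  # 0.0434 -> omission raises the estimate
```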

  17. Uncertainty in Population Growth Rates: Determining Confidence Intervals from Point Estimates of Parameters

    PubMed Central

    Devenish Nelson, Eleanor S.; Harris, Stephen; Soulsbury, Carl D.; Richards, Shane A.; Stephens, Philip A.

    2010-01-01

    Background Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. Methodology/Principal Findings We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. Conclusions/Significance Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species. PMID:21049049
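
The resample-then-project idea can be sketched in a few lines. This is a generic percentile-interval illustration with an invented two-stage model and invented point estimates, not the paper's likelihood method or its red fox data:

```python
# Resample vital rates around their point estimates, project lambda for
# each draw, and read a confidence interval off the resulting distribution.
import math
import random

def growth_rate(survival, fecundity):
    """Lambda for a toy 2-stage model [[0, f], [s, s]]: dominant root of
    x^2 - s*x - s*f = 0."""
    s, f = survival, fecundity
    return (s + math.sqrt(s * s + 4.0 * s * f)) / 2.0

random.seed(1)
lams = []
for _ in range(10_000):
    s = min(max(random.gauss(0.6, 0.05), 0.0), 1.0)  # point estimate +/- SE
    f = max(random.gauss(1.0, 0.15), 0.0)
    lams.append(growth_rate(s, f))
lams.sort()
lo, hi = lams[249], lams[9749]  # 95% percentile interval
print(round(lo, 2), round(hi, 2))
```

Halving the interval width requires roughly quadrupling the sampling effort, since standard errors shrink as 1/√n.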

  18. Output statistics of laser anemometers in sparsely seeded flows

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Jensen, A. S.

    1982-01-01

It is noted that until very recently, research on this topic concentrated on the particle arrival statistics and the influence of the optical parameters on them. Little attention has been paid to the influence of subsequent processing on the measurement statistics. There is also controversy over whether the effects of the particle statistics can be measured. It is shown here that some of the confusion derives from a lack of understanding of the experimental parameters that are to be controlled or known. A rigorous framework is presented for examining the measurement statistics of such systems. To provide examples, two problems are then addressed. The first has to do with a sample and hold processor, the second with what is called a saturable processor. The sample and hold processor converts the output to a continuous signal by holding the last reading until a new one is obtained. The saturable system is one where the maximum processable rate is set by the dead time of some unit in the system. At high particle rates, the processed rate is determined by that dead time.
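
A sample-and-hold processor of the kind described here is easy to sketch: hold the last particle reading until the next arrival. The function below is a generic illustration, not the paper's processor model:

```python
# Sample-and-hold: resample sparse, randomly timed velocity readings
# onto a regular clock by holding the most recent reading.

def sample_and_hold(arrival_times, values, clock_times):
    """For each clock tick, return the last reading whose arrival time
    is <= the tick (None before the first arrival). Inputs must be
    sorted by time."""
    out, i, last = [], 0, None
    for t in clock_times:
        while i < len(arrival_times) and arrival_times[i] <= t:
            last = values[i]
            i += 1
        out.append(last)
    return out

# Particles arrive at irregular (e.g. Poisson) times:
times = [0.1, 0.35, 0.4, 1.2]
vels = [10.0, 11.0, 9.5, 10.5]
print(sample_and_hold(times, vels, [0.0, 0.5, 1.0, 1.5]))
# [None, 9.5, 9.5, 10.5]
```

Note the held signal is biased toward readings from high-velocity periods when seeding is uniform in space, which is one of the measurement-statistics effects the paper's framework addresses.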

  19. Detection of Volatile Metabolites Derived from Garlic (Allium sativum) in Human Urine

    PubMed Central

    Scheffler, Laura; Sauermann, Yvonne; Heinlein, Anja; Sharapa, Constanze; Buettner, Andrea

    2016-01-01

    The metabolism and excretion of flavor constituents of garlic, a common plant used in flavoring foods and attributed with several health benefits, in humans is not fully understood. Likewise, the physiologically active principles of garlic have not been fully clarified to date. It is possible that not only the parent compounds present in garlic but also its metabolites are responsible for the specific physiological properties of garlic, including its influence on the characteristic body odor signature of humans after garlic consumption. Accordingly, the aim of this study was to investigate potential garlic-derived metabolites in human urine. To this aim, 14 sets of urine samples were obtained from 12 volunteers, whereby each set comprised one sample that was collected prior to consumption of food-relevant concentrations of garlic, followed by five to eight subsequent samples after garlic consumption that covered a time interval of up to 26 h. The samples were analyzed chemo-analytically using gas chromatography-mass spectrometry/olfactometry (GC-MS/O), as well as sensorially by a trained human panel. The analyses revealed three different garlic-derived metabolites in urine, namely allyl methyl sulfide (AMS), allyl methyl sulfoxide (AMSO) and allyl methyl sulfone (AMSO2), confirming our previous findings on human milk metabolite composition. The excretion rates of these metabolites into urine were strongly time-dependent with distinct inter-individual differences. These findings indicate that the volatile odorant fraction of garlic is heavily biotransformed in humans, opening up a window into substance circulation within the human body with potential wider ramifications in view of physiological effects of this aromatic plant that is appreciated by humans in their daily diet. PMID:27916960

  20. The Ages of the Thin Disk, Thick Disk, and the Halo from Nearby White Dwarfs

    NASA Astrophysics Data System (ADS)

    Kilic, Mukremin; Munn, Jeffrey A.; Harris, Hugh C.; von Hippel, Ted; Liebert, James W.; Williams, Kurtis A.; Jeffery, Elizabeth; DeGennaro, Steven

    2017-03-01

We present a detailed analysis of the white dwarf luminosity functions derived from the local 40 pc sample and the deep proper motion catalog of Munn et al. Many previous studies have ignored the contribution of thick disk white dwarfs to the Galactic disk luminosity function, which results in an erroneous age measurement. We demonstrate that the ratio of thick/thin disk white dwarfs is roughly 20% in the local sample. Simultaneously fitting for both disk components, we derive ages of 6.8-7.0 Gyr for the thin disk and 8.7 ± 0.1 Gyr for the thick disk from the local 40 pc sample. Similarly, we derive ages of 7.4-8.2 Gyr for the thin disk and 9.5-9.9 Gyr for the thick disk from the deep proper motion catalog, which shows no evidence of a deviation from a constant star formation rate in the past 2.5 Gyr. We constrain the time difference between the onset of star formation in the thin disk and the thick disk to be 1.6 (+0.3/−0.4) Gyr. The faint end of the luminosity function for the halo white dwarfs is less constrained, resulting in an age estimate of 12.5 (+1.4/−3.4) Gyr for the Galactic inner halo. This is the first time that ages for all three major components of the Galaxy have been obtained from a sample of field white dwarfs that is large enough to contain significant numbers of disk and halo objects. The resultant ages agree reasonably well with the age estimates for the oldest open and globular clusters.

  1. Molecular characteristics of continuously released DOM during one year of root and leaf litter decomposition

    NASA Astrophysics Data System (ADS)

    Altmann, Jens; Jansen, Boris; Kalbitz, Karsten; Filley, Timothy

    2013-04-01

    Dissolved organic matter (DOM) is one of the most dynamic carbon pools linking the terrestrial with the aquatic carbon cycle. Besides the uncertain contribution of terrestrial DOM to the greenhouse effect, DOM also plays an important role in the mobility and availability of heavy metals and organic pollutants in soils. These processes depend very much on the molecular characteristics of the DOM. Surprisingly, the processes that determine the molecular composition of DOM are only poorly understood. DOM can originate from various sources, which influence its molecular composition. It has been recognized that DOM formation is not a static process and that DOM characteristics vary not only between different carbon sources but also over time. However, the molecular characteristics of DOM extracts have scarcely been studied continuously over a longer period of time. Because the parent litter material or soil organic matter changes constantly at the molecular level during microbial degradation, we assumed that the molecular characteristics of litter-derived DOM also vary at different stages during root and needle decomposition. For this study we analyzed the chemical composition of root and leaf samples of 6 temperate tree species during one year of litter decomposition in a laboratory incubation. During this long-term experiment we continuously measured carbon and nitrogen contents of the water extracts and the remaining residues, C mineralization rates, and the chemical composition of water extracts and residues by Curie-point pyrolysis mass spectrometry with TMAH. We focused on the following questions: (I) How mobile are molecules derived from plant polymers like tannin, lignin, suberin and cutin? (II) How does the composition of root- and leaf-derived DOM change over time depending on the stage of decomposition and species? Litter-derived DOM was generally dominated by aromatic compounds. Substituted fatty acids, as typically derived from cutin or suberin, were not detected in the water extracts.
Fresh leaf and needle samples released a much higher amount of tannins than fresh root samples. At later litter decomposition stages the influence of tannins decreased and lignin-derived phenols dominated the extracts. With ongoing litter degradation the degree of oxidation of the litter material increased, which was also reflected by the water-extracted molecules.

  2. Toxic Hazards Research Unit Annual Technical Report: 1975

    DTIC Science & Technology

    1975-10-01

    [Fragmentary search-result excerpts: figure-list entries on different sampling rates, particle size distribution curves, and the effect of O3 or NO2 concentrations on rat lung weight; test animals consisted of female C57 black/6 mice obtained from Jackson Laboratories and male CDF (Fischer 344 derived) albino rats from Charles River; clinical chemistry endpoints measured at the conclusion of the 5 ppm and 0.5 ppm experiments included blood urea nitrogen, SGOT, chloride, and prothrombin.]

  3. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    PubMed

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
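
    The stratified design with regression extrapolation described above can be sketched with entirely synthetic data. This is a minimal illustration, not the paper's data or code: block counts, stratum cutoffs, the sampling fraction, and the linear MODIS-TM relation are all assumptions chosen only to show the mechanics of stratifying on an auxiliary variable and extrapolating with a fitted regression.

```python
import random

random.seed(42)

# Synthetic population of map blocks: MODIS-derived deforestation (the
# auxiliary variable, known for every block) and TM-derived deforestation
# (the reference, known only for sampled blocks).  All numbers are
# illustrative assumptions.
blocks = []
for _ in range(2000):
    modis = random.uniform(0.0, 10.0)                  # km^2 per block
    tm = max(0.0, 0.9 * modis + random.gauss(0.0, 0.5))
    blocks.append((modis, tm))

true_total = sum(tm for _, tm in blocks)

# Stratify blocks by MODIS deforestation hotspot level (three strata here).
strata = {0: [], 1: [], 2: []}
for modis, tm in blocks:
    level = 0 if modis < 3.0 else (1 if modis < 7.0 else 2)
    strata[level].append((modis, tm))

def ols(xs, ys):
    """Ordinary least squares fit y = a + b*x, returning (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# Within each stratum: sample blocks, regress TM on MODIS, then extrapolate
# using the MODIS totals known for the whole stratum.
estimate = 0.0
for members in strata.values():
    sample = random.sample(members, max(10, len(members) // 20))
    a, b = ols([m for m, _ in sample], [t for _, t in sample])
    estimate += a * len(members) + b * sum(m for m, _ in members)

print(f"true TM total: {true_total:.0f} km^2, "
      f"stratified regression estimate: {estimate:.0f} km^2")
```

    Because the auxiliary variable is correlated with the target, the stratified regression estimator lands close to the true total with only a small sampling fraction, which is the advantage the abstract reports over simple random and systematic sampling.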

  4. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    PubMed Central

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block. PMID:25258742

  5. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices.

    PubMed

    Harrar, Solomon W; Kong, Xiaoli

    2015-03-01

    In this paper, test statistics for repeated measures designs are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in both balanced and unbalanced designs. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations show that the new methods have power comparable to that of a popular method known to work well in low-dimensional situations, but they show an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results.

  6. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices

    PubMed Central

    Harrar, Solomon W.; Kong, Xiaoli

    2015-01-01

    In this paper, test statistics for repeated measures designs are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in both balanced and unbalanced designs. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations show that the new methods have power comparable to that of a popular method known to work well in low-dimensional situations, but they show an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results. PMID:26778861

  7. Characterizing Drainage Multiphase Flow in Heterogeneous Sandstones

    NASA Astrophysics Data System (ADS)

    Jackson, Samuel J.; Agada, Simeon; Reynolds, Catriona A.; Krevor, Samuel

    2018-04-01

    In this work, we analyze the characterization of drainage multiphase flow properties on heterogeneous rock cores using a rich experimental data set and mm-m scale numerical simulations. Along with routine multiphase flow properties, 3-D submeter scale capillary pressure heterogeneity is characterized by combining experimental observations and numerical calibration, resulting in a 3-D numerical model of the rock core. The uniqueness and predictive capability of the numerical models are evaluated by accurately predicting the experimentally measured relative permeability of N2—DI water and CO2—brine systems in two distinct sandstone rock cores across multiple fractional flow regimes and total flow rates. The numerical models are used to derive equivalent relative permeabilities, which are upscaled functions incorporating the effects of submeter scale capillary pressure. The functions are obtained across capillary numbers which span four orders of magnitude, representative of the range of flow regimes that occur in subsurface CO2 injection. Removal of experimental boundary artifacts allows the derivation of equivalent functions which are characteristic of the continuous subsurface. We also demonstrate how heterogeneities can be reorientated and restructured to efficiently estimate flow properties in rock orientations differing from the original core sample. This analysis shows how combined experimental and numerical characterization of rock samples can be used to derive equivalent flow properties from heterogeneous rocks.

  8. Oil field management system

    DOEpatents

    Fincke, James R.

    2003-09-23

    Oil field management systems and methods for managing operation of one or more wells producing a high void fraction multiphase flow. The system includes a differential pressure flow meter which samples pressure readings at various points of interest throughout the system and uses pressure differentials derived from the pressure readings to determine gas and liquid phase mass flow rates of the high void fraction multiphase flow. One or both of the gas and liquid phase mass flow rates are then compared with predetermined criteria. In the event such mass flow rates satisfy the predetermined criteria, a well control system implements a correlating adjustment action respecting the multiphase flow. In this way, various parameters regarding the high void fraction multiphase flow are used as control inputs to the well control system and thus facilitate management of well operations.

  9. Evaluation of process errors in bed load sampling using a Dune Model

    USGS Publications Warehouse

    Gomez, Basil; Troutman, Brent M.

    1997-01-01

    Reliable estimates of the streamwide bed load discharge obtained using sampling devices are dependent upon good at-a-point knowledge across the full width of the channel. Using field data and information derived from a model that describes the geometric features of a dune train in terms of a spatial process observed at a fixed point in time, we show that sampling errors decrease as the number of samples collected increases, and the number of traverses of the channel over which the samples are collected increases. It also is preferable that bed load sampling be conducted at a pace which allows a number of bed forms to pass through the sampling cross section. The situations we analyze and simulate pertain to moderate transport conditions in small rivers. In such circumstances, bed load sampling schemes typically should involve four or five traverses of a river, and the collection of 20–40 samples at a rate of five or six samples per hour. By ensuring that spatial and temporal variability in the transport process is accounted for, such a sampling design reduces both random and systematic errors and hence minimizes the total error involved in the sampling process.
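
    The qualitative claim above, that sampling error shrinks as more samples are collected and as the collection spans several passing bed forms, can be illustrated with a toy Monte Carlo. This is NOT the paper's dune model: the sinusoidal transport signal, its period and amplitude, the noise level, and the pacing of 5.5 samples per hour are all assumptions chosen only to show the effect.

```python
import math
import random

random.seed(1)

# Toy bed load transport signal: a mean rate modulated by dunes passing the
# sampling cross section (sinusoid plus noise).  Period, amplitude, and
# noise level are illustrative assumptions.
DUNE_PERIOD_H = 1.0   # hours per bed form
MEAN_RATE = 10.0      # arbitrary transport units

def transport(t_hours):
    phase = 2.0 * math.pi * t_hours / DUNE_PERIOD_H
    return max(0.0, MEAN_RATE + 6.0 * math.sin(phase) + random.gauss(0.0, 1.0))

def sampled_mean(n_samples, samples_per_hour=5.5):
    # Samples collected at a steady pace, starting at a random dune phase.
    t0 = random.uniform(0.0, DUNE_PERIOD_H)
    values = [transport(t0 + i / samples_per_hour) for i in range(n_samples)]
    return sum(values) / n_samples

def rms_error(n_samples, trials=500):
    sq = [(sampled_mean(n_samples) - MEAN_RATE) ** 2 for _ in range(trials)]
    return math.sqrt(sum(sq) / trials)

few, many = rms_error(5), rms_error(30)
print(f"RMS error of the mean: {few:.2f} with 5 samples, {many:.2f} with 30")
```

    With 30 samples the collection period spans several dune periods, so the bed-form fluctuation largely averages out; with 5 samples it does not, and the error of the estimated mean transport rate is correspondingly larger.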

  10. The star formation rate cookbook at 1 < z < 3: Extinction-corrected relations for UV and [OII]λ3727 luminosities

    NASA Astrophysics Data System (ADS)

    Talia, M.; Cimatti, A.; Pozzetti, L.; Rodighiero, G.; Gruppioni, C.; Pozzi, F.; Daddi, E.; Maraston, C.; Mignoli, M.; Kurk, J.

    2015-10-01

    Aims: In this paper we use a well-controlled spectroscopic sample of galaxies at 1 < z < 3.

  11. Estimation of absorbed radiation dose rates in wild rodents inhabiting a site severely contaminated by the Fukushima Dai-ichi nuclear power plant accident.

    PubMed

    Kubota, Yoshihisa; Takahashi, Hiroyuki; Watanabe, Yoshito; Fuma, Shoichi; Kawaguchi, Isao; Aoki, Masanari; Kubota, Masahide; Furuhata, Yoshiaki; Shigemura, Yusaku; Yamada, Fumio; Ishikawa, Takahiro; Obara, Satoshi; Yoshida, Satoshi

    2015-04-01

    The dose rates of radiation absorbed by wild rodents inhabiting a site severely contaminated by the Fukushima Dai-ichi Nuclear Power Plant accident were estimated. The large Japanese field mouse (Apodemus speciosus), also called the wood mouse, was the major rodent species captured in the sampling area, although other species of rodents, such as small field mice (Apodemus argenteus) and Japanese grass voles (Microtus montebelli), were also collected. The external exposure of rodents calculated from the activity concentrations of radiocesium ((134)Cs and (137)Cs) in litter and soil samples using the ERICA (Environmental Risk from Ionizing Contaminants: Assessment and Management) tool under the assumption that radionuclides existed as the infinite plane isotropic source was almost the same as those measured directly with glass dosimeters embedded in rodent abdomens. Our findings suggest that the ERICA tool is useful for estimating external dose rates to small animals inhabiting forest floors; however, the estimated dose rates showed large standard deviations. This could be an indication of the inhomogeneous distribution of radionuclides in the sampled litter and soil. There was a 50-fold difference between minimum and maximum whole-body activity concentrations measured in rodents at the time of capture. The radionuclides retained in rodents after capture decreased exponentially over time. Regression equations indicated that the biological half-life of radiocesium after capture was 3.31 d. At the time of capture, the lowest activity concentration was measured in the lung and was approximately half of the highest concentration measured in the mixture of muscle and bone. The average internal absorbed dose rate was markedly smaller than the average external dose rate (<10% of the total absorbed dose rate). 
The average total absorbed dose rate to wild rodents inhabiting the sampling area was estimated to be approximately 52 μGy h(-1) (1.2 mGy d(-1)), even 3 years after the accident. This dose rate exceeds the 0.1-1 mGy d(-1) derived consideration reference level for the Reference Rat proposed by the International Commission on Radiological Protection (ICRP). Copyright © 2015 Elsevier Ltd. All rights reserved.
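
    The unit conversion behind the figures above is a quick arithmetic check: 52 μGy/h over a 24 h day is 1248 μGy/d, i.e. about 1.2 mGy/d, which sits above the 0.1-1 mGy/d derived consideration reference level (DCRL) band. The numbers below are taken from the abstract; the variable names are ours.

```python
# Convert the reported dose rate from uGy/h to mGy/d and compare it with
# the ICRP derived consideration reference level (DCRL) band for
# Reference Rat (values as quoted in the abstract above).
dose_ugy_per_h = 52.0
dose_mgy_per_d = dose_ugy_per_h * 24.0 / 1000.0   # 24 h/d, 1000 uGy per mGy

drcl_low_mgy_per_d, drcl_high_mgy_per_d = 0.1, 1.0

print(f"{dose_ugy_per_h} uGy/h = {dose_mgy_per_d:.2f} mGy/d")
print("exceeds the upper DCRL bound:", dose_mgy_per_d > drcl_high_mgy_per_d)
```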

  12. Heart rate time series characteristics for early detection of infections in critically ill patients.

    PubMed

    Tambuyzer, T; Guiza, F; Boonen, E; Meersseman, P; Vervenne, H; Hansen, T K; Bjerre, M; Van den Berghe, G; Berckmans, D; Aerts, J M; Meyfroidt, G

    2017-04-01

    It is difficult to make a distinction between inflammation and infection, so new strategies are required to allow accurate detection of infection. Here, we hypothesize that infected ICU patients can be distinguished from non-infected ones based on dynamic features of serum cytokine concentrations and heart rate time series. Serum cytokine profiles and heart rate time series of 39 patients were available for this study. The serum concentrations of ten cytokines were measured using blood sampled every 10 min between 2100 and 0600 hours. Heart rate was recorded every minute. Ten metrics were used to extract features from these time series to obtain an accurate classification of infected patients. The predictive power of the metrics derived from the heart rate time series was investigated using decision tree analysis. Finally, logistic regression methods were used to examine whether classification performance improved with inclusion of features derived from the cytokine time series. The AUC of a decision tree based on two heart rate features was 0.88. The model had good calibration, with a Hosmer-Lemeshow p value of 0.09. There was no significant additional value in adding static cytokine levels or cytokine time series information to the generated decision tree model. The results suggest that heart rate is a better marker for infection than information captured by cytokine time series when the exact stage of infection is not known. The predictive value of (expensive) biomarkers should always be weighed against routinely monitored data, and such biomarkers have to demonstrate added value.
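
    The AUC reported above measures how well a feature ranks infected above non-infected patients. A minimal sketch of that idea, using the rank-based (Mann-Whitney) formulation of ROC AUC on a single hypothetical "heart-rate variability" feature: the group sizes, means, and the assumption that infection suppresses variability are all synthetic, not the study's data or its decision-tree model.

```python
import random

random.seed(7)

def roc_auc(pos_scores, neg_scores):
    # Probability that a randomly chosen positive outranks a randomly
    # chosen negative (ties count one half): the Mann-Whitney form of AUC.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical feature: heart-rate variability, assumed lower in infection.
# Scores are negated so that a higher score indicates infection.
infected = [-random.gauss(20.0, 5.0) for _ in range(20)]
non_infected = [-random.gauss(35.0, 5.0) for _ in range(19)]

auc = roc_auc(infected, non_infected)
print(f"AUC of the synthetic variability feature: {auc:.2f}")
```

    A well-separated feature pushes the AUC toward 1.0; an uninformative one sits near 0.5, which is the scale against which the study's reported 0.88 should be read.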

  13. Evaluation of pump pulsation in respirable size-selective sampling: Part III. Investigation of European standard methods.

    PubMed

    Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A; Kashon, Michael L; Harper, Martin

    2014-10-01

    Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60-73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN) (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric flow rate pumps), back pressures (six medium- and seven high-flow rate pumps), resistors (two types), tubing lengths between a pump and resistor (60 and 90 cm), and different flow rates (2 and 2.5 l min(-1) for the medium- and 4.4, 10, and 11.2 l min(-1) for the high-flow rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among six medium-flow rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, the average PP (10.9%) was close to the criterion. One high-flow rate pump, the Legacy (PP=8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP=18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. 
For the selected test conditions, a linear regression model [PPEN=0.014+0.375×PPNIOSH (adjusted R2=0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods. The 25% PP criterion recommended by Lee et al. (2014a), an average value derived from repeated measurements, corresponds to 11% PPEN. The 10% pass/fail criterion in the EN Standards is not based on extensive laboratory evaluation and would unreasonably exclude at least one pump (i.e. the AirChek XR5000 in this study); therefore, the more accurate criterion of an 11% average from repeated measurements should be substituted. This study suggests that users can measure PP using either a real-world sampling train or a resistor setup and obtain equivalent findings by applying the model derived herein. The findings of this study will be delivered to the consensus committees for consideration when the relevant standards, including EN 1232-1997, EN 12919-1999, and ISO 13137-2013, are revised. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2014.
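
    The regression above gives a direct conversion between the two pulsation scales. A one-line sketch, with pulsations expressed as fractions (0.25 = 25%); the function name is ours, and the coefficients are those quoted in the abstract:

```python
# Convert pump pulsation measured with a real-world sampling train
# (PP_NIOSH, Lee et al. 2014a) to the EN resistor-setup scale using the
# regression reported above: PP_EN = 0.014 + 0.375 * PP_NIOSH.
def pp_en_from_niosh(pp_niosh):
    return 0.014 + 0.375 * pp_niosh

# The 25% criterion on the NIOSH scale maps to roughly 11% on the EN scale.
criterion_en = pp_en_from_niosh(0.25)
print(f"25% PP (NIOSH scale) -> {criterion_en:.1%} on the EN scale")
```

    Evaluating the model at 0.25 gives 0.014 + 0.375 × 0.25 = 0.10775, i.e. about 11%, which is the equivalence the abstract states.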

  14. Temporal variability in stage-discharge relationships

    NASA Astrophysics Data System (ADS)

    Guerrero, José-Luis; Westerberg, Ida K.; Halldin, Sven; Xu, Chong-Yu; Lundin, Lars-Christer

    2012-06-01

    Summary: Although discharge estimations are central for water management and hydropower, there are few studies on the variability and uncertainty of their basis: deriving discharge from stage heights through a rating curve that depends on riverbed geometry. A large fraction of the world's river-discharge stations are presumably located in alluvial channels where riverbed characteristics may change over time because of erosion and sedimentation. This study was conducted to analyse and quantify the dynamic relationship between stage and discharge and to determine to what degree currently used methods are able to account for such variability. The study was carried out for six hydrometric stations in the upper Choluteca River basin, Honduras, where a set of unusually frequent stage-discharge data are available. The temporal variability and the uncertainty of the rating curve and its parameters were analysed through a Monte Carlo (MC) analysis on a moving window of data using the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. Acceptable ranges for the values of the rating-curve parameters were determined from riverbed surveys at the six stations, and the sampling space was constrained according to those ranges, using three-dimensional alpha shapes. Temporal variability was analysed in three ways: (i) with annually updated rating curves (simulating Honduran practices), (ii) with a rating curve for each time window, and (iii) with a smoothed, continuous dynamic rating curve derived from the MC analysis. The temporal variability of the rating parameters translated into high rating-curve variability, which manifested as increasing or decreasing trends and/or cyclic behaviour. All stations showed a tendency toward seasonal variability. The discharge at a given stage could vary by a factor of two or more. The quotient of discharge volumes estimated from dynamic and static rating curves varied between 0.5 and 1.5.
The difference between discharge volumes derived from static and dynamic curves was largest for sub-daily ratings but stayed large also for monthly and yearly totals. The relative uncertainty was largest for low flows but it was considerable also for intermediate and large flows. The standard procedure of adjusting rating curves when calculated and observed discharge differ by more than 5% would have required continuously updated rating curves at the studied locations. We believe that these findings can be applicable to many other discharge stations around the globe.
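
    The rating curve at the heart of the study can be sketched in its common power-law form, Q = a(h - h0)^b, fitted to a handful of gaugings by least squares in log-log space. The gaugings, the assumed-known datum offset h0, and the single fitting window below are illustrative assumptions, not data from the Choluteca stations; refitting a and b over a moving window of gaugings is what exposes the temporal variability the abstract describes.

```python
import math

# Power-law rating curve Q = a * (h - h0)^b with h0 (cease-to-flow stage)
# assumed known; a and b fitted by ordinary least squares on
# ln Q = ln a + b * ln(h - h0).  Synthetic gaugings only.
h0 = 0.20  # m, assumed datum offset

# (stage h in m, measured discharge Q in m^3/s) from one "time window"
gaugings = [(0.5, 2.1), (0.8, 6.3), (1.2, 14.8), (1.6, 27.5), (2.0, 44.0)]

xs = [math.log(h - h0) for h, _ in gaugings]
ys = [math.log(q) for _, q in gaugings]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

def discharge(h_m):
    """Discharge derived from stage via the fitted rating curve."""
    return a * (h_m - h0) ** b

print(f"fitted rating curve: Q = {a:.1f} * (h - {h0:.2f})^{b:.2f}")
print(f"discharge at stage 1.0 m: {discharge(1.0):.1f} m^3/s")
```

    In an alluvial channel, erosion and sedimentation shift h0, a, and b between windows, so the same stage can map to substantially different discharges over time, which is why the study finds factor-of-two differences at a given stage.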

  15. Constant strain rate experiments and constitutive modeling for a class of bitumen

    NASA Astrophysics Data System (ADS)

    Reddy, Kommidi Santosh; Umakanthan, S.; Krishnan, J. Murali

    2012-08-01

    The mechanical properties of bitumen vary with the nature of the crude source and the processing methods employed. To understand the role that processing conditions play in the mechanical properties, bitumen samples derived from the same crude source but processed differently (blown and blended) are investigated. The samples are subjected to constant strain rate experiments in a parallel plate rheometer. The torque applied to realize the prescribed angular velocity of the top plate and the normal force applied to maintain the gap between the top and bottom plates are measured. It is found that when the top plate is held stationary, the time taken for the torque to decrease by a certain percentage of its maximum value differs from the time taken for the normal force to decrease by the same percentage of its maximum value. Further, the time at which the maximum torque occurs is different from the time at which the maximum normal force occurs. Since the existing constitutive relations for bitumen cannot capture the difference in the relaxation times for the torque and normal force, a new rate type constitutive model incorporating this response is proposed. Although the blended and blown bitumen samples used in this study correspond to the same grade, the mechanical responses of the two samples are not the same. This is also reflected in the difference in the values of the material parameters in the proposed model. The differences in the mechanical properties between the differently processed bitumen samples increase further with aging. This has implications for the long-term performance of the pavement.

  16. X-Ray Properties of Lyman Break Galaxies in the Hubble Deep Field North Region

    NASA Technical Reports Server (NTRS)

    Nandra, K.; Mushotzky, R. F.; Arnaud, K.; Steidel, C. C.; Adelberger, K. L.; Gardner, J. P.; Teplitz, H. I.; Windhorst, R. A.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We describe the X-ray properties of a large sample of z approximately 3 Lyman Break Galaxies (LBGs) in the region of the Hubble Deep Field North, derived from the 1 Ms public Chandra observation. Of our sample of 148 LBGs, four are detected individually. This immediately gives a measure of the bright AGN (active galactic nuclei) fraction in these galaxies of approximately 3 per cent, which is in agreement with that derived from the UV (ultraviolet) spectra. The X-ray color of the detected sources indicates that they are probably moderately obscured. Stacking of the remainder shows a significant detection (6 sigma) with an average luminosity of 3.5 x 10(exp 41) erg/s per galaxy in the rest frame 2-10 keV band. We have also studied a comparison sample of 95 z approximately 1 "Balmer Break" galaxies. Eight of these are detected directly, including at least two clear AGN identified by their high X-ray luminosities and very hard X-ray spectra. The remainder are of relatively low luminosity (< 10(exp 42) erg/s), and the X-rays could arise from either AGN or rapid star formation. The X-ray colors and evidence from other wavebands favor the latter interpretation. Excluding the clear AGN, we deduce a mean X-ray luminosity of 6.6 x 10(exp 40) erg/s, a factor of approximately 5 lower than the LBGs. The average ratio of the UV and X-ray luminosities of these star forming galaxies, L(sub UV)/L(sub X), however, is approximately the same at z = 1 as it is at z = 3. This scaling implies that the X-ray emission follows the current star formation rate, as measured by the UV luminosity. We use our results to constrain the star formation rate at z approximately 3 from an X-ray perspective. Assuming the locally established correlation between X-ray and far-IR (infrared) luminosity, the average inferred star formation rate in each Lyman break galaxy is found to be approximately 60 solar mass/yr, in excellent agreement with the extinction-corrected UV estimates.
This provides an external check on the UV estimates of the star formation rates, and on the use of X-ray luminosities to infer these rates in rapidly starforming galaxies at high redshift.

  17. Regolith formation rate from U-series nuclides: Implications from the study of a spheroidal weathering profile in the Rio Icacos watershed (Puerto Rico)

    NASA Astrophysics Data System (ADS)

    Chabaux, F.; Blaes, E.; Stille, P.; di Chiara Roupert, R.; Pelt, E.; Dosseto, A.; Ma, L.; Buss, H. L.; Brantley, S. L.

    2013-01-01

    A 2 m-thick spheroidal weathering profile, developed on a quartz diorite in the Rio Icacos watershed (Luquillo Mountains, eastern Puerto Rico), was analyzed for major and trace element concentrations, Sr and Nd isotopic ratios and U-series nuclides (238U-234U-230Th-226Ra). In this profile a 40 cm thick soil horizon is overlying a 150 cm thick saprolite which is separated from the basal corestone by a ˜40 cm thick rindlet zone. The Sr and Nd isotopic variations along the whole profile imply that, in addition to geochemical fractionations associated to water-rock interactions, the geochemical budget of the profile is influenced by a significant accretion of atmospheric dusts. The mineralogical and geochemical variations along the profile also confirm that the weathering front does not progress continuously from the top to the base of the profile. The upper part of the profile is probably associated with a different weathering system (lateral weathering of upper corestones) than the lower part, which consists of the basal corestone, the associated rindlet system and the saprolite in contact with these rindlets. Consequently, the determination of weathering rates from 238U-234U-230Th-226Ra disequilibrium in a series of samples collected along a vertical depth profile can only be attempted for samples collected in the lower part of the profile, i.e. the rindlet zone and the lower saprolite. Similar propagation rates were derived for the rindlet system and the saprolite by using classical models involving loss and gain processes for all nuclides to interpret the variation of U-series nuclides in the rindlet-saprolite subsystem. The consistency of these weathering rates with average weathering and erosion rates derived via other methods for the whole watershed provides a new and independent argument that, in the Rio Icacos watershed, the weathering system has reached a geomorphologic steady-state. 
Our study also indicates that even in environments with differential weathering, such as observed for the Puerto Rico site, the radioactive disequilibrium between the nuclides of a single radioactive series (here 238U-234U-230Th-226Ra) can still be interpreted in terms of a simplified scenario of congruent weathering. Incidentally, the U-Th-Ra disequilibrium in the corestone samples confirms that the outermost part of the corestone is already weathered.

  18. Search for Bs0 oscillations using inclusive lepton events

    NASA Astrophysics Data System (ADS)

    ALEPH Collaboration; Barate, R.; et al.

    1999-03-01

    A search for Bs0 oscillations is performed using a sample of semileptonic b-hadron decays collected by the ALEPH experiment during 1991-95. Compared to previous inclusive lepton analyses, the proper time resolution and b-flavour mistag rate are significantly improved. Additional sensitivity to Bs0 mixing is obtained by identifying subsamples of events having a Bs0 purity which is higher than the average for the whole data sample. Unbinned maximum likelihood amplitude fits are performed to derive a lower limit of Δm_s > 9.5 ps^-1 at the 95% confidence level (CL). Combining with the ALEPH Ds-based analyses yields Δm_s > 9.6 ps^-1 at 95% CL.

  19. Probing the Properties of AGN Clustering in the Local Universe with Swift-BAT

    NASA Astrophysics Data System (ADS)

    Powell, M.; Cappelluti, N.; Urry, M.; Koss, M.; Allevato, V.; Ajello, M.

    2017-10-01

    I present the benchmark measurement of AGN clustering in the local universe with the all-sky Swift-BAT survey. The hard X-ray selection (14-195 keV) allows for the detection of some of the most obscured AGN, providing the largest, most unbiased sample of local AGN to date. We derive for the first time the halo occupation distribution (HOD) of the sample in various bins of black hole mass, accretion rate, and obscuration. In doing so, we characterize the cosmic environment of growing supermassive black holes with unprecedented precision, and determine which black hole parameters depend on environment. We then compare our results to the current evolutionary models of AGN.

  20. UV-to-IR spectral energy distributions of galaxies at z>1: the impact of Herschel data on dust attenuation and star formation determinations

    NASA Astrophysics Data System (ADS)

    Buat, V.; Heinis, S.; Boquien, M.

    2013-11-01

    We report on our recent work on the UV-to-IR SED fitting of a sample of distant (z>1) galaxies observed by Herschel in the CDFS as part of the GOODS-Herschel project. Combining stellar and dust emission in galaxies proves powerful for constraining their dust attenuation as well as their star formation activity. We focus on the characterisation of dust attenuation and on the uncertainties in the derivation of star formation rates and stellar masses, as a function of the range of wavelengths sampled by the data and of the assumptions made about the star formation histories.

  1. Presence of bile acids in human follicular fluid and their relation with embryo development in modified natural cycle IVF.

    PubMed

    Nagy, R A; van Montfoort, A P A; Dikkers, A; van Echten-Arends, J; Homminga, I; Land, J A; Hoek, A; Tietge, U J F

    2015-05-01

    Are bile acids (BA) and their respective subspecies present in human follicular fluid (FF) and do they relate to embryo quality in modified natural cycle IVF (MNC-IVF)? BA concentrations are 2-fold higher in follicular fluid than in serum and ursodeoxycholic acid (UDCA) derivatives were associated with development of top quality embryos on Day 3 after fertilization. Granulosa cells are capable of synthesizing BA, but a potential correlation with oocyte and embryo quality as well as information on the presence and role of BA subspecies in follicular fluid have yet to be investigated. Between January 2001 and June 2004, follicular fluid and serum samples were collected from 303 patients treated in a single academic centre that was involved in a multicentre cohort study on the effectiveness of MNC-IVF. Material from patients who underwent a first cycle of MNC-IVF was used. Serum was not stored from all patients, and the available material comprised 156 follicular fluid and 116 matching serum samples. Total BA and BA subspecies were measured in follicular fluid and in matching serum by enzymatic fluorimetric assay and liquid chromatography-mass spectrometry, respectively. The association of BA in follicular fluid with oocyte and embryo quality parameters, such as fertilization rate and cell number, presence of multinucleated blastomeres and percentage of fragmentation on Day 3, was analysed. Embryos with eight cells on Day 3 after oocyte retrieval were more likely to originate from follicles with a higher level of UDCA derivatives than those with fewer than eight cells (P < 0.05). Furthermore, follicular fluid levels of chenodeoxycholic derivatives were higher and deoxycholic derivatives were lower in the group of embryos with fragmentation compared with those without (each P < 0.05). Levels of total BA were 2-fold higher in follicular fluid compared with serum (P < 0.001), but had no predictive value for oocyte and embryo quality. 
Only samples originating from first-cycle MNC-IVF were used, which resulted in only 14 samples from women with an ongoing pregnancy; further prospective studies are therefore required to confirm the association of UDCA with IVF pregnancy outcomes. The inter-cycle variability of BA levels in follicular fluid within individuals has yet to be investigated. We checked for macroscopic signs of contamination of follicular fluid by blood, but the possibility that small traces of blood were present within the follicular fluid remains. Finally, although BA are considered stable when stored at -20°C, there was a time lag of 10 years between the collection and analysis of follicular fluid and serum samples. The favourable relation between UDCA derivatives in follicular fluid and good embryo development and quality deserves further prospective research, with live birth rates as the end-point. This work was supported by a grant from the Netherlands Organisation for Scientific Research (VIDI Grant 917-56-358 to U.J.F.T.). No competing interests are reported. © The Author 2015. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Chlorella intake attenuates reduced salivary SIgA secretion in kendo training camp participants

    PubMed Central

    2012-01-01

    Background The green alga Chlorella contains high levels of proteins, vitamins, and minerals. We previously reported that a chlorella-derived multicomponent supplement increased the secretion rate of salivary secretory immunoglobulin A (SIgA) in humans. Here, we investigated whether intake of this chlorella-derived supplement attenuated the reduced salivary SIgA secretion rate during a kendo training camp. Methods Ten female kendo athletes participated in inter-university 6-day spring and 4-day summer camps. They were randomized into two groups; one took placebo tablets during the spring camp and chlorella tablets during the summer camp, while the other took chlorella tablets during the spring camp and placebo tablets during the summer camp. Subjects took these tablets starting 4 weeks before the camp until post-camp saliva sampling. Salivary SIgA concentrations were measured by ELISA. Results All subjects participated in nearly all training programs, and body-mass changes and subjective physical well-being scores during the camps were comparable between the groups. However, salivary SIgA secretion rate changes differed between the groups. Salivary SIgA secretion rates decreased during the camp in the placebo group (before vs. second, middle, and final day of camp, and after the camp: 146 ± 89 vs. 87 ± 56, 70 ± 45, 94 ± 58, and 116 ± 71 μg/min), whereas no such decreases were observed in the chlorella group (121 ± 53 vs. 113 ± 68, 98 ± 69, 115 ± 80, and 128 ± 59 μg/min). Conclusion Our results suggest that use of a chlorella-derived dietary supplement attenuates the reduction in salivary SIgA secretion during a training camp for a competitive sport. PMID:23227811

  3. Water Retention and Rheology of Ti-doped, Synthetic Olivine

    NASA Astrophysics Data System (ADS)

    Faul, U.; Jackson, I.; Fitz Gerald, J. D.

    2012-12-01

    Upper mantle flow laws are currently based almost entirely on experiments with olivine from San Carlos in Arizona. Synthetically produced olivine enables exploration of the effects of trace elements on the rheology. We have conducted a range of experiments in a gas-medium apparatus with solution-gelation (sol-gel) derived olivine which show that titanium is the most effective element in binding water in the olivine structure. The FTIR signature of this structurally bound water is most similar to that of water-undersaturated natural olivine, with absorption bands at 3575 and 3525 cm-1. Water-added, titanium-free sol-gel contains little water after hot-pressing and shows absorption bands at wavenumbers near 3200 cm-1. Noble metal capsules such as Pt or AuPd, providing more oxidizing conditions, are more effective in retaining water. Experiments with NiFe-lined, welded Pt capsules retain no more water than NiFe-lined samples without a Pt capsule. Water retention is, however, again dependent on trace element content, with Ti-doped samples containing tens of ppm after hot-pressing. By comparison, undoped samples run under the same conditions contain little water, again with different FTIR spectra to Ti-doped samples. Our experiments suggest that Ti by itself, or with water contents at the FTIR detection limit, enhances diffusion creep rates relative to undoped, dry sol-gel olivine. Water contents around 10 ppm in NiFe-wrapped samples show an enhancement of strain rates of more than one order of magnitude. The addition of Ti, together with the presence of water, also enhances grain growth. For more coarse-grained samples in the dislocation creep regime, the enhancement of the strain rate as a function of water content is approximately consistent with the flow laws of Hirth and Kohlstedt (2003).

  4. Pattern-based integer sample motion search strategies in the context of HEVC

    NASA Astrophysics Data System (ADS)

    Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas

    2015-09-01

    The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which, however, comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is a part of the inter-picture prediction process, typically consumes a high amount of computational resources while significantly increasing the coding efficiency. Although both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow motion information to be processed at the fractional sample level, motion search algorithms operating at the integer sample level remain an integral part of ME. In this paper, a flexible integer sample ME framework is proposed, allowing a significant reduction in ME computation time to be traded off against a coding efficiency penalty in terms of bit rate overhead. As a result, through extensive experimentation, an integer sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based and early termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm that is a native part of HM. It is observed that for high-resolution sequences, the integer sample ME process can be sped up by factors of 3.2 to 7.6, at bit-rate overheads of 1.5% and 0.6% for the Random Access (RA) and Low Delay P (LDP) configurations, respectively. A similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content, at a bit-rate overhead of up to 5.2%.
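
    As a minimal illustration of the pattern-based integer-sample search techniques discussed above (not the paper's optimized framework), the following Python sketch runs a small-diamond search with early termination over a sum-of-absolute-differences (SAD) cost; the function names and the plain NumPy cost computation are assumptions for illustration:

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and the reference
    patch at integer position (y, x)."""
    h, w = block.shape
    patch = ref[y:y + h, x:x + w]
    return int(np.abs(block.astype(int) - patch.astype(int)).sum())

def diamond_search(block, ref, y0, x0, max_iter=32):
    """Small-diamond integer-sample search started at a predictor
    (y0, x0); terminates early when the centre is the best candidate."""
    h, w = block.shape
    H, W = ref.shape
    best, best_cost = (y0, x0), sad(block, ref, y0, x0)
    for _ in range(max_iter):
        cy, cx = best
        moved = False
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            y, x = cy + dy, cx + dx
            if 0 <= y <= H - h and 0 <= x <= W - w:
                cost = sad(block, ref, y, x)
                if cost < best_cost:
                    best, best_cost, moved = (y, x), cost, True
        if not moved:  # early termination: local SAD minimum reached
            break
    return best, best_cost
```

    A predictive variant would seed (y0, x0) from the motion vectors of neighbouring blocks rather than from the co-located position.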

  5. Improved measurement of protein synthesis in human subjects using 2H-phenylalanine isotopomers and gas chromatography/mass spectrometry.

    PubMed

    Preston, Tom; Small, Alexandra C

    2010-03-15

    Sensitive methods to measure protein synthetic rate in vivo are required to assess changes in protein expression, especially when comparing healthy with infirm subjects. We have previously applied a 'flooding dose' procedure using (2)H(5)-phenylalanine ((2)H(5)-phe) and (2)H(8)-phe isotopomers as tracers, which has proven successful in measuring albumin and fibrinogen synthesis in response to feeding in cancer patients. Using tert-butyldimethylsilyl derivatives, we have observed that (2)H(7)-phe is formed with time in vivo from (2)H(8)-phe, probably during transamination. This increases errors when estimating the fractional synthetic rate (FSR) using the (2)H(8)-phe isotopomer compared with the (2)H(5)-phe isotopomer. We sought to improve this situation by use of an alternative derivative that overcomes this problem whilst also streamlining sample preparation. When using N-ethoxycarbonyltrifluoroethyl (ECTFE) amino acid esters, (2)H(8)-phe is effectively converted into (2)H(7)-phe through fragmentation under electron ionisation (EI), allowing both (2)H(8)-phe and (2)H(7)-phe isotopomers to be measured as a single intense C(7)(2)H(7)(+) fragment at 98 Th. To illustrate the improved situation, the mean RMS residual was calculated for all albumin data, for each isotopomer and for each derivative. Albumin-bound Phe was analysed as ECTFE-phe with improved precision, independent of the isotopomer used, confirming that the new derivative is superior. Copyright 2010 John Wiley & Sons, Ltd.

  6. Derivation of a clinical decision rule to guide the interhospital transfer of patients with blunt traumatic brain injury.

    PubMed

    Newgard, C D; Hedges, J R; Stone, J V; Lenfesty, B; Diggs, B; Arthur, M; Mullins, R J

    2005-12-01

    To derive a clinical decision rule for people with traumatic brain injury (TBI) that enables early identification of patients requiring specialised trauma care. We collected data from 1999 through 2003 on a retrospective cohort of consecutive people aged 18-65 years with a serious head injury (AIS > or = 3), transported directly from the scene of injury, and evaluated in the ED. Information on 22 demographic, physiological, radiographic, and laboratory variables was collected. Resource-based "high therapeutic intensity" (HTI) measures occurring within 72 hours of ED arrival (the outcome measure) were identified a priori and included: neurosurgical intervention, exploratory laparotomy, intensive care interventions, or death. We used classification and regression tree analysis to derive and cross-validate the decision rule. 504 consecutive trauma patients were identified as having a serious head injury: 246 (49%) required at least one of the HTI measures. Five ED variables (GCS, respiratory rate, age, temperature, and pulse rate) identified subjects requiring at least one of the HTI measures with 94% sensitivity (95% CI 91 to 97%) and 63% specificity (95% CI 57 to 69%) in the derivation sample, and 90% sensitivity and 55% specificity using cross-validation. Among a cohort of head-injured patients evaluated in the ED, this decision rule identified the majority of those who urgently required specialised trauma care. The rule will require prospective validation in injured people presenting to non-tertiary care hospitals before implementation can be recommended.
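
    The reported accuracies are binomial proportions; assuming illustrative 2×2 counts (the abstract reports only percentages and confidence intervals), sensitivity and specificity with normal-approximation intervals can be computed as in this sketch:

```python
import math

def sens_spec(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with normal-approximation 95% CIs
    from a 2x2 table of outcomes versus rule predictions."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, (max(0.0, p - half), min(1.0, p + half))
    return {"sensitivity": prop_ci(tp, tp + fn),
            "specificity": prop_ci(tn, tn + fp)}
```
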

  7. Comparison of the image-derived radioactivity and blood-sample radioactivity for estimating the clinical indicators of the efficacy of boron neutron capture therapy (BNCT): 4-borono-2-18F-fluoro-phenylalanine (FBPA) PET study.

    PubMed

    Isohashi, Kayako; Shimosegawa, Eku; Naka, Sadahiro; Kanai, Yasukazu; Horitsugi, Genki; Mochida, Ikuko; Matsunaga, Keiko; Watabe, Tadashi; Kato, Hiroki; Tatsumi, Mitsuaki; Hatazawa, Jun

    2016-12-01

    In boron neutron capture therapy (BNCT), positron emission tomography (PET) with 4-borono-2-18F-fluoro-phenylalanine (FBPA) is the only method for estimating the accumulation of 10B in the target tumor and surrounding normal tissue after administration of the 10B carrier L-para-boronophenylalanine, and for assessing the indication for BNCT in individual patients. The absolute concentration of 10B in tumor has been estimated by multiplying the 10B concentration in blood during BNCT by the tumor-to-blood radioactivity (T/B) ratio derived from FBPA PET. However, the method of measuring blood radioactivity, whether by blood sampling or from image data, has not been standardized. We compared the image-derived blood radioactivity of FBPA with blood sampling data and investigated the appropriate timing and location for measuring image-derived blood counts. We obtained 7 repeated whole-body PET scans in five healthy subjects. Arterialized venous blood samples were obtained from the antecubital vein, heated in a heating blanket. Time-activity curves (TACs) of image-derived blood radioactivity were obtained using volumes of interest (VOIs) over the ascending aorta, aortic arch, pulmonary artery, left and right ventricles, inferior vena cava, and abdominal aorta. Image-derived blood radioactivity was compared with blood sampling measurements at each location. Both the TACs of blood sampling radioactivity in each subject and the TACs of image-derived blood radioactivity showed a peak within 5 min after tracer injection and decreased promptly thereafter. A linear relationship was found between blood sampling radioactivity and image-derived blood radioactivity in all VOIs at all sampling times (p < 0.001). Image-derived radioactivity measured in the left and right ventricles 30 min after injection correlated highly with blood radioactivity, although image-derived values were approximately 20% lower than blood sampling values. The decline of FBPA blood radioactivity in the left ventricle beyond 30 min after injection was minimal. We conclude that the image-derived T/B ratio can be used reliably by setting the VOI on the left ventricle at 30 min after FBPA administration and correcting for the underestimation due to partial volume effects and the decline of FBPA blood radioactivity.

  8. A User Guide for Smoothing Air Traffic Radar Data

    NASA Technical Reports Server (NTRS)

    Bach, Ralph E.; Paielli, Russell A.

    2014-01-01

    Matlab software was written to provide smoothing of radar tracking data to simulate ADS-B (Automatic Dependent Surveillance-Broadcast) data in order to test a tactical conflict probe. The probe, called TSAFE (Tactical Separation-Assured Flight Environment), is designed to handle air-traffic conflicts left undetected or unresolved when loss-of-separation is predicted to occur within approximately two minutes. The data stream that is down-linked from an aircraft equipped with an ADS-B system would include accurate GPS-derived position and velocity information at sample rates of 1 Hz. Nation-wide ADS-B equipage (mandated by 2020) should improve surveillance accuracy and TSAFE performance. Currently, position data are provided by Center radar (nominal 12-sec samples) and Terminal radar (nominal 4.8-sec samples). Aircraft ground speed and ground track are estimated using real-time filtering, causing lags up to 60 sec, compromising performance of a tactical resolution tool. Offline smoothing of radar data reduces wild-point errors, provides a sample rate as high as 1 Hz, and yields more accurate and lag-free estimates of ground speed, ground track, and climb rate. Until full ADS-B implementation is available, smoothed radar data should provide reasonable track estimates for testing TSAFE in an ADS-B-like environment. An example illustrates the smoothing of radar data and shows a comparison of smoothed-radar and ADS-B tracking. This document is intended to serve as a guide for using the smoothing software.
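
    A minimal sketch of the kind of offline processing described, assuming a plain centred moving-average smoother and linear interpolation in place of the Matlab tool's actual filters (the 12-s Center radar cadence and 1 Hz output rate come from the text; everything else is illustrative):

```python
import numpy as np

def smooth_and_resample(t, x, window=5, dt_out=1.0):
    """Offline smoothing of radar position samples (e.g. nominal 12-s
    Center radar) followed by resampling to a 1 Hz ADS-B-like rate."""
    k = np.ones(window) / window
    # pad with edge values so the centred moving average keeps endpoints
    xp = np.concatenate([np.full(window // 2, x[0]), x,
                         np.full(window // 2, x[-1])])
    xs = np.convolve(xp, k, mode="valid")
    t_out = np.arange(t[0], t[-1] + 1e-9, dt_out)
    x_out = np.interp(t_out, t, xs)
    # lag-free speed estimate via central differences on the smooth track
    v_out = np.gradient(x_out, dt_out)
    return t_out, x_out, v_out
```

    Because the whole track is available offline, the centred smoother and central-difference speed estimate introduce no filter lag, unlike the real-time filtering mentioned above.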

  9. Sources and ages of fine-grained sediment to streams using fallout radionuclides in the Midwestern United States

    USGS Publications Warehouse

    Gellis, Allen; Fuller, Christopher C.; Van Metre, Peter C.

    2017-01-01

    Fallout radionuclides, 7Be and 210Pbex, sampled in bed sediment for 99 watersheds in the Midwestern region of the United States and in 15 samples of suspended sediment from 3 of these watersheds were used to partition upland from channel sources and to estimate the age or the time since the surface-derived portion of sediment was on the land surface (0–∼1 year). Channel sources dominate: 78 of the 99 bed material sites (79%) have >50% channel-derived sediment, and 9 of the 15 suspended-sediment samples (60%) have >50% channel-derived sediment. 7Be was detected in 82 bed sediment samples and all 15 suspended-sediment samples. The surface-derived portion of 54 of the 80 (68%) streams with detectable 7Be and 210Pbex were ≤ 100 days old and the surface-derived portion of all suspended-sediment samples were ≤ 100 days old, indicating that surface-derived fine-grained sediment moves rapidly through these systems. The concentrations of two hydrophobic pesticides–DDE and bifenthrin–are correlated with the proportion of surface-derived sediment, indicating a link between geomorphic processes and particle-associated contaminants in streams. Urban areas had the highest pesticide concentrations and the largest percentage of surface-derived sediment. Although the percentage of surface-derived sediment is less than channel sources at most of the study sites, the relatively young age of the surface-derived sediment might indicate that management actions to reduce sediment contamination where the land surface is an important source could have noticeable effects.
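
    The upland/channel partition rests on a two-endmember mixing model; a sketch under the common assumption that channel (bank) sediment is 7Be-free, with the endmember ratios supplied by the analyst:

```python
def surface_fraction(r_sample, r_surface, r_channel=0.0):
    """Fraction of surface-derived sediment from the 7Be/210Pbex
    activity ratio of a sample, by two-endmember mixing."""
    f = (r_sample - r_channel) / (r_surface - r_channel)
    return min(1.0, max(0.0, f))  # clamp to the physical range [0, 1]
```
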

  10. Forensic Applicability of Femur Subtrochanteric Shape to Ancestry Assessment in Thai and White American Males.

    PubMed

    Tallman, Sean D; Winburn, Allysha P

    2015-09-01

    Ancestry assessment from the postcranial skeleton presents a significant challenge to forensic anthropologists. However, metric dimensions of the femur subtrochanteric region are believed to distinguish between individuals of Asian and non-Asian descent. This study tests the discriminatory power of subtrochanteric shape using modern samples of 128 Thai and 77 White American males. Results indicate that the samples' platymeric index distributions are significantly different (p≤0.001), with the Thai platymeric index range generally lower and the White American range generally higher. While the application of ancestry assessment methods developed from Native American subtrochanteric data results in low correct classification rates for the Thai sample (50.8-57.8%), adapting these methods to the current samples leads to better classification. The Thai data may be more useful in forensic analysis than previously published subtrochanteric data derived from Native American samples. Adapting methods to include appropriate geographic and contemporaneous populations increases the accuracy of femur subtrochanteric ancestry methods. © 2015 American Academy of Forensic Sciences.
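
    The platymeric index underlying these distributions is a simple ratio of subtrochanteric diameters; a sketch using the conventional osteometric cut-offs (the bands are standard convention, not values taken from this paper):

```python
def platymeric_index(ap_mm, ml_mm):
    """Platymeric index = 100 * (anteroposterior subtrochanteric
    diameter / mediolateral subtrochanteric diameter)."""
    return 100.0 * ap_mm / ml_mm

def shape_class(index):
    """Conventional index bands (assumed here): platymeric < 85.0,
    eurymeric 85.0-99.9, stenomeric >= 100.0."""
    if index < 85.0:
        return "platymeric"
    if index < 100.0:
        return "eurymeric"
    return "stenomeric"
```
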

  11. The Type Ia Supernova Rate at z~0.5 from the Supernova Legacy Survey

    NASA Astrophysics Data System (ADS)

    Neill, J. D.; Sullivan, M.; Balam, D.; Pritchet, C. J.; Howell, D. A.; Perrett, K.; Astier, P.; Aubourg, E.; Basa, S.; Carlberg, R. G.; Conley, A.; Fabbro, S.; Fouchez, D.; Guy, J.; Hook, I.; Pain, R.; Palanque-Delabrouille, N.; Regnault, N.; Rich, J.; Taillet, R.; Aldering, G.; Antilogus, P.; Arsenijevic, V.; Balland, C.; Baumont, S.; Bronder, J.; Ellis, R. S.; Filiol, M.; Gonçalves, A. C.; Hardin, D.; Kowalski, M.; Lidman, C.; Lusset, V.; Mouchet, M.; Mourao, A.; Perlmutter, S.; Ripoche, P.; Schlegel, D.; Tao, C.

    2006-09-01

    We present a measurement of the distant Type Ia supernova (SN Ia) rate derived from the first 2 yr of the Canada-France-Hawaii Telescope Supernova Legacy Survey. We observed four 1deg×1deg fields with a typical temporal frequency of <Δt>~4 observer-frame days over time spans of 158-211 days per season for each field, with breaks during the full Moon. We used 8-10 m class telescopes for spectroscopic follow-up to confirm our candidates and determine their redshifts. Our starting sample consists of 73 spectroscopically verified SNe Ia in the redshift range 0.2 < z < 0.6. We derive a volumetric SN Ia rate of rV(<z>=0.47) = 0.42 +0.13-0.09 (syst.) ± 0.06 (stat.) × 10-4 yr-1 Mpc-3, assuming h=0.7, Ωm=0.3, and a flat cosmology. Using recently published galaxy luminosity functions derived in our redshift range, we derive a SN Ia rate per unit luminosity of rL(<z>=0.47) = 0.154 +0.048-0.033 (syst.) +0.039-0.031 (stat.) SN units. Using our rate alone, we place an upper limit on the component of SN Ia production that tracks the cosmic star formation history of 1 SN Ia per 10^3 Msolar of stars formed. Our rate and other rates from surveys using spectroscopic sample confirmation display only a modest evolution out to z=0.55. Based on observations obtained with MegaPrime/MegaCam, a joint project of the Canada-France-Hawaii Telescope (CFHT) and CEA/DAPNIA, at CFHT, which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. 
This work is also based on observations obtained at the European Southern Observatory using the Very Large Telescope on the Cerro Paranal (ESO Large Program 171.A-0486), and on observations (programs GN-2004A-Q-19, GS-2004A-Q-11, GN-2003B-Q-9, and GS-2003B-Q-8) obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation (NSF) on behalf of the Gemini partnership: the NSF (United States), the Particle Physics and Astronomy Research Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), CNPq (Brazil), and CONICET (Argentina). This work is also based on observations obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
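
    Conceptually, a volumetric rate of this kind is the efficiency-corrected supernova count divided by the comoving volume and rest-frame control time surveyed; a deliberately simplified sketch (the survey's actual Monte Carlo efficiency and time-dilation treatment is far more involved):

```python
def volumetric_rate(n_sne, efficiency, volume_mpc3, control_time_yr):
    """SN rate per yr per Mpc^3: detected count corrected for the
    detection/spectroscopic efficiency, divided by the comoving
    volume and the rest-frame control time of the search."""
    return n_sne / (efficiency * volume_mpc3 * control_time_yr)
```
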

  12. LAMP: the long-term accretion monitoring programme of T Tauri stars in Chamaeleon I

    NASA Astrophysics Data System (ADS)

    Costigan, G.; Scholz, A.; Stelzer, B.; Ray, T.; Vink, J. S.; Mohanty, S.

    2012-12-01

    We present the results of a variability study of accreting young stellar objects in the Chamaeleon I star-forming region, based on ˜300 high-resolution optical spectra from the Fibre Large Area Multi-Element Spectrograph (FLAMES) at the European Southern Observatory (ESO) Very Large Telescope (VLT). 25 objects with spectral types from G2-M5.75 were observed 12 times over the course of 15 months. Using the emission lines Hα (6562.81 Å) and Ca II (8662.1 Å) as accretion indicators, we found 10 accreting and 15 non-accreting objects. We derived accretion rates for all accretors in the sample using the Hα equivalent width, Hα 10 per cent width and Ca II (8662.1 Å) equivalent width. We found that the Hα equivalent widths of accretors varied by ˜7-100 Å over the 15-month period. This corresponds to a mean amplitude of variations in the derived accretion rate of ˜0.37 dex. The amplitudes of variations in the derived accretion rate from Ca II equivalent width were ˜0.83 dex and those from Hα 10 per cent width were ˜1.11 dex. Based on the large amplitudes of variations in accretion rate derived from the Hα 10 per cent width with respect to the other diagnostics, we do not consider it to be a reliable accretion rate estimator. Assuming the variations in Hα and Ca II equivalent width accretion rates to be closer to the true value, these suggest that the spread that was found around the accretion rate to stellar-mass relation is not due to the variability of individual objects on time-scales of weeks to ˜1 year. From these variations, we can also infer that the accretion rates are stable within <0.37 dex over time-scales of less than 15 months. A major portion of the accretion variability was found to occur over periods shorter than the shortest time-scales in our observations, 8-25 days, which are comparable with the rotation periods of these young stellar objects. 
This could be an indication that what we are probing is spatial structure in the accretion flows and it also suggests that observations on time-scales of ˜a couple of weeks are sufficient to limit the total extent of accretion-rate variations in typical young stars. No episodic accretion was observed: all 10 accretors accreted continuously for the entire period of observations and, though they may have undetected low accretion rates, the non-accretors never showed any large changes in their emission that would imply a jump in accretion rate.

  13. System design of the annular suspension and pointing system /ASPS/

    NASA Technical Reports Server (NTRS)

    Cunningham, D. C.; Gismondi, T. P.; Wilson, G. W.

    1978-01-01

    This paper presents the control system design for the Annular Suspension and Pointing System. Actuator sizing and configuration of the system are explained, and the control laws developed for linearizing and compensating the magnetic bearings, roll induction motor and gimbal torquers are given. Decoupling, feedforward and error compensation for the vernier and gimbal controllers is developed. The algorithm for computing the strapdown attitude reference is derived, and the allowable sampling rates, time delays and quantization of control signals are specified.

  14. Temperature calibration of amino acid racemization: age implications for the Yuha skeleton

    USGS Publications Warehouse

    Bischoff, J.L.; Childers, W.M.

    1979-01-01

    D/L of aspartic acid ranged from 0.52 to 0.56 for femur samples of the Yuha skeleton. Subsurface temperature measurements made at the burial site indicate an average annual temperature of 18 °C and a diagenetic temperature of 21.6 °C. These data and a relation derived for the dependence of the aspartic acid rate constant on diagenetic temperature indicate an age of 23,600 years. The result is consistent with 14C and 230Th dating of calcrete found coating the bones. © 1979.

  15. Transport and dispersion of pollutants in surface impoundments: a finite difference model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, G.T.

    1980-07-01

    A surface impoundment model based on finite differences (SIMFD) has been developed. SIMFD computes the flow rate, velocity field, and concentration distribution of pollutants in surface impoundments with any number of islands located within the region of interest. The theoretical derivation and numerical algorithm are described in detail. Instructions for the application of SIMFD and listings of the FORTRAN IV source program are provided. Two sample problems are given to illustrate the application and validity of the model.
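
    SIMFD itself is a two-dimensional FORTRAN IV code; as a hedged one-dimensional analogue of the transport it models, an explicit finite-difference step with upwind advection and central dispersion can be sketched in Python (stability requires u·dt/dx ≤ 1 and D·dt/dx² ≤ 1/2):

```python
import numpy as np

def step_adv_disp(c, u, D, dx, dt):
    """One explicit finite-difference step of 1-D advection-dispersion:
    upwind differencing for advection, central for dispersion.
    Illustrative of the class of scheme, not the SIMFD code itself."""
    cn = c.copy()
    cn[1:-1] = (c[1:-1]
                - u * dt / dx * (c[1:-1] - c[:-2])               # upwind advection
                + D * dt / dx**2 * (c[2:] - 2*c[1:-1] + c[:-2]))  # dispersion
    return cn
```
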

  16. What Is the Shape of Developmental Change?

    PubMed Central

    Adolph, Karen E.; Robinson, Scott R.; Young, Jesse W.; Gill-Alvarez, Felix

    2009-01-01

    Developmental trajectories provide the empirical foundation for theories about change processes during development. However, the ability to distinguish among alternative trajectories depends on how frequently observations are sampled. This study used real behavioral data, with real patterns of variability, to examine the effects of sampling at different intervals on characterization of the underlying trajectory. Data were derived from a set of 32 infant motor skills indexed daily during the first 18 months. Larger sampling intervals (2-31 days) were simulated by systematically removing observations from the daily data and interpolating over the gaps. Infrequent sampling caused decreasing sensitivity to fluctuations in the daily data: Variable trajectories erroneously appeared as step-functions and estimates of onset ages were increasingly off target. Sensitivity to variation decreased as an inverse power function of sampling interval, resulting in severe degradation of the trajectory with intervals longer than 7 days. These findings suggest that sampling rates typically used by developmental researchers may be inadequate to accurately depict patterns of variability and the shape of developmental change. Inadequate sampling regimes therefore may seriously compromise theories of development. PMID:18729590
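
    The degradation analysis described, removing observations to simulate larger sampling intervals and interpolating over the gaps, can be sketched as:

```python
import numpy as np

def subsample_and_interpolate(daily, interval):
    """Simulate a sparser sampling regime: keep every `interval`-th
    daily observation and linearly interpolate over the gaps."""
    t = np.arange(len(daily))
    kept = t[::interval]
    return np.interp(t, kept, daily[kept])

def rms_error(daily, interval):
    """RMS deviation of the reconstructed trajectory from the daily data."""
    daily = np.asarray(daily, float)
    rec = subsample_and_interpolate(daily, interval)
    return float(np.sqrt(np.mean((rec - daily) ** 2)))
```

    Sweeping `interval` over 2-31 days and plotting `rms_error` against it reproduces the kind of sensitivity-versus-interval curve the study reports.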

  17. A sequential bioequivalence design with a potential ethical advantage.

    PubMed

    Fuglsang, Anders

    2014-07-01

    This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5%, with an occasional minute trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
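
    A deliberately simplified sketch of the mandatory two-stage mechanics, using a Pocock-type adjusted alpha of 0.0294 (as in the Potvin designs) and a normal approximation in place of the t quantile; the real method's alpha control and sample-size re-estimation are more nuanced than this:

```python
import math
import random
from statistics import NormalDist

def tost_pass(diffs, alpha=0.0294):
    """Two one-sided tests on log-scale T-R differences against the
    bioequivalence limits ln(0.8)..ln(1.25); normal quantile used in
    place of the t quantile for brevity (an approximation)."""
    n = len(diffs)
    m = sum(diffs) / n
    s = math.sqrt(sum((d - m) ** 2 for d in diffs) / (n - 1))
    z = NormalDist().inv_cdf(1 - alpha)
    lo, hi = m - z * s / math.sqrt(n), m + z * s / math.sqrt(n)
    return math.log(0.8) < lo and hi < math.log(1.25)

def two_stage_trial(n1, n2, sigma, true_ratio=1.0, rng=random):
    """Mandatory two-stage design sketch: both stages are always run,
    and a single TOST is applied to the pooled data."""
    mu = math.log(true_ratio)
    diffs = [rng.gauss(mu, sigma) for _ in range(n1 + n2)]
    return tost_pass(diffs)
```

    Repeating `two_stage_trial` many times with `true_ratio` at a bioequivalence limit gives a Monte Carlo estimate of the type I error rate, which is how designs of this kind are validated.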

  18. Sampling theorem for geometric moment determination and its application to a laser beam position detector.

    PubMed

    Loce, R P; Jodoin, R E

    1990-09-10

    Using the tools of Fourier analysis, a sampling requirement is derived that assures that sufficient information is contained within the samples of a distribution to calculate accurately geometric moments of that distribution. The derivation follows the standard textbook derivation of the Whittaker-Shannon sampling theorem, which is used for reconstruction, but further insight leads to a coarser minimum sampling interval for moment determination. The need for fewer samples to determine moments agrees with intuition since less information should be required to determine a characteristic of a distribution compared with that required to construct the distribution. A formula for calculation of the moments from these samples is also derived. A numerical analysis is performed to quantify the accuracy of the calculated first moment for practical nonideal sampling conditions. The theory is applied to a high speed laser beam position detector, which uses the normalized first moment to measure raster line positional accuracy in a laser printer. The effects of the laser irradiance profile, sampling aperture, number of samples acquired, quantization, and noise are taken into account.
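    As a minimal numerical illustration of the normalized first moment computed from samples (a hypothetical Gaussian profile, not the detector's actual irradiance data):

```python
import numpy as np

def centroid(samples, dx, x0=0.0):
    """Normalized first moment of a sampled 1-D distribution:
    sum(x_i * f_i) / sum(f_i), with sample positions x_i = x0 + i*dx."""
    x = x0 + np.arange(len(samples)) * dx
    return float(np.sum(x * samples) / np.sum(samples))

# Hypothetical Gaussian beam profile centred at x = 4.2 (arbitrary units)
dx = 0.1
x = np.arange(0.0, 10.0, dx)
beam = np.exp(-((x - 4.2) ** 2) / (2 * 0.5 ** 2))
```

    Because the profile is well contained in the sampled window, the discrete centroid recovers the true beam position to high accuracy even at this coarse spacing.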

  19. Tracking the conversion of nitrogen during pyrolysis of antibiotic mycelial fermentation residues using XPS and TG-FTIR-MS technology.

    PubMed

    Zhu, Xiangdong; Yang, Shijun; Wang, Liang; Liu, Yuchen; Qian, Feng; Yao, Wenqing; Zhang, Shicheng; Chen, Jianmin

    2016-04-01

    Antibiotic mycelial fermentation residues (AMFRs), which are emerging solid pollutants, have been recognized as hazardous waste in China since 2008. Nitrogen (N), an environmentally sensitive element, is largely retained in AMFR samples derived from fermentation substrates. Pyrolysis is a promising technology for the treatment of solid waste; however, the fate of N during the pyrolysis of AMFRs is still unknown. In this study, the conversion of N during the pyrolysis of AMFRs was tracked using XPS (X-ray photoelectron spectroscopy) and online TG-FTIR-MS (thermogravimetry-Fourier transform infrared-mass spectrometry) technology. In the AMFR sample, organic amine-N, pyrrolic-N, protein-N, and pyridinic-N were the main N-containing species. XPS results indicated that pyrrolic-N and pyridinic-N were retained in the AMFR-derived pyrolysis char. More stable species, such as N-oxide and quaternary-N, were also produced in the char. TG-FTIR-MS results indicated that NH3 and HCN were the main gaseous species, and their contents were closely related to the AMFR contents of amine-N and protein-N, and of pyrrolic-N and pyridinic-N, respectively. Increases in heating rate enhanced the amounts of NH3 and HCN but had less of an effect on the degree of degradation of the AMFRs. N-containing organic compounds, including amine-N, nitrile-N, and heterocyclic-N, were discerned from the AMFR pyrolysis process. Their release range extended with increasing heating rate and carbon content of the AMFR sample. This work will help in taking appropriate measures to reduce secondary pollution from the treatment of AMFRs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Advanced clinical interpretation of the Delis-Kaplan Executive Function System: multivariate base rates of low scores.

    PubMed

    Karr, Justin E; Garcia-Barrera, Mauricio A; Holdnack, James A; Iverson, Grant L

    2018-01-01

    Multivariate base rates allow for the simultaneous statistical interpretation of multiple test scores, quantifying the normal frequency of low scores on a test battery. This study provides multivariate base rates for the Delis-Kaplan Executive Function System (D-KEFS). The D-KEFS consists of 9 tests with 16 Total Achievement scores (i.e. primary indicators of executive function ability). Stratified by education and intelligence, multivariate base rates were derived for the full D-KEFS and an abbreviated four-test battery (i.e. Trail Making, Color-Word Interference, Verbal Fluency, and Tower Test) using the adult portion of the normative sample (ages 16-89). Multivariate base rates are provided for the full and four-test D-KEFS batteries, calculated using five low score cutoffs (i.e. ≤25th, 16th, 9th, 5th, and 2nd percentiles). Low scores occurred commonly among the D-KEFS normative sample, with 82.6 and 71.8% of participants obtaining at least one score ≤16th percentile for the full and four-test batteries, respectively. Intelligence and education were inversely related to low score frequency. The base rates provided herein allow clinicians to interpret multiple D-KEFS scores simultaneously for the full D-KEFS and an abbreviated battery of commonly administered tests. The use of these base rates will support clinicians when differentiating between normal variations in cognitive performance and true executive function deficits.
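    The idea behind a multivariate base rate can be illustrated with a Monte Carlo sketch: across many correlated test scores, at least one "low" score is common even in a normal population. The intercorrelation (0.4) and the z cutoff (≈16th percentile) below are illustrative assumptions, not D-KEFS parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
n_tests, rho = 16, 0.4          # illustrative battery size and intercorrelation
cov = np.full((n_tests, n_tests), rho)
np.fill_diagonal(cov, 1.0)

# Simulated standardized scores for 50,000 "normal" examinees
scores = rng.multivariate_normal(np.zeros(n_tests), cov, size=50_000)
cutoff = -0.9945                # z-score at roughly the 16th percentile

# Fraction of simulated examinees with at least one score at/below the cutoff
base_rate = float(np.mean((scores <= cutoff).any(axis=1)))
```

    Even though each single test flags only ~16% of the population, the multivariate base rate of "at least one low score" is several times higher, which is why univariate cutoffs overpathologize.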

  1. THE zCOSMOS-SINFONI PROJECT. I. SAMPLE SELECTION AND NATURAL-SEEING OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mancini, C.; Renzini, A.; Foerster Schreiber, N. M.

    2011-12-10

    The zCOSMOS-SINFONI project is aimed at studying the physical and kinematical properties of a sample of massive z ≈ 1.4-2.5 star-forming galaxies, through SINFONI near-infrared integral field spectroscopy (IFS), combined with the multiwavelength information from the zCOSMOS (COSMOS) survey. The project is based on one hour of natural-seeing observations per target, and adaptive optics (AO) follow-up for a major part of the sample, which includes 30 galaxies selected from the zCOSMOS/VIMOS spectroscopic survey. This first paper presents the sample selection and the global physical characterization of the target galaxies from multicolor photometry, i.e., star formation rate (SFR), stellar mass, age, etc. The Hα integrated properties, such as flux, velocity dispersion, and size, are derived from the natural-seeing observations, while the follow-up AO observations will be presented in the next paper of this series. Our sample appears to be well representative of star-forming galaxies at z ≈ 2, covering a wide range in mass and SFR. The Hα integrated properties of the 25 Hα-detected galaxies are similar to those of other IFS samples at the same redshifts. Good agreement is found among the SFRs derived from Hα luminosity and other diagnostic methods, provided the extinction affecting the Hα luminosity is about twice that affecting the continuum. A preliminary kinematic analysis, based on the maximum observed velocity difference across the source and on the integrated velocity dispersion, indicates that the sample splits nearly 50-50 into rotation-dominated and velocity-dispersion-dominated galaxies, in good agreement with previous surveys.

  2. Practical quantum random number generator based on measuring the shot noise of vacuum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen Yong; Zou Hongxin; Tian Liang

    2010-06-15

    The shot noise of vacuum states is a kind of quantum noise and is totally random. In this paper a nondeterministic random number generation scheme based on measuring the shot noise of vacuum states is presented and experimentally demonstrated. We use a homodyne detector to measure the shot noise of vacuum states. Considering that the frequency bandwidth of our detector is limited, we derive the optimal sampling rate so that sampling points have the least correlation with each other. We also choose a method to extract random numbers from sampling values, and prove that the influence of classical noise can be avoided with this method so that the detector does not have to be shot-noise limited. The random numbers generated with this scheme have passed the ENT and Diehard tests.
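    The rationale for choosing the sampling rate can be sketched generically: the output of a band-limited detector has correlated neighbouring samples, and spacing the samples further apart restores near-independence. This is an illustration with simulated filtered noise, not the paper's analytical derivation.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a sequence after mean removal."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(7)
white = rng.normal(size=200_000)
# Band-limited "detector output": white noise through a 10-tap moving average
filtered = np.convolve(white, np.ones(10) / 10, mode="valid")

r_fast = lag1_autocorr(filtered)        # oversampled: strongly correlated
r_slow = lag1_autocorr(filtered[::10])  # decimated: nearly independent
```

    Sampling faster than the detector bandwidth allows yields correlated values; decimating to roughly the inverse correlation time removes most of the correlation, which is the criterion behind an "optimal" sampling rate.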

  3. Comparison of base composition analysis and Sanger sequencing of mitochondrial DNA for four U.S. population groups.

    PubMed

    Kiesler, Kevin M; Coble, Michael D; Hall, Thomas A; Vallone, Peter M

    2014-01-01

    A set of 711 samples from four U.S. population groups was analyzed using a novel mass spectrometry based method for mitochondrial DNA (mtDNA) base composition profiling. Comparison of the mass spectrometry results with Sanger sequencing derived data yielded a concordance rate of 99.97%. Length heteroplasmy was identified in 46% of samples and point heteroplasmy was observed in 6.6% of samples in the combined mass spectral and Sanger data set. Using discrimination capacity as a metric, Sanger sequencing of the full control region had the highest discriminatory power, followed by the mass spectrometry base composition method, which was more discriminating than Sanger sequencing of just the hypervariable regions. This trend is in agreement with the number of nucleotides covered by each of the three assays. Published by Elsevier Ireland Ltd.
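    The ordering of discrimination capacity reported above has a simple toy illustration (not the assay itself): two different sequences can share one base composition, so composition profiling resolves slightly less than full sequencing over the same region.

```python
from collections import Counter

def base_composition(seq):
    """Counts of A, C, G, T in a DNA string - the signature compared
    by a mass-spectrometry base-composition profile."""
    counts = Counter(seq.upper())
    return tuple(counts[b] for b in "ACGT")

# Different hypothetical fragments with identical base composition
frag1 = "ACGTACGT"
frag2 = "AACCGGTT"
```

    Sequencing distinguishes `frag1` from `frag2`; composition profiling does not, although in practice such collisions are rare enough that concordance remains very high.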

  4. Iodine isotopes in precipitation: Four-year time series variations before and after 2011 Fukushima nuclear accident.

    PubMed

    Xu, Sheng; Zhang, Luyuan; Freeman, Stewart P H T; Hou, Xiaolin; Watanabe, Akira; Sanderson, David C W; Cresswell, Alan; Yamaguchi, Katsuhiko

    2016-05-01

    Rainwater samples were collected monthly from Fukushima, Japan, in 2012-2014 and analysed for (127)I and (129)I. These are combined with previously reported data to investigate atmospheric levels and behaviour of Fukushima-derived (129)I before and after the 2011 nuclear accident. In the new datasets, (127)I and (129)I concentrations between October 2012 and October 2014 varied from 0.5 to 10 μg/L and from 1.2 × 10(8) to 6.9 × 10(9) atoms/L respectively, resulting in (129)I/(127)I atomic ratio ranges from 3 × 10(-8) to 2 × 10(-7). The (127)I concentrations were in good agreement with those in the previous period from March 2011 to September 2012, whereas the (129)I concentrations and (129)I/(127)I ratios followed declining trends since the accident. Although (129)I concentrations in five samples during the period of 2013-2014 approached the pre-accident levels, (129)I concentrations in most samples remained higher in winter and spring-summer. The high (129)I levels in winter and spring-summer are most likely attributed to local resuspension of the Fukushima-derived radionuclide-bearing fine soil particles deposited on land surfaces, and to re-emission through vegetation taking up (129)I from contaminated soil and water, respectively. The long-term declining trend suggests that the contribution of Fukushima-derived (129)I to the atmosphere has become smaller since 2014. Copyright © 2016 Elsevier Ltd. All rights reserved.
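    The (129)I/(127)I atomic ratio follows from converting the mass concentration of stable (127)I into atoms per litre; this is plain unit arithmetic. The example values below are drawn from the ranges quoted in the abstract but are not a matched measurement pair.

```python
AVOGADRO = 6.02214e23   # atoms/mol
M_I127 = 126.90447      # g/mol, stable iodine-127

def iodine_atomic_ratio(i129_atoms_per_l, i127_ug_per_l):
    """(129)I/(127)I atomic ratio from (129)I in atoms/L and (127)I in ug/L."""
    i127_atoms_per_l = i127_ug_per_l * 1e-6 / M_I127 * AVOGADRO
    return i129_atoms_per_l / i127_atoms_per_l

# Illustrative: 6.9e9 atoms/L of (129)I against 10 ug/L of (127)I
ratio = iodine_atomic_ratio(6.9e9, 10.0)
```

    For these inputs the ratio comes out near 1.5 × 10⁻⁷, consistent with the upper end of the range reported above.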

  5. Analysis of an all-digital maximum likelihood carrier phase and clock timing synchronizer for eight phase-shift keying modulation

    NASA Astrophysics Data System (ADS)

    Degaudenzi, Riccardo; Vanghi, Vieri

    1994-02-01

    An all-digital Trellis-Coded 8PSK (TC-8PSK) demodulator well suited for VLSI implementation, including maximum likelihood estimation decision-directed (MLE-DD) carrier phase and clock timing recovery, is introduced and analyzed. By simply removing the trellis decoder the demodulator can efficiently cope with uncoded 8PSK signals. The proposed MLE-DD synchronization algorithm requires one sample per symbol for the phase loop and two samples per symbol for the timing loop. The joint phase and timing discriminator characteristics are analytically derived and the numerical results checked by means of computer simulations. An approximate expression for the steady-state carrier phase and clock timing mean square error has been derived and successfully checked against simulation findings. The synchronizer's deviation from the Cramér-Rao bound is also discussed. Mean acquisition time for the digital synchronizer has also been computed and checked using the Monte Carlo simulation technique. Finally, TC-8PSK digital demodulator performance in terms of bit error rate and mean time to lose lock, including digital interpolators and synchronization loops, is presented.
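    A decision-directed phase discriminator for 8PSK can be sketched in its simplest generic form, the textbook slice-and-compare detector; this is an assumption-laden illustration, not the exact MLE discriminator derived in the paper.

```python
import numpy as np

def dd_phase_error_8psk(sample):
    """Decision-directed phase error: angle of the received sample minus
    the angle of the nearest 8PSK constellation point (spacing pi/4)."""
    phase = np.angle(sample)
    nearest = np.round(phase / (np.pi / 4)) * (np.pi / 4)
    return float(phase - nearest)

# A symbol transmitted at pi/4 with a +0.1 rad residual carrier phase offset
err = dd_phase_error_8psk(np.exp(1j * (np.pi / 4 + 0.1)))
```

    In a carrier phase loop this error term drives the NCO correction each symbol; the detector is reliable as long as the residual phase offset stays within ±π/8, half the 8PSK symbol spacing.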

  6. Enhancement of oxidation resistance of graphite foams by polymer derived-silicon carbide coating for concentrated solar power applications

    DOE PAGES

    Kim, T.; Singh, D.; Singh, M.

    2015-05-01

    Graphite foam with extremely high thermal conductivity has been investigated to enhance heat transfer of latent heat thermal energy storage (LHTES) systems. However, the use of graphite foam for elevated temperature applications (>600 °C) is limited due to poor oxidation resistance of graphite. In the present study, oxidation resistance of graphite foam coated with silicon carbide (SiC) was investigated. A pre-ceramic polymer derived coating (PDC) method was used to form a SiC coating on the graphite foams. Post coating deposition, the samples were analyzed by scanning electron microscopy and energy dispersive spectroscopy. The oxidation resistance of the PDC-SiC coating was quantified by measuring the weight of the samples at several measuring points. The experiments were conducted under static argon atmosphere in a furnace. After the experiments, oxidation rates (%/hour) were calculated to predict the lifetime of the graphite foams. The experimental results showed that the PDC-SiC coating could prevent the oxidation of graphite foam under static argon atmosphere up to 900 °C.

  7. Physical therapists' perceptions of the roles of the physical therapist assistant.

    PubMed

    Robinson, A J; McCall, M; DePalma, M T; Clayton-Krasinski, D; Tingley, S; Simoncelli, S; Harnish, L

    1994-06-01

    This longitudinal study investigated physical therapists' perceptions of the roles of physical therapist assistants (PTAs). In 1986, a questionnaire describing 79 physical therapy activities was distributed to a random sample (n = 400) of physical therapists derived from the American Physical Therapy Association (APTA) membership. In 1992, a similar questionnaire was distributed to a representative sample (n = 400) of physical therapists derived from the APTA membership. Response rates were 53% and 55% in 1986 and 1992, respectively. Respondents indicated whether each activity was included in the documentation describing PTA roles. Results revealed considerable agreement between therapists' perceptions of PTA roles and those outlined by PTA practice guidelines, and these perceptions changed little over time. Discriminant analyses suggested that therapists' perceptions of PTA roles were, in general, not predicted by supervisory experience with PTAs, therapist experience, or content of entry-level professional education curricula. Generally, therapists' perceptions of PTA roles are consistent with published practice guidelines. Therapists' perceptions on selected activities, however, were incongruent with PTA practice guidelines, suggesting the potential for inefficient or inappropriate utilization of the PTA in the delivery of selected services.

  8. A novel passive water sampler for in situ sampling of antibiotics.

    PubMed

    Chen, Chang-Er; Zhang, Hao; Jones, Kevin C

    2012-05-01

    Passive water sampling has several advantages over active methods; it provides time-integrated data, can save time and cost compared to active methods, and can yield high spatial resolution data through co-deployment of simple, cheap units. However, one problem with many sampler designs in current use is that their uptake rates for trace substances of interest are flow-rate dependent, thereby requiring calibration data and other information to enable water concentrations to be derived from the mass per sampler. However, the 'family' of samplers employing the principle of diffusive gradients in thin films (DGT) provides an in situ means of quantitatively measuring labile species in aquatic systems without field calibration. So far, this technique has only been tested and applied to inorganic substances: metals, radionuclides, nutrients, etc. Design and application of DGT for trace organic contaminants ('o-DGT') would be of widespread interest. This study describes the laboratory testing and performance characteristics of o-DGT, with the antibiotic sulfamethoxazole (SMX) as a model compound and XAD18 as the novel binding agent. o-DGT uptake of SMX increased with time and decreased with diffusion layer thickness, confirming the principle for SMX. XAD18 showed sufficiently high capacity for SMX for routine field applications. o-DGT measurement of SMX was independent of pH (6-9) and ionic strength (0.001-0.1 M) and not affected by flow rate once above static conditions. The diffusion coefficient of SMX in the sampler was measured using an independent diffusion cell and information is presented to allow temperature correction and derivation of aqueous concentrations from deployed samplers. The potential use of o-DGT for in situ measurement of pharmaceutical antibiotics is confirmed by this study and applications are briefly discussed.
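    Deriving an aqueous concentration from an accumulated mass uses the standard DGT relation C = M·Δg/(D·A·t). The deployment numbers below are placeholders for illustration, not the calibrated values for SMX/XAD18 reported in the study.

```python
def dgt_concentration(mass, delta_g, diff_coeff, area, time):
    """Standard DGT equation: aqueous concentration C = M * dg / (D * A * t).
    Units must be mutually consistent, e.g. ng, cm, cm^2/s, cm^2, s -> ng/cm^3."""
    return mass * delta_g / (diff_coeff * area * time)

# Placeholder deployment: 10 ng accumulated, 0.1 cm diffusive layer,
# D = 1e-5 cm^2/s, 1 cm^2 exposure window, 1e5 s (~28 h) deployment
c = dgt_concentration(10.0, 0.1, 1e-5, 1.0, 1e5)
```

    The temperature dependence enters through D, which is why the study provides the diffusion coefficient and a temperature correction for field deployments.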

  9. Empirically based assessment and taxonomy of psychopathology for ages 1½-90+ years: Developmental, multi-informant, and multicultural findings.

    PubMed

    Achenbach, Thomas M; Ivanova, Masha Y; Rescorla, Leslie A

    2017-11-01

    Originating in the 1960s, the Achenbach System of Empirically Based Assessment (ASEBA) comprises a family of instruments for assessing problems and strengths for ages 1½-90+ years. The aim here is to provide an overview of the ASEBA, related research, and future directions for empirically based assessment and taxonomy. Standardized, multi-informant ratings of transdiagnostic dimensions of behavioral, emotional, social, and thought problems are hierarchically scored on narrow-spectrum syndrome scales, broad-spectrum internalizing and externalizing scales, and a total problems (general psychopathology) scale. DSM-oriented and strengths scales are also scored. The instruments and scales have been iteratively developed from assessments of clinical and population samples of hundreds of thousands of individuals. Items, instruments, scales, and norms are tailored to different kinds of informants for ages 1½-5, 6-18, 18-59, and 60-90+ years. To take account of differences between informants' ratings, parallel instruments are completed by parents, teachers, youths, adult probands, and adult collaterals. Syndromes and Internalizing/Externalizing scales derived from factor analyses of each instrument capture variations in patterns of problems that reflect different informants' perspectives. Confirmatory factor analyses have supported the syndrome structures in dozens of societies. Software displays scale scores in relation to user-selected multicultural norms for the age and gender of the person being assessed, according to ratings by each type of informant. Multicultural norms are derived from population samples in 57 societies on every inhabited continent. Ongoing and future research includes multicultural assessment of elders; advancing transdiagnostic progress and outcomes assessment; and testing higher order structures of psychopathology. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. The absolute counting of red cell-derived microparticles with red cell bead by flow rate based assay.

    PubMed

    Nantakomol, Duangdao; Imwong, Malika; Soontarawirat, Ingfar; Kotjanya, Duangporn; Khakhai, Chulalak; Ohashi, Jun; Nuchnoi, Pornlada

    2009-05-01

    Activation of red blood cells is associated with the formation of red cell-derived microparticles (RMPs). Analysis of circulating RMPs is becoming more refined and clinically useful. A quantitative Trucount tube method is the conventional method used for quantitating RMPs. In this study, we validated a quantitative method called "flow rate based assay using red cell bead (FCB)" to measure circulating RMPs in the peripheral blood of healthy subjects. Citrated blood samples collected from 30 healthy subjects were used to determine the RMP count by double labeling with annexin V-FITC and anti-glycophorin A-PE. The absolute RMP numbers were measured by FCB, and the results were compared with the Trucount or with flow rate based calibration (FR). Statistical correlation and agreement were analyzed using linear regression and Bland-Altman analysis. There was no significant difference in the absolute number of RMPs quantitated by FCB when compared with the two reference methods, the Trucount tube and the FR method. The absolute RMP count obtained from the FCB method was highly correlated with those obtained from the Trucount tube (r(2) = 0.98, mean bias 4 cell/microl, limit of agreement [LOA] -20.3 to 28.3 cell/microl) and the FR method (r(2) = 1, mean bias 10.3 cell/microl, and LOA -5.5 to 26.2 cell/microl). This study demonstrates that FCB is suitable and more affordable for RMP quantitation in clinical samples. This method is low cost and interchangeable with the latex bead-based method for generating absolute counts in resource-limited areas. (c) 2008 Clinical Cytometry Society.

  11. Terrestrial gamma radiation baseline mapping using ultra low density sampling methods.

    PubMed

    Kleinschmidt, R; Watson, D

    2016-01-01

    Baseline terrestrial gamma radiation maps are indispensable for providing basic reference information that may be used in assessing the impact of a radiation related incident, performing epidemiological studies, remediating land contaminated with radioactive materials, and assessing land use applications and resource prospectivity. For a large land mass, such as Queensland, Australia (over 1.7 million km²), it is prohibitively expensive and practically difficult to undertake detailed in-situ radiometric surveys of this scale. It is proposed that an existing, ultra-low density sampling program already undertaken for the purpose of a nationwide soil survey project be utilised to develop a baseline terrestrial gamma radiation map. Geoelement data derived from the National Geochemistry Survey of Australia (NGSA) was used to construct a baseline terrestrial gamma air kerma rate map, delineated by major drainage catchments, for Queensland. Three drainage catchments (sampled at the catchment outlet) spanning low, medium and high radioelement concentrations were selected for validation of the methodology using radiometric techniques including in-situ measurements and soil sampling for high resolution gamma spectrometry, and comparative non-radiometric analysis. A Queensland mean terrestrial air kerma rate, as calculated from the NGSA outlet sediment uranium, thorium and potassium concentrations, of 49 ± 69 nGy h⁻¹ (n = 311, 3σ 99% confidence level) is proposed as being suitable for use as a generic terrestrial air kerma rate background range. Validation results indicate that catchment outlet measurements are representative of the range of results obtained across the catchment and that the NGSA geoelement data is suitable for calculation and mapping of terrestrial air kerma rate. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
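    Converting soil radioelement activity concentrations to an outdoor terrestrial dose rate in air is commonly done with the UNSCEAR (2000) coefficients; whether the NGSA-based map used exactly this conversion is not stated in the abstract, so treat the sketch below as a generic illustration.

```python
def terrestrial_dose_rate_nGy_h(k40_bq_kg, u238_bq_kg, th232_bq_kg):
    """Outdoor absorbed dose rate in air 1 m above ground (nGy/h) from soil
    activity concentrations (Bq/kg), using UNSCEAR 2000 coefficients."""
    return 0.0417 * k40_bq_kg + 0.462 * u238_bq_kg + 0.604 * th232_bq_kg

# World-average-like soil activities (illustrative values only)
rate = terrestrial_dose_rate_nGy_h(400.0, 35.0, 30.0)
```

    For these typical activities the estimate lands near 51 nGy/h, the same order as the Queensland mean quoted above (though that agreement is coincidental here).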

  12. An Hα Imaging Survey of the Low-surface-brightness Galaxies Selected from the Fall Sky Region of the 40% ALFALFA H I Survey

    NASA Astrophysics Data System (ADS)

    Lei, Feng-Jie; Wu, Hong; Du, Wei; Zhu, Yi-Nan; Lam, Man-I.; Zhou, Zhi-Min; He, Min; Jin, Jun-Jie; Cao, Tian-Wen; Zhao, Pin-Song; Yang, Fan; Wu, Chao-Jian; Li, Hong-Bin; Ren, Juan-Juan

    2018-03-01

    We present the observed Hα flux and derived star formation rates (SFRs) for a fall sample of low-surface-brightness galaxies (LSBGs). The sample is selected from the fall sky region of the 40% ALFALFA H I Survey–SDSS DR7 photometric data, and all the Hα images were obtained using the 2.16 m telescope, operated by the National Astronomy Observatories, Chinese Academy of Sciences. A total of 111 LSBGs were observed and Hα flux was measured in 92 of them. Though almost all the LSBGs in our sample are H I-rich, their SFRs, derived from the extinction and filter-transmission-corrected Hα flux, are less than 1 M⊙ yr⁻¹. LSBGs and star-forming galaxies have similar H I surface densities, but LSBGs have much lower SFRs and SFR surface densities than star-forming galaxies. Our results show that LSBGs deviate from the Kennicutt–Schmidt law significantly, which indicates that they have low star formation efficiency. The SFRs of LSBGs are close to average SFRs over a Hubble time and support previous arguments that most of the LSBGs are stable systems and tend to seldom contain strong interactions or major mergers in their star formation histories.
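    SFRs derived from Hα luminosity conventionally use the Kennicutt (1998) calibration; whether this exact zero-point was adopted in the paper is an assumption, so the sketch is illustrative.

```python
def sfr_from_halpha(l_halpha_erg_s):
    """Kennicutt (1998) calibration:
    SFR [M_sun/yr] = 7.9e-42 * L(Halpha) [erg/s]."""
    return 7.9e-42 * l_halpha_erg_s

# An illustrative low Halpha luminosity, of the kind expected for an LSBG
sfr = sfr_from_halpha(1.0e40)
```

    A luminosity of 10⁴⁰ erg/s maps to well under 1 M⊙ yr⁻¹, consistent with the low SFRs reported for the sample (the extinction and filter-transmission corrections are applied to the flux before this step).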

  13. Evaluation of the Biotoxicity of Tree Wood Ashes in Zebrafish Embryos.

    PubMed

    Consigli, Veronica; Guarienti, Michela; Bilo, Fabjola; Benassi, Laura; Depero, Laura E; Bontempi, Elza; Presta, Marco

    2016-10-01

    Ashes derived from biomass combustion and used as soil fertilizers can pose environmental and human health risks, related to leaching of heavy metals and other potentially toxic elements. Tree wood ash composition may vary depending on geographical location and surrounding industrial processes. In this study, we evaluated the biotoxicity of lixiviated tree wood ash samples from trees of the Ash (Fraxinus), Cherry (Prunus), Hazel (Corylus), and Black locust (Robinia) genera collected in an industrialized region in Northern Italy. Elemental chemical analysis of the samples was performed by total reflection X-ray fluorescence technique and their biotoxicity was assessed in zebrafish (Danio rerio) embryos. Ashes from Ash, Cherry, and Hazel trees, but not Black locust trees, had a high concentration of heavy metals and other potentially toxic elements. Accordingly, a dose-dependent increase in mortality rate and morphological and teratogenic defects was observed in zebrafish embryos treated with lixiviated Ash, Cherry, and Hazel tree wood samples, whereas the toxicity of Black locust tree wood ashes was negligible. In conclusion, lixiviated wood ashes from different plants show a different content of toxic elements that correlates with their biotoxic effects on zebrafish embryos. Tree wood ashes derived from biomass combustion may represent a potential risk for the environment and human health.

  14. Evaluating understanding of popular press reports of health research.

    PubMed

    Yeaton, W H; Smith, D; Rogers, K

    1990-01-01

    This research assessed the ability of a sample of persons on a college campus to understand media reports of health research. Three or four articles on each of five contemporary health topics (dietary cholesterol and heart disease, treatment for breast cancer, starch blockers, drug treatment for heart disease, test tube skin) were selected from widely circulated newspapers (e.g., New York Times) and magazines (e.g., Newsweek). A sample of 144 college students responded to content-based and application-based questions derived from photocopies of these popular press articles. The overall rate of reader misunderstanding approached 40% and generally fell between one third and one half for each of 16 articles representing five health topics. Several strengths and weaknesses of the research are considered as they relate to the accuracy of estimated error rates and to the generality of study findings. The implications of these findings for other areas of health (e.g., AIDS risk factor research) are also discussed.

  15. Influence of Sampling Effort on the Estimated Richness of Road-Killed Vertebrate Wildlife

    NASA Astrophysics Data System (ADS)

    Bager, Alex; da Rosa, Clarissa A.

    2011-05-01

    Road-killed mammals, birds, and reptiles were collected weekly from highways in southern Brazil in 2002 and 2005. The objective was to assess variation in estimates of road-kill impacts on species richness produced by different sampling efforts, and to provide information to aid in the experimental design of future sampling. Richness observed in weekly samples was compared with sampling for different periods. In each period, the list of road-killed species was evaluated based on estimates of the community structure derived from the weekly samplings, and on the presence of the ten species most subject to road mortality and of threatened species. Weekly samples were sufficient only for reptiles and mammals, considered separately. Richness estimated from the biweekly samples was equal to that found in the weekly samples, and gave satisfactory results for sampling the most abundant and threatened species. The ten most affected species showed constant road-mortality rates, independent of sampling interval, and also maintained their dominance structure. Birds required greater sampling effort. When the composition of road-killed species varies seasonally, it is necessary to take biweekly samples for a minimum of one year. Weekly or more-frequent sampling for periods longer than two years is necessary to provide a reliable estimate of total species richness.

  16. Influence of sampling effort on the estimated richness of road-killed vertebrate wildlife.

    PubMed

    Bager, Alex; da Rosa, Clarissa A

    2011-05-01

    Road-killed mammals, birds, and reptiles were collected weekly from highways in southern Brazil in 2002 and 2005. The objective was to assess variation in estimates of road-kill impacts on species richness produced by different sampling efforts, and to provide information to aid in the experimental design of future sampling. Richness observed in weekly samples was compared with sampling for different periods. In each period, the list of road-killed species was evaluated based on estimates of the community structure derived from the weekly samplings, and on the presence of the ten species most subject to road mortality and of threatened species. Weekly samples were sufficient only for reptiles and mammals, considered separately. Richness estimated from the biweekly samples was equal to that found in the weekly samples, and gave satisfactory results for sampling the most abundant and threatened species. The ten most affected species showed constant road-mortality rates, independent of sampling interval, and also maintained their dominance structure. Birds required greater sampling effort. When the composition of road-killed species varies seasonally, it is necessary to take biweekly samples for a minimum of one year. Weekly or more-frequent sampling for periods longer than two years is necessary to provide a reliable estimate of total species richness.
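    The effect of sampling interval on observed richness can be sketched by subsampling simulated weekly species lists (an illustrative community with an abundance skew, not the Brazilian road-kill data). Because records kept at interval 4 are a subset of those kept at interval 2, observed richness can only stay equal or decrease as effort drops.

```python
import random

def observed_richness(weekly_records, keep_every):
    """Species richness recovered when only every `keep_every`-th
    weekly sample is retained."""
    kept = weekly_records[::keep_every]
    return len({sp for week in kept for sp in week})

# Simulated 104 weeks of road-kill records; lower indices are commoner species
random.seed(1)
pool = list(range(60))
weights = [1.0 / (i + 1) for i in pool]          # abundance skew
weekly = [random.choices(pool, weights=weights, k=5) for _ in range(104)]

r_weekly = observed_richness(weekly, 1)
r_biweekly = observed_richness(weekly, 2)
r_monthly = observed_richness(weekly, 4)
```

    Abundant species persist in every subsample (mirroring the constant rates of the ten most-affected species), while rare species are the first lost as the interval grows.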

  17. Understanding the Derivative through the Calculus Triangle

    ERIC Educational Resources Information Center

    Weber, Eric; Tallman, Michael; Byerley, Cameron; Thompson, Patrick W.

    2012-01-01

    Typical treatments of the derivative do not clearly convey the idea that the derivative function represents the original function's rate of change. Revealing the relationship between a function and its rate-of-change function for static values of "x" does not facilitate productive ways of thinking about generating the rate-of-change function or…

  18. Judged effectiveness of threat and coping appraisal anti-speeding messages.

    PubMed

    Cathcart, Rachel L; Glendon, A Ian

    2016-11-01

    Using a young driver sample, this experimental study sought to identify which combinations of threat-appraisal (TA) and coping-appraisal (CA) messages derived from protection motivation theory (PMT) participants would judge as most effective for themselves, and for other drivers. The criterion variable was reported intention to drive within a signed speed limit. All possible TA/CA combinations of 18 previously highly-rated anti-speeding messages were presented both simultaneously and sequentially. These represented PMT's three TA components: severity, vulnerability, and rewards, and three CA components: self-efficacy, response efficacy, and response costs. Eighty-eight young drivers (34 males) each rated 54 messages for perceived effectiveness for self and other drivers. Messages derived from the TA severity component were judged the most effective. Response cost messages were most effective for females. Reverse third-person effects were found for both females and males, which suggested that combining TA and CA components may increase the perceived relevance of anti-speeding messages for males. The findings have potential value for creating effective roadside anti-speeding messages, meriting further investigation in field studies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Rating Communication in GP Consultations: The Association Between Ratings Made by Patients and Trained Clinical Raters

    PubMed Central

    Burt, Jenni; Abel, Gary; Elmore, Natasha; Newbould, Jenny; Davey, Antoinette; Llanwarne, Nadia; Maramba, Inocencio; Paddison, Charlotte; Benson, John; Silverman, Jonathan; Elliott, Marc N.; Campbell, John; Roland, Martin

    2016-01-01

    Patient evaluations of physician communication are widely used, but we know little about how these relate to professionally agreed norms of communication quality. We report an investigation into the association between patient assessments of communication quality and an observer-rated measure of communication competence. Consent was obtained to video record consultations with Family Practitioners in England, following which patients rated the physician’s communication skills. A sample of consultation videos was subsequently evaluated by trained clinical raters using an instrument derived from the Calgary-Cambridge guide to the medical interview. Consultations scored highly for communication by clinical raters were also scored highly by patients. However, when clinical raters judged communication to be of lower quality, patient scores ranged from “poor” to “very good.” Some patients may be inhibited from rating poor communication negatively. Patient evaluations can be useful for measuring relative performance of physicians’ communication skills, but absolute scores should be interpreted with caution. PMID:27698072

  20. Serial Assessment of Trauma Care Capacity in Ghana in 2004 and 2014.

    PubMed

    Stewart, Barclay T; Quansah, Robert; Gyedu, Adam; Boakye, Godfred; Abantanga, Francis; Ankomah, James; Donkor, Peter; Mock, Charles

    2016-02-01

    Trauma care capacity assessments in developing countries have generated evidence to support advocacy, detailed baseline capabilities, and informed targeted interventions. However, serial assessments to determine the effect of capacity improvements or changes over time have rarely been performed. To compare the availability of trauma care resources in Ghana between 2004 and 2014 to assess the effects of a decade of change in the trauma care landscape and derive recommendations for improvements. Capacity assessments were performed using direct inspection and structured interviews derived from the World Health Organization's Guidelines for Essential Trauma Care. In Ghana, 10 hospitals in 2004 and 32 hospitals in 2014 were purposively sampled to represent those most likely to care for injuries. Clinical staff, administrators, logistic/procurement officers, and technicians/biomedical engineers who interacted, directly or indirectly, with trauma care resources were interviewed at each hospital. Availability of items for trauma care was rated from 0 (complete absence) to 3 (fully available). Factors contributing to deficiency in 2014 were determined for items rated lower than 3. Each item rated lower than 3 at a specific hospital was defined as a hospital-item deficiency. Scores for total number of hospital-item deficiencies were derived for each contributing factor. There were significant improvements in mean ratings for trauma care resources: district-level (smaller) hospitals had a mean rating of 0.8 for all items in 2004 vs 1.3 in 2014 (P = .002); regional (larger) hospitals had a mean rating of 1.1 in 2004 vs 1.4 in 2014 (P = .01). However, a number of critical deficiencies remain (eg, chest tubes, diagnostics, and orthopedic and neurosurgical care; mean ratings ≤ 2). Leading contributing factors were item absence (503 hospital-item deficiencies), lack of training (335 hospital-item deficiencies), and stockout of consumables (137 hospital-item deficiencies). 
There has been significant improvement in trauma care capacity during the past decade in Ghana; however, critical deficiencies remain and require urgent redress to avert preventable death and disability. Serial capacity assessment is a valuable tool for monitoring efforts to strengthen trauma care systems, identifying what has been successful, and highlighting needs.

  1. Use of a (137)Cs re-sampling technique to investigate temporal changes in soil erosion and sediment mobilisation for a small forested catchment in southern Italy.

    PubMed

    Porto, Paolo; Walling, Des E; Alewell, Christine; Callegari, Giovanni; Mabit, Lionel; Mallimo, Nicola; Meusburger, Katrin; Zehringer, Markus

    2014-12-01

    Soil erosion and both its on-site and off-site impacts are increasingly seen as a serious environmental problem across the world. The need for an improved evidence base on soil loss and soil redistribution rates has directed attention to the use of fallout radionuclides, and particularly (137)Cs, for documenting soil redistribution rates. This approach possesses important advantages over more traditional means of documenting soil erosion and soil redistribution. However, one key limitation of the approach is the time-averaged or lumped nature of the estimated erosion rates. In nearly all cases, these will relate to the period extending from the main period of bomb fallout to the time of sampling. Increasing concern for the impact of global change, particularly that related to changing land use and climate change, has frequently directed attention to the need to document changes in soil redistribution rates within this period. Re-sampling techniques, which should be distinguished from repeat-sampling techniques, have the potential to meet this requirement. As an example, the use of a re-sampling technique to derive estimates of the mean annual net soil loss from a small (1.38 ha) forested catchment in southern Italy is reported. The catchment was originally sampled in 1998 and samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimate of mean annual erosion for the period 1954-1998 with that for the period 1999-2013. The availability of measurements of sediment yield from the catchment for parts of the overall period made it possible to compare the results provided by the (137)Cs re-sampling study with the estimates of sediment yield for the same periods. In order to compare the estimates of soil loss and sediment yield for the two different periods, it was necessary to establish the uncertainty associated with the individual estimates. 
In the absence of a generally accepted procedure for such calculations, key factors influencing the uncertainty of the estimates were identified and a procedure developed. The results of the study demonstrated that there had been no significant change in mean annual soil loss in recent years and this was consistent with the information provided by the estimates of sediment yield from the catchment for the same periods. The study demonstrates the potential for using a re-sampling technique to document recent changes in soil redistribution rates. Copyright © 2014. Published by Elsevier Ltd.

  2. Testing Photoionization Calculations Using Chandra X-ray Spectra

    NASA Technical Reports Server (NTRS)

    Kallman, Tim

    2008-01-01

    A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest, the interpretation of an observed quantity, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and, if so, cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focusing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.

  3. Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis

    NASA Technical Reports Server (NTRS)

    Kallman, Tim

    2006-01-01

    A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest, the interpretation of an observed quantity, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and, if so, cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focusing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.

  5. Photocatalytic and antibacterial properties of Au-TiO2 nanocomposite on monolayer graphene: From experiment to theory

    NASA Astrophysics Data System (ADS)

    He, Wangxiao; Huang, Hongen; Yan, Jin; Zhu, Jian

    2013-11-01

    The Au-TiO2 nanocomposite on monolayer graphene (GTA) was synthesized by sequentially depositing titanium dioxide particles and gold nanoparticles on a graphene sheet, and its formation was analyzed in our work. The structural, morphological, and physicochemical properties of the samples were thoroughly investigated by UV-Vis spectrophotometry, Raman spectroscopy, Fourier transform infrared spectroscopy, atomic force microscopy, scanning electron microscopy, and transmission electron microscopy. The photocatalytic performance of GTA, graphene (GR), TiO2, and the TiO2-graphene nanocomposite (GT) was comparatively studied for the degradation of methyl orange, and GTA was found to have the highest performance among all samples. More importantly, the antibacterial performance of this novel composite against Gram-positive bacteria, Gram-negative bacteria, and fungi was superior to that of GR, TiO2, and GT. The results of biomolecule oxidation tests suggested that the antimicrobial action is driven by oxidative stress on both the membrane and antioxidant systems. In addition, the rates of the two decisive processes during the photocatalytic reaction, the charge-transfer rate (kCT) and the electron-hole recombination rate (kR), have been studied by perturbation theory, radiation theory, and Schottky barrier theory. Calculation and derivation results show that GTA possesses superior charge separation and transfer rates, which explains its excellent oxidation properties.

  6. Received response based heuristic LDPC code for short-range non-line-of-sight ultraviolet communication.

    PubMed

    Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian

    2017-03-06

    Through slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Numerical simulation shows good agreement with the experimental results. Based on the received response characteristics, a heuristic check matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.

  7. Coeval Starburst and AGN Activity in the CDFS

    NASA Astrophysics Data System (ADS)

    Brusa, M.; Fiore, F.

    2009-10-01

    Here we present a study of the host galaxy properties of obscured Active Galactic Nuclei (AGN) detected in the CDFS 1 Ms observation and for which deep K-band observations obtained with ISAAC@VLT are available. The aim of this study is to characterize the host galaxies of obscured AGN in terms of their stellar masses, star formation rates, and specific star formation rates. To this purpose we refined the X-ray/optical association of 179 1 Ms sources in the MUSIC area, using a three-band (optical, K, and IRAC) catalog for the counterpart search, and we derived the rest-frame properties from SED fitting. We found that the hosts of obscured AGN at z>1 are luminous, massive, red galaxies with significant star formation episodes still ongoing in about 50% of the sample.

  8. Close encounters and collisions of comets with the earth

    NASA Technical Reports Server (NTRS)

    Sekanina, Z.; Yeomans, D. K.

    1984-01-01

    A computer search for earth-approaching comets among those listed in Marsden's (1983) updated orbit catalog has identified 36 cases in which the minimum separation distance was less than 2500 earth radii. A strong representation of short-period comets in the sample is noted, and the constant rate of close-approaching comets over the last 300 years is interpreted to suggest a lack of long-period comets intrinsically fainter than an absolute magnitude of about 11. A comet-earth collision rate derived from the statistics of these close encounters implies an average period of 33-64 million years between any two events. This rate is comparable with the frequency of geologically recent global catastrophes which appear to be associated with extraterrestrial object impacts, such as the Cretaceous-Tertiary extinction 65 million years ago and the late Eocene event 34 million years ago.

  9. Long-term background denudation rates of southern and southeastern Brazilian watersheds estimated with cosmogenic 10Be

    NASA Astrophysics Data System (ADS)

    Sosa Gonzalez, Veronica; Bierman, Paul R.; Fernandes, Nelson F.; Rood, Dylan H.

    2016-09-01

    In comparison to humid temperate regions of the Northern Hemisphere, less is known about the long-term (millennial scale) background rates of erosion in Southern Hemisphere tropical watersheds. In order to better understand the rate at which watersheds in southern and southeastern Brazil erode, and the relationship of that erosion to climate and landscape characteristics, we made new measurements of in situ produced 10Be in river sediments and we compiled all extant measurements from this part of the country. New data from 14 watersheds in the states of Santa Catarina (n = 7) and Rio de Janeiro (n = 7) show that erosion rates vary there from 13 to 90 m/My (mean = 32 m/My; median = 23 m/My) and that the difference between erosion rates of basins we sampled in the two states is not significant. Sampled basin area ranges between 3 and 14,987 km2, mean basin elevation between 235 and 1606 m, and mean basin slope between 11 and 29°. Basins sampled in Rio de Janeiro, including three that drain the Serra do Mar escarpment, have an average basin slope of 19°, whereas the average slope for the Santa Catarina basins is 14°. Mean basin slope (R2 = 0.73) and annual precipitation (R2 = 0.57) are most strongly correlated with erosion in the basins we studied. At three sites where we sampled river sand and cobbles, the 10Be concentration in river sand was greater than in the cobbles, suggesting that these grain sizes are sourced from different parts of the landscape. Compiling all cosmogenic 10Be-derived erosion rates previously published for southern and southeastern Brazil watersheds to date (n = 76) with our 14 sampled basins, we find that regional erosion rates (though low) are higher than those of watersheds also located on other passive margins, including Namibia and southeastern North America. Brazilian basins erode at a pace similar to escarpments in southeastern North America.
Erosion rates in southern and southeastern Brazil are directly and positively related to mean basin slope (R2 = 0.33) and weakly but significantly to mean annual precipitation (R2 = 0.05). These relationships are weaker when considering all southern and southeastern Brazil samples than they are in our smaller, localized data set. We find that smaller, steeper headwater catchments (many on escarpments) erode faster than the larger, higher-order but lower slope catchments. Erosion in southern and southeastern Brazil appears to be controlled largely by mean basin slope with lesser influence by climate and lithology.
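
    Cosmogenic 10Be erosion rates of the kind compiled above are typically obtained from a steady-state balance between nuclide production, radioactive decay, and removal by erosion. The sketch below illustrates the standard spallation-only approximation; the production rate, attenuation length, and density values are generic assumptions for illustration, not the calibration used in this study.

```python
import math

# Steady-state, spallation-only approximation linking an in situ 10Be
# concentration N (atoms/g of quartz) to a surface erosion rate.
# All constants below are illustrative, commonly quoted values.
LAMBDA_10BE = math.log(2) / 1.387e6   # 10Be decay constant (1/yr), T1/2 ~ 1.387 Myr
ATT_LENGTH = 160.0                    # spallation attenuation length (g/cm^2)
RHO = 2.7                             # rock density (g/cm^3)

def erosion_rate_m_per_My(N, P):
    """Erosion rate (m/My) from concentration N (at/g) and local surface
    production rate P (at/g/yr): eps = (P/N - lambda) * Lambda_att / rho."""
    eps_cm_per_yr = (P / N - LAMBDA_10BE) * ATT_LENGTH / RHO
    return eps_cm_per_yr * 1e4        # cm/yr -> m/My

def concentration_at_steady_state(eps_m_per_My, P):
    """Inverse relation: N = P / (lambda + rho * eps / Lambda_att)."""
    eps_cm_per_yr = eps_m_per_My / 1e4
    return P / (LAMBDA_10BE + RHO * eps_cm_per_yr / ATT_LENGTH)
```

    A faster-eroding basin exports quartz sooner, so it accumulates a lower 10Be concentration; the two functions above are exact inverses of each other under the steady-state assumption.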

  10. A Sub-Sampling Approach for Data Acquisition in Gamma Ray Emission Tomography

    NASA Astrophysics Data System (ADS)

    Fysikopoulos, Eleftherios; Kopsinis, Yannis; Georgiou, Maria; Loudos, George

    2016-06-01

    State-of-the-art data acquisition systems for small-animal-imaging gamma ray detectors often rely on free-running Analog to Digital Converters (ADCs) and high-density Field Programmable Gate Array (FPGA) devices for digital signal processing. In this work, a sub-sampling acquisition approach is proposed which exploits a priori information regarding the shape of the obtained detector pulses. Output pulse shape depends on the response of the scintillation crystal, the photodetector's properties, and amplifier/shaper operation. Using these known characteristics of the detector pulses prior to digitization, one can model the voltage pulse derived from the shaper (a low-pass filter, last in the front-end electronics chain) in order to reduce the required sampling rate of the ADCs. Pulse shape estimation is then feasible by fitting a small number of measurements. In particular, the proposed sub-sampling acquisition approach relies on a bi-exponential model of the pulse shape. We show that the properties of the pulse that are relevant for Single Photon Emission Computed Tomography (SPECT) event detection (i.e., position and energy) can be calculated by collecting just a small fraction of the number of samples usually collected in the data acquisition systems used so far. Compared to the standard digitization process, the proposed sub-sampling approach allows the use of free-running ADCs with the sampling rate reduced by a factor of 5. Two small detectors consisting of Cerium-doped Gadolinium Aluminum Gallium Garnet (Gd3Al2Ga3O12:Ce or GAGG:Ce) pixelated arrays (array elements: 2 × 2 × 5 mm3 and 1 × 1 × 10 mm3, respectively) coupled to a Position Sensitive Photomultiplier Tube (PSPMT) were used for experimental evaluation. The two detectors were used to obtain raw images and energy histograms under 140 keV and 661.7 keV irradiation, respectively.
The sub-sampling acquisition technique (10 MHz sampling rate) was compared with a standard acquisition method (52 MHz sampling rate), in terms of energy resolution and image signal to noise ratio for both gamma ray energies. The Levenberg-Marquardt (LM) non-linear least-squares algorithm was used, in post processing, in order to fit the acquired data with the proposed model. The results showed that analog pulses prior to digitization are being estimated with high accuracy after fitting with the bi-exponential model.
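
    The core idea, recovering pulse parameters from a handful of samples once the pulse shape is known a priori, can be sketched as follows. This is a simplified illustration, not the authors' implementation: the bi-exponential time constants are assumed already known from detector characterization (the paper fits the full model with Levenberg-Marquardt), which reduces the amplitude (energy) estimate to linear least squares, and all numeric values are assumptions.

```python
import numpy as np

# Sub-sampling sketch: with a known bi-exponential shape, the pulse
# amplitude (proportional to deposited energy) is recoverable from a
# 10 MHz sample grid, ~5x below a 52 MHz full-rate acquisition.
TAU_RISE, TAU_DECAY = 20e-9, 200e-9   # assumed rise/decay constants (s)

def pulse_shape(t):
    """Normalized bi-exponential shape f(t) = exp(-t/td) - exp(-t/tr)."""
    return np.exp(-t / TAU_DECAY) - np.exp(-t / TAU_RISE)

def fit_amplitude(t_samples, v_samples):
    """Linear least-squares amplitude estimate A = <v, f> / <f, f>."""
    f = pulse_shape(t_samples)
    return float(np.dot(v_samples, f) / np.dot(f, f))

rng = np.random.default_rng(0)
A_true = 1.0
t_sub = np.arange(0, 1e-6, 1 / 10e6)           # ~10 samples over a 1 us window
v_sub = A_true * pulse_shape(t_sub) + rng.normal(0, 0.01, t_sub.size)

A_hat = fit_amplitude(t_sub, v_sub)            # amplitude from the sparse samples
```

    Because the noise enters linearly, the estimator's standard error scales with the noise level divided by the energy of the sampled shape, so even ten noisy samples pin down the amplitude to a few percent here.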

  11. Taking a step back: Himalayan erosion as seen from Bangladesh

    NASA Astrophysics Data System (ADS)

    Lupker, M.; France-Lanord, C.; Lavé, J.; Blard, P.; Galy, V.

    2012-12-01

    The Himalayan range represents the archetype of mountain building and is considered in many studies as the locus of intense interactions between climate, denudation and tectonics. A better understanding of these interactions requires that the flux of material removed from the system through erosion is known. The products of Himalayan erosion are exported to the Bengal fan and the Indian Ocean by two major rivers: the Ganga and Brahmaputra. These rivers provide the opportunity to quantify Himalayan denudation rates as they integrate surface and tectonic processes across the entire basin. Basin-wide erosion or denudation rates have classically been derived from the gauging of sediment fluxes. By accounting for the inherent spatial and temporal variability of sediment concentration in rivers, sediment budgets yield an average denudation rate over observational periods ranging from years to decades. Cosmogenic nuclides such as 10-Be allow the estimation of basin-wide denudation rates averaged over typical time scales of hundreds to thousands of years, from a single measurement in river sediments. We compare these methods for the case of the Ganga basin, which drains the central part of the Himalayan range. By using a distal point of view, i.e. by sampling and evaluating the sediment flux at the outlet of the Ganga in Bangladesh, we are able to propose an average denudation rate for the entire central part of the Himalayan range. This sampling location offers the benefit of integrating the entire basin, and its distance from the sediment source makes it less prone to perturbations in the headwaters. However, the effects of 500 to 1000 km of floodplain transfer on the sedimentary signal need to be correctly evaluated. The gauged sediment flux can mainly be impacted by the sequestration of sediments in the floodplain. For the Ganga basin, sequestration is limited to ca. 10 % of the eroded sediment flux as deduced from geochemical mass balance approaches [1]. 
For their part, cosmogenic-derived denudation rates in Bangladesh may also be biased by exposure to cosmic rays during sediment transfer in the floodplain. The comparison of the 10-Be concentration of sediments in the main Himalayan rivers upstream of the floodplain with sediments in Bangladesh, together with modeling approaches, suggests that this effect is nearly negligible [2]. The 10-Be concentration in sediments sampled in Bangladesh can therefore be used to infer the denudation rate of the entire range drained by the basin. Gauged sediment fluxes and 10-Be in sediments constrain the Himalayan denudation rate to ca. 0.8 and 1.0 mm/yr, respectively. Both independent methods yield similar denudation rates. However, the uncertainties on both methods remain high, which does not allow us to speculate on the origin of the small difference between the two rates. [1] Lupker et al., 2011 - JGR Earth Surf. 116 [2] Lupker et al., 2012 - EPSL 333-334 - p146:156

  12. Sediment accumulation and net storage determined by field observation and numerical modelling for an extensive tropical floodplain: Beni River, Bolivian Llanos

    NASA Astrophysics Data System (ADS)

    Schwendel, Arved; Aalto, Rolf; Nicholas, Andrew

    2014-05-01

    Lowland floodplains in subsiding basins form major depocentres responsible for the storage and cycling of large quantities of fine sediment and associated nutrients and contaminants. Obtaining reliable estimates of sediment storage in such environments is problematic due to the high degree of spatial and temporal variability exhibited by overbank sediment accumulation rates, combined with the logistical difficulties inherent in sampling locations far away from the channel. Further complexity is added by the high channel mobility, which recycles sediment and reconfigures the relationships between channel and floodplain morphology, sediment transport and overbank sedimentation. Estimates of floodplain accretion can be derived using a range of numerical sedimentation models of varying complexity. However, data required for model calibration are rarely available for the vast floodplains associated with tropical rivers. We present results from a study of channel-floodplain sediment exchange fluxes on the Rio Beni, a highly dynamic, tropical sand-bed tributary of the Amazon in northern Bolivia. The Beni transports high concentrations of suspended sediment, generated in the river's Andean headwaters, and disperses this material across an extensive floodplain wetland that experiences annual inundation over an area of up to 40000 km2. We utilise estimates of overbank sedimentation rates over the past century derived from 210Pb analysis of floodplain sediment cores collected along a 375 km length of channel, including sampling a range of channel-floodplain configurations within the channel belt and on the distal floodplain (up to 60 km from the channel). These data are used to investigate spatial and temporal variations in rates of floodplain sediment accumulation for a range of grain sizes. 
Specifically, we examine relationships between sedimentation rate and distance from the channel, and characterise within channel belt variability in sedimentation linked to patterns of channel migration and associated levee reworking. Field data are used to inform a hydrodynamically-driven model of overbank sedimentation and to derive uncertainty-bounded estimates of total floodplain sediment accumulation. Sediment exchange due to planform channel mobility is quantified using a numerical model of meander migration, calibrated using analysis of remote sensing imagery to determine rates and geometry of channel migration. Our combined data and model analysis allows the construction of a mean annual sediment budget for the Beni, which suggests channel-sediment exchange fluxes in the order of 100 Mt a-1, equivalent to 10% of the sediment load of the mainstem Amazon.

  13. Accounting for planet-shaped planetary nebulae

    NASA Astrophysics Data System (ADS)

    Sabach, Efrat; Soker, Noam

    2018-01-01

    By following the evolution of several observed exoplanetary systems, we show that by lowering the mass-loss rate of single solar-like stars during their two giant branches, these stars will swallow their planets at the tip of their asymptotic giant branch (AGB) phase. This will most likely lead the stars to form elliptical planetary nebulae (PNe). Under the traditional mass-loss rate these stars will hardly form observable PNe. Stars with a lower mass-loss rate as we propose, about 15 per cent of the traditional mass-loss rate of single stars, leave the AGB with much higher luminosities than what traditional evolution produces. Hence, the assumed lower mass-loss rate might also account for the presence of bright PNe in old stellar populations. We present the evolution of four exoplanetary systems that represent stellar masses in the range of 0.9-1.3 M⊙. The justification for this low mass-loss rate is our assumption that the stellar samples that were used to derive the traditional average single-star mass-loss rate were contaminated by stars that suffer binary interaction.

  14. Phylogenetic analysis accounting for age-dependent death and sampling with applications to epidemics.

    PubMed

    Lambert, Amaury; Alexander, Helen K; Stadler, Tanja

    2014-07-07

    The reconstruction of phylogenetic trees based on viral genetic sequence data sequentially sampled from an epidemic provides estimates of the past transmission dynamics, by fitting epidemiological models to these trees. To our knowledge, none of the epidemiological models currently used in phylogenetics can account for recovery rates and sampling rates dependent on the time elapsed since transmission, i.e. age of infection. Here we introduce an epidemiological model where infectives leave the epidemic, by either recovery or sampling, after some random time which may follow an arbitrary distribution. We derive an expression for the likelihood of the phylogenetic tree of sampled infectives under our general epidemiological model. The analytic concept developed in this paper will facilitate inference of past epidemiological dynamics and provide an analytical framework for performing very efficient simulations of phylogenetic trees under our model. The main idea of our analytic study is that the non-Markovian epidemiological model giving rise to phylogenetic trees growing vertically as time goes by can be represented by a Markovian "coalescent point process" growing horizontally by the sequential addition of pairs of coalescence and sampling times. As examples, we discuss two special cases of our general model, described in terms of influenza and HIV epidemics. Though phrased in epidemiological terms, our framework can also be used for instance to fit macroevolutionary models to phylogenies of extant and extinct species, accounting for general species lifetime distributions. Copyright © 2014 Elsevier Ltd. All rights reserved.
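
    The model's premise, infectives leaving the epidemic by recovery or sampling after a random time with an arbitrary (non-exponential) distribution, is straightforward to simulate forward in time. The sketch below is a toy illustration under assumed parameter values (gamma-distributed infectious periods, constant transmission rate, fixed sampling probability at removal); it is not the authors' coalescent point process machinery.

```python
import random

def simulate(beta=1.5, shape=3.0, scale=1.0, p_sample=0.3, t_max=10.0, seed=1):
    """Toy age-of-infection epidemic: each infective transmits at rate beta
    while infectious, stays infectious for a gamma-distributed duration
    (an *arbitrary* distribution stands in for the exponential of Markovian
    models), then is sampled with probability p_sample or recovers unseen."""
    rng = random.Random(seed)
    infection_times = [0.0]            # start from a single index case
    sampled, recovered = 0, 0
    i = 0
    while i < len(infection_times) and len(infection_times) < 5000:
        t0 = infection_times[i]
        duration = rng.gammavariate(shape, scale)   # infectious period
        # Transmissions form a Poisson process of rate beta on [t0, t0+duration]
        t = t0
        while True:
            t += rng.expovariate(beta)
            if t > t0 + duration or t > t_max:
                break
            infection_times.append(t)
        if t0 + duration <= t_max:                  # removal observed in window
            if rng.random() < p_sample:
                sampled += 1
            else:
                recovered += 1
        i += 1
    return len(infection_times), sampled, recovered
```

    Only the sampled individuals would appear as tips of the reconstructed phylogeny; the recovered ones leave no trace, which is exactly the censoring the likelihood in the paper has to account for.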

  15. 76 FR 37030 - Financial Derivatives Transactions To Offset Interest Rate Risk; Investment and Deposit Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-24

    ..., with certain exceptions, are financial derivatives such as futures, options, interest rate swaps and... to evaluate any hedge transaction using derivatives must include the ability to capture all options... NATIONAL CREDIT UNION ADMINISTRATION 12 CFR Part 703 Financial Derivatives Transactions To Offset...

  16. The influence of reagent type on the kinetics of ultrafine coal flotation

    USGS Publications Warehouse

    Read, R.B.; Camp, L.R.; Summers, M.S.; Rapp, D.M.

    1989-01-01

    A kinetic study has been conducted to determine the influence of reagent type on flotation rates of ultrafine coal. Two ultrafine coal samples, the Illinois No. 5 (Springfield) and Pittsburgh No. 8, have been evaluated with various reagent types in order to derive the rate constants for coal (kc), ash (ka), and pyrite (kp). The reagents used in the study include anionic surfactants, anionic surfactant-alcohol mixtures, and frothing alcohols. In general, the surfactant-alcohol mixtures tend to float ultrafine coal at a rate three to four times faster than either pure alcohols or pure anionic surfactants. Pine oil, a mixture of terpene alcohols and hydrocarbons, was an exception to this finding; it exhibited higher rate constants than the pure aliphatic alcohols or other pure anionic surfactants studied; this may be explained by the fact that the sample of pine oil used (70% alpha-terpineol) acted as a frother/collector system similar to alcohol/kerosene. The separation efficiencies of ash and pyrite from coal, as evidenced by the ratios kc/ka and kc/kp, tend to indicate, however, that commercially available surfactant-alcohol mixtures are not as selective as pure alcohols such as 2-ethyl-1-hexanol or methylisobutylcarbinol. Some distinct differences in various rate constants, or their ratios, were noted between the two coals studied, and are possibly attributable to surface chemistry effects. © 1989.
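
    Rate constants such as kc, ka, and kp conventionally come from the classical first-order flotation model R(t) = R_inf * (1 - exp(-k t)), where R(t) is cumulative recovery at time t and R_inf the ultimate recovery. A minimal sketch of extracting k by linearizing timed recovery data, with purely synthetic numbers that are not the paper's measurements:

```python
import math

def rate_constant(times, recoveries, r_inf):
    """First-order flotation fit: -ln(1 - R/R_inf) = k*t, so k is the
    least-squares slope through the origin of that linearized relation."""
    y = [-math.log(1.0 - r / r_inf) for r in recoveries]
    return sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)

# Synthetic coal and ash recovery curves (fractions) at 0.5-4 min;
# assumed ultimate recoveries and true rate constants for illustration.
times = [0.5, 1.0, 2.0, 4.0]
r_inf_coal, k_coal_true = 0.95, 1.2
r_inf_ash, k_ash_true = 0.40, 0.3
coal = [r_inf_coal * (1 - math.exp(-k_coal_true * t)) for t in times]
ash = [r_inf_ash * (1 - math.exp(-k_ash_true * t)) for t in times]

kc = rate_constant(times, coal, r_inf_coal)
ka = rate_constant(times, ash, r_inf_ash)
selectivity = kc / ka    # analogous to the kc/ka selectivity ratio in the text
```

    With noiseless synthetic data the fit recovers the true constants exactly; with real timed flotation data the same linearization gives the kc/ka and kc/kp ratios used above to compare reagent selectivity.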

  17. A probabilistic approach to the assessment of some life history pattern parameters in a Middle Pleistocene human population.

    PubMed

    Durand, A I; Ipina, S L; Bermúdez de Castro, J M

    2000-06-01

Parameters of a Middle Pleistocene human population such as the expected length of the female reproductive period (E(Y)), the expected interbirth interval (E(X)), the survival rate (τ) for females after the expected reproductive period, the rate (φ₂) of women who, given that they reach first birth, do not survive to the end of the expected reproductive period, and the female infant plus juvenile mortality rate (φ₁) have been assessed from a probabilistic standpoint, provided that such a population were stationary. The hominid sample studied, from the Sima de los Huesos (SH) cave site, Sierra de Atapuerca (Spain), is the most exhaustive human fossil sample currently available. Results suggest that the Atapuerca (SH) sample can derive from a stationary population. Further, in the case that the expected reproductive period ends between 37 and 40 yr of age, then 24 ≲ E(Y) ≲ 27 yr, E(X) = 3 yr, 0.224

  18. User preference as quality markers of paediatric web sites.

    PubMed

    Hernández-Borges, Angel A; Macías-Cervi, Pablo; Gaspar-Guardado, Asunción; Torres-Alvarez De Arcaya, María Luisa; Ruíz-Rabaza, Ana; Jiménez-Sosa, Alejandro

    2003-09-01

    Little is known about the ability of internet users to distinguish the best medical resources online, and how their preferences, measured by usage and popularity indexes, correlate with established quality criteria. Our objective was to analyse whether the number of inbound links and/or daily visits to a sample of paediatric web pages are reliable quality markers of the pages. Two-year follow-up study of 363 web pages with paediatric information. The number of inbound links and the average number of daily visits to the pages were calculated on a yearly basis. In addition, their rates of compliance with the codes of conduct, guidelines and/or principles of three international organizations were evaluated. The quality code most widely met by the sample web pages was the Health on the Net Foundation Code of Conduct (overall rate, 60.2%). Sample pages showed a low degree of compliance with principles related to privacy, confidentiality and electronic commerce (overall rate less than 45%). Most importantly, we observed a moderate, significant correlation between compliance with quality criteria and the number of inbound links (p < 0.001). However, no correlation was found between the number of daily visits to a page and its degree of compliance with the principles. Some indexes derived from the analysis of webmasters' hyperlinks could be reliable quality markers of medical web resources.

  19. Wound healing outcomes: Using big data and a modified intent-to-treat method as a metric for reporting healing rates.

    PubMed

    Ennis, William J; Hoffman, Rachel A; Gurtner, Geoffrey C; Kirsner, Robert S; Gordon, Hanna M

    2017-08-01

    Chronic wounds are increasing in prevalence and are a costly problem for the US healthcare system and throughout the world. Typically outcomes studies in the field of wound care have been limited to small clinical trials, comparative effectiveness cohorts and attempts to extrapolate results from claims databases. As a result, outcomes in real world clinical settings may differ from these published studies. This study presents a modified intent-to-treat framework for measuring wound outcomes and measures the consistency of population based outcomes across two distinct settings. In this retrospective observational analysis, we describe the largest to date, cohort of patient wound outcomes derived from 626 hospital based clinics and one academic tertiary care clinic. We present the results of a modified intent-to-treat analysis of wound outcomes as well as demographic and descriptive data. After applying the exclusion criteria, the final analytic sample includes the outcomes from 667,291 wounds in the national sample and 1,788 wounds in the academic sample. We found a consistent modified intent to treat healing rate of 74.6% from the 626 clinics and 77.6% in the academic center. We recommend that a standard modified intent to treat healing rate be used to report wound outcomes to allow for consistency and comparability in measurement across providers, payers and healthcare systems. © 2017 by the Wound Healing Society.

  20. The production rate of cosmogenic deuterium at the Moon's surface

    NASA Astrophysics Data System (ADS)

    Füri, Evelyn; Deloule, Etienne; Trappitsch, Reto

    2017-09-01

The hydrogen (D/H) isotope ratio is a key tracer for the source of planetary water. However, secondary processes such as solar wind implantation and cosmic ray induced spallation reactions have modified the primordial D/H signature of 'water' in all rocks and soils recovered on the Moon. Here, we re-evaluate the production rate of cosmogenic deuterium (D) at the Moon's surface through ion microprobe analyses of hydrogen isotopes in olivines from eight Apollo 12 and 15 mare basalts. These in situ measurements are complemented by CO2 laser extraction-static mass spectrometry analyses of cosmogenic noble gas nuclides (3He, 21Ne, 38Ar). Cosmic ray exposure (CRE) ages of the mare basalts, derived from their cosmogenic 21Ne content, range from 60 to 422 Ma. These CRE ages are 35% higher, on average, than the published values for the same samples. The amount of D detected in the olivines increases linearly with increasing CRE ages, consistent with a production rate of (2.17 ± 0.11) × 10⁻¹² mol (g rock)⁻¹ Ma⁻¹. This value is more than twice as high as previous estimates for the production of D by galactic cosmic rays, indicating that for water-poor lunar samples, i.e., samples with water concentrations ≤50 ppm, corrected D/H ratios have been severely overestimated.
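Since the production rate is the slope of cosmogenic D against CRE age, the fit reduces to a one-parameter least squares through the origin. A sketch with synthetic data constructed from the reported rate (the real olivine measurements are not reproduced here):

```python
import numpy as np

# CRE ages spanning the reported 60-422 Ma range (illustrative values)
cre_age_ma = np.array([60.0, 120.0, 210.0, 305.0, 422.0])
P_TRUE = 2.17e-12                      # mol (g rock)^-1 Ma^-1, from the abstract
d_mol_per_g = P_TRUE * cre_age_ma      # perfectly linear data for this sketch

# Least-squares slope through the origin: P = sum(x*y) / sum(x*x)
p_hat = np.sum(cre_age_ma * d_mol_per_g) / np.sum(cre_age_ma**2)
print(f"fitted production rate = {p_hat:.2e} mol/(g Ma)")
```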

  1. The production rate of cosmogenic deuterium at the Moon's surface

    DOE PAGES

    Füri, Evelyn; Deloule, Etienne; Trappitsch, Reto

    2017-07-03

The hydrogen (D/H) isotope ratio is a key tracer for the source of planetary water. However, secondary processes such as solar wind implantation and cosmic ray induced spallation reactions have modified the primordial D/H signature of ‘water’ in all rocks and soils recovered on the Moon. We re-evaluate the production rate of cosmogenic deuterium (D) at the Moon's surface through ion microprobe analyses of hydrogen isotopes in olivines from eight Apollo 12 and 15 mare basalts. Furthermore, these in situ measurements are complemented by CO2 laser extraction-static mass spectrometry analyses of cosmogenic noble gas nuclides (3He, 21Ne, 38Ar). Cosmic ray exposure (CRE) ages of the mare basalts, derived from their cosmogenic 21Ne content, range from 60 to 422 Ma. These CRE ages are 35% higher, on average, than the published values for the same samples. The amount of D detected in the olivines increases linearly with increasing CRE ages, consistent with a production rate of (2.17 ± 0.11) × 10⁻¹² mol (g rock)⁻¹ Ma⁻¹. This value is more than twice as high as previous estimates for the production of D by galactic cosmic rays, indicating that for water-poor lunar samples, i.e., samples with water concentrations ≤50 ppm, corrected D/H ratios have been severely overestimated.

  2. Are special read alignment strategies necessary and cost-effective when handling sequencing reads from patient-derived tumor xenografts?

    PubMed

    Tso, Kai-Yuen; Lee, Sau Dan; Lo, Kwok-Wai; Yip, Kevin Y

    2014-12-23

Patient-derived tumor xenografts in mice are widely used in cancer research and have become important in developing personalized therapies. When these xenografts are subject to DNA sequencing, the samples can contain various amounts of mouse DNA, and it has been unclear how the mouse reads affect data analyses. We conducted comprehensive simulations to compare three alignment strategies at different mutation rates, read lengths, sequencing error rates, human-mouse mixing ratios and sequenced regions. We also sequenced a nasopharyngeal carcinoma xenograft and a cell line to test how the strategies work on real data. We found the "filtering" and "combined reference" strategies performed better than aligning reads directly to the human reference in terms of alignment and variant calling accuracy. The combined reference strategy was particularly good at reducing false negative variant calls without significantly increasing the false positive rate. In some scenarios the performance gain of these two special handling strategies was too small for special handling to be cost-effective, but it was found crucial when false non-synonymous SNVs should be minimized, especially in exome sequencing. Our study systematically analyzes the effects of mouse contamination in the sequencing data of human-in-mouse xenografts. Our findings provide information for designing data analysis pipelines for these data.

  3. Osteotome-Mediated Sinus Lift without Grafting Material: A Review of Literature and a Technique Proposal

    PubMed Central

    Taschieri, Silvio; Corbella, Stefano; Saita, Massimo; Tsesis, Igor; Del Fabbro, Massimo

    2012-01-01

Implant rehabilitation of the edentulous posterior maxilla may be a challenging procedure in the presence of insufficient bone volume for implant placement. Maxillary sinus augmentation, with or without grafting materials, aims to provide adequate bone volume. The aim of the present study was to systematically review the existing literature on transalveolar maxillary sinus augmentation without grafting materials and to propose and describe an osteotome-mediated approach in postextraction sites in combination with a platelet derivative. The systematic review showed that a high implant survival rate (more than 96% after 5 years) can be achieved even without grafting the site, with a low rate of complications. Available alveolar bone height before surgery was not correlated with survival rate. In the described case report, three implants were placed in the posterior maxilla after extraction of two teeth. An osteotome-mediated sinus lifting technique was performed with the use of a platelet derivative (PRGF); a synthetic bone substitute was used to fill the gaps between implant and socket walls. No complications occurred, and the implants were still successfully in place 1 year after prosthetic loading. The presented technique might represent a viable alternative for the treatment of the edentulous posterior maxilla with alveolar bone atrophy, though it needs to be validated by studies with a large sample size. PMID:22792108

  4. Standardizing Plasmodium falciparum infection prevalence measured via microscopy versus rapid diagnostic test.

    PubMed

    Mappin, Bonnie; Cameron, Ewan; Dalrymple, Ursula; Weiss, Daniel J; Bisanzio, Donal; Bhatt, Samir; Gething, Peter W

    2015-11-17

Large-scale mapping of Plasmodium falciparum infection prevalence relies on opportunistic assemblies of infection prevalence data arising from thousands of P. falciparum parasite rate (PfPR) surveys conducted worldwide. Variance in these data is driven by both signal, the true underlying pattern of infection prevalence, and a range of factors contributing to 'noise', including sampling error, differing age ranges of subjects and differing parasite detection methods. Whilst the former two noise components have been addressed in previous studies, the effect of the different diagnostic methods used to determine PfPR has not. In particular, the majority of PfPR data are based on positivity rates determined by either microscopy or rapid diagnostic test (RDT), yet these approaches are not equivalent; a method is therefore needed for standardizing RDT- and microscopy-based prevalence estimates prior to use in mapping. Twenty-five recent Demographic and Health Survey (DHS) datasets from sub-Saharan Africa provide child diagnostic test results derived using both RDT and microscopy for each individual. These prevalence estimates were aggregated across level-one administrative zones and a Bayesian probit regression model fit to the microscopy- versus RDT-derived prevalence relationship. An errors-in-variables approach was employed to account for sampling error in both the dependent and independent variables. In addition to the diagnostic outcome, RDT type, fever status and recent anti-malarial treatment were extracted from the datasets in order to analyse their effect on observed malaria prevalence. A strong non-linear relationship between the microscopy- and RDT-derived prevalence was found. The results of regressions stratified by the additional diagnostic variables (RDT type, fever status and recent anti-malarial treatment) indicate that there is a distinct and consistent difference in the relationship when the data are stratified by febrile status and RDT brand. The relationships defined in this research can be applied to RDT-derived PfPR data to effectively convert them to an estimate of the parasite prevalence expected using microscopy (or vice versa), thereby standardizing the dataset and improving the signal-to-noise ratio. Additionally, the results provide insight into the importance of RDT brand, febrile status and recent anti-malarial treatment for explaining inconsistencies between observed prevalence derived from different diagnostics.
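The microscopy-versus-RDT conversion can be sketched as a regression on the probit scale. The study fits a Bayesian errors-in-variables probit model; the simplified stand-in below uses an ordinary least-squares fit on invented aggregated prevalences, just to illustrate the conversion step:

```python
import numpy as np
from scipy.stats import norm

# Illustrative zone-level prevalences (made up; microscopy typically reads
# lower than RDT, as in the stylized numbers here)
p_rdt   = np.array([0.05, 0.15, 0.30, 0.50, 0.70])
p_micro = np.array([0.03, 0.10, 0.22, 0.40, 0.60])

# Fit probit(p_micro) = a + b * probit(p_rdt) by ordinary least squares
b, a = np.polyfit(norm.ppf(p_rdt), norm.ppf(p_micro), 1)

def rdt_to_microscopy(p):
    """Convert an RDT-derived prevalence to a microscopy-equivalent estimate."""
    return norm.cdf(a + b * norm.ppf(p))

print(f"RDT 40% -> microscopy-equivalent {rdt_to_microscopy(0.40):.1%}")
```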

  5. Single-Cell Growth Rates in Photoautotrophic Populations Measured by Stable Isotope Probing and Resonance Raman Microspectrometry

    PubMed Central

    Taylor, Gordon T.; Suter, Elizabeth A.; Li, Zhuo Q.; Chow, Stephanie; Stinton, Dallyce; Zaliznyak, Tatiana; Beaupré, Steven R.

    2017-01-01

A new method to measure growth rates of individual photoautotrophic cells by combining stable isotope probing (SIP) and single-cell resonance Raman (SCRR) microspectrometry is introduced. This report explores optimal experimental design and the theoretical underpinnings for quantitative responses of Raman spectra to cellular isotopic composition. Resonance Raman spectra of isogenic cultures of the cyanobacterium Synechococcus sp., grown in 13C-bicarbonate, revealed linear covariance between wavenumber (cm−1) shifts in dominant carotenoid Raman peaks and a broad range of cellular 13C fractional isotopic abundance. Single-cell growth rates were calculated from spectra-derived isotopic content and empirical relationships. Growth rates among any 25 cells in a sample varied considerably; the mean coefficient of variation, CV, was 29 ± 3% (σ/x̄), of which only ~2% was propagated analytical error. Instantaneous population growth rates measured independently by in vivo fluorescence also varied daily (CV ≈ 53%) and were statistically indistinguishable from single-cell growth rates at all but the lowest levels of cell labeling. SCRR censuses of mixtures prepared from Synechococcus sp. and T. pseudonana (a diatom) populations with varying 13C content and growth rates closely approximated predicted spectral responses and fractional labeling of cells added to the sample. This approach enables direct microspectrometric interrogation of isotopically- and phylogenetically-labeled cells and detects changes as small as 3% in cellular fractional labeling. This is the first description of a non-destructive technique to measure single-cell photoautotrophic growth rates based on Raman spectroscopy and well-constrained assumptions, while requiring few ancillary measurements. PMID:28824580
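Converting a cell's measured 13C fraction into a growth rate rests on a mixing relation between old (unlabeled) and new (labeled) biomass. The sketch below uses one generic exponential-growth formulation with invented numbers; the paper's empirical calibration may differ:

```python
import math

def sip_growth_rate(a0, a_t, a_label, t_days):
    """Specific growth rate (1/day) from isotopic labeling, assuming
    exponential growth with new biomass formed at the label's 13C fraction:
        a(t) = a_label - (a_label - a0) * exp(-mu * t)
    A generic SIP mixing model, not necessarily the paper's calibration."""
    return math.log((a_label - a0) / (a_label - a_t)) / t_days

# Cell starts at natural abundance (~1.1% 13C) and reaches 30% 13C after
# 2 days of growth in ~99% 13C-bicarbonate (hypothetical values):
mu = sip_growth_rate(0.011, 0.30, 0.99, 2.0)
print(f"mu = {mu:.3f} 1/day")
```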

  6. Automated sample exchange and tracking system for neutron research at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Rix, J. E.; Weber, J. K. R.; Santodonato, L. J.; Hill, B.; Walker, L. M.; McPherson, R.; Wenzel, J.; Hammons, S. E.; Hodges, J.; Rennich, M.; Volin, K. J.

    2007-01-01

An automated system for sample exchange and tracking in a cryogenic environment and under remote computer control was developed. Up to 24 sample "cans" per cycle can be inserted and retrieved in a programmed sequence. A video camera acquires a unique identification marked on the sample can to provide a record of the sequence. All operations are coordinated via a LABVIEW™ program that can be operated locally or over a network. The samples are contained in vanadium cans 6-10 mm in diameter, equipped with a hermetically sealed lid that interfaces with the sample handler. The system uses a closed-cycle refrigerator (CCR) for cooling. The sample was delivered to a precooling location at a temperature of ~25 K; after several minutes, it was moved onto a "landing pad" at ~10 K that locates the sample in the probe beam. After the sample was released onto the landing pad, the sample handler was retracted. Reading the sample identification and performing the exchange operation take approximately 2 min. The time to cool the sample from ambient temperature to ~10 K was approximately 7 min including precooling time; the cooling time increases to approximately 12 min if precooling is not used. Small differences in cooling rate were observed between sample materials and for different sample can sizes. Filling the sample well and the sample can with low-pressure helium is essential to provide heat transfer and to achieve useful cooling rates. A resistive heating coil can be used to offset the refrigeration so that temperatures up to ~350 K can be accessed and controlled using a proportional-integral-derivative (PID) control loop. The time for the landing pad to cool to ~10 K after it has been heated to ~240 K was approximately 20 min.
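The temperature control described at the end can be sketched as a textbook discrete PID loop driving a toy first-order thermal plant. All gains and plant coefficients below are invented for illustration; the real controller runs inside the LABVIEW™ program:

```python
# Minimal discrete PID step: heater power that offsets the CCR refrigeration
def pid_step(setpoint, measured, state, kp=8.0, ki=0.5, kd=1.0, dt=1.0):
    err = setpoint - measured
    state["i"] += err * dt                        # integral term accumulates
    d = (err - state["e_prev"]) / dt              # derivative of the error
    state["e_prev"] = err
    return kp * err + ki * state["i"] + kd * d    # heater power, arbitrary units

# Toy plant: the heater warms the pad while the CCR pulls it toward 10 K
temp = 10.0
state = {"i": 0.0, "e_prev": 0.0}
for _ in range(200):
    power = max(0.0, pid_step(240.0, temp, state))  # coil can only heat
    temp += 0.01 * power - 0.05 * (temp - 10.0)     # crude heat balance, dt = 1 s
print(f"temperature after 200 steps: {temp:.1f} K")
```

With these (invented) gains the loop settles near the 240 K setpoint, the integral term supplying the steady heater power needed to balance the refrigeration.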

  7. The CERAD Neuropsychologic Battery Total Score and the progression of Alzheimer disease.

    PubMed

    Rossetti, Heidi C; Munro Cullum, C; Hynan, Linda S; Lacritz, Laura H

    2010-01-01

To establish the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) neuropsychologic battery as a valid measure of cognitive progression in Alzheimer disease (AD) by deriving annualized CERAD Total Change Scores and corresponding confidence intervals in AD and controls from which to define clinically meaningful change. Subjects included 383 normal control (NC) and 655 AD subjects with serial data from the CERAD registry database. Annualized CERAD Total Change Scores were derived and Reliable Change Indexes (RCIs) calculated to establish statistically reliable change values. CERAD Change Scores were compared with annualized change scores from the Mini-Mental State Examination (MMSE), Clinical Dementia Rating Scale (CDR) Sum of Boxes, and Blessed Dementia Rating Scale (BDRS). For the CERAD Total Score, the AD sample showed significantly greater decline than the NC sample over the 4-year interval, with AD subjects declining an average of 22.2 points while the NCs improved an average of 2.8 points from baseline to last visit (Group × Time interaction: F(4,1031)=246.08, P<0.001). By Visit 3, the majority of AD subjects (65.2%) showed a degree of cognitive decline that fell outside the RCI. CERAD Change Scores correlated significantly (P<0.001) with MMSE (r=-0.66), CDR (r=-0.42), and BDRS (r=-0.38) change scores. Results support the utility of the CERAD Total Score as a measure of AD progression and provide comparative data for annualized change in CERAD Total Score and other summary measures.
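In the common Jacobson-Truax formulation, a Reliable Change Index like those mentioned above follows from the test's baseline standard deviation and test-retest reliability. A sketch with illustrative numbers (not the paper's actual statistics):

```python
import math

def reliable_change_index(x1, x2, sd_baseline, r_xx):
    """Jacobson-Truax RCI: observed change divided by the standard error of
    the difference. |RCI| > 1.96 marks change unlikely to be measurement
    error alone. (One common formulation; the paper's exact computation
    may differ.)"""
    sem = sd_baseline * math.sqrt(1.0 - r_xx)      # standard error of measurement
    se_diff = math.sqrt(2.0) * sem                 # SE of a difference score
    return (x2 - x1) / se_diff

# Hypothetical case: a 22-point drop on a scale with baseline SD 10 and
# test-retest reliability 0.90
rci = reliable_change_index(80.0, 58.0, 10.0, 0.90)
print(f"RCI = {rci:.2f}")  # well beyond -1.96, i.e. a reliable decline
```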

  8. Galaxy and Mass Assembly (GAMA): Mid-infrared Properties and Empirical Relations from WISE

    NASA Astrophysics Data System (ADS)

    Cluver, M. E.; Jarrett, T. H.; Hopkins, A. M.; Driver, S. P.; Liske, J.; Gunawardhana, M. L. P.; Taylor, E. N.; Robotham, A. S. G.; Alpaslan, M.; Baldry, I.; Brown, M. J. I.; Peacock, J. A.; Popescu, C. C.; Tuffs, R. J.; Bauer, A. E.; Bland-Hawthorn, J.; Colless, M.; Holwerda, B. W.; Lara-López, M. A.; Leschinski, K.; López-Sánchez, A. R.; Norberg, P.; Owers, M. S.; Wang, L.; Wilkins, S. M.

    2014-02-01

The Galaxy And Mass Assembly (GAMA) survey furnishes a deep redshift catalog that, when combined with the Wide-field Infrared Survey Explorer (WISE), allows us to explore for the first time the mid-infrared properties of >110,000 galaxies over 120 deg² to z ≈ 0.5. In this paper we detail the procedure for producing the matched GAMA-WISE catalog for the G12 and G15 fields, in particular characterizing and measuring resolved sources; the complete catalogs for all three GAMA equatorial fields will be made available through the GAMA public releases. The wealth of multiwavelength photometry and optical spectroscopy allows us to explore empirical relations between optically determined stellar mass (derived from synthetic stellar population models) and 3.4 μm and 4.6 μm WISE measurements. Similarly, dust-corrected Hα-derived star formation rates can be compared to 12 μm and 22 μm luminosities to quantify correlations that can be applied to large samples to z < 0.5. To illustrate the applications of these relations, we use the 12 μm star formation prescription to investigate the behavior of specific star formation within the GAMA-WISE sample and underscore the ability of WISE to detect star-forming systems at z ~ 0.5. Within galaxy groups (determined by a sophisticated friends-of-friends scheme), results suggest that galaxies with a neighbor within 100 h⁻¹ kpc have, on average, lower specific star formation rates than typical GAMA galaxies with the same stellar mass.

  9. New poly(ester urea) derived from L-leucine: electrospun scaffolds loaded with antibacterial drugs and enzymes.

    PubMed

    Díaz, Angélica; del Valle, Luis J; Tugushi, David; Katsarava, Ramaz; Puiggalí, Jordi

    2015-01-01

Electrospun scaffolds from an amino-acid-containing poly(ester urea) (PEU) were developed as promising materials in the biomedical field, specifically for tissue engineering applications. The selected poly(ester urea) was obtained in high yield and molecular weight by reaction of phosgene with a bis(α-aminoacyl)-α,ω-diol-diester monomer. The polymer, having L-leucine, 1,6-hexanediol and carbonic acid units, had a semicrystalline character and relatively high glass transition and melting temperatures. Furthermore, it was highly soluble in most organic solvents, an interesting feature that facilitated the electrospinning process and the effective incorporation of drugs with bactericidal activity (e.g. biguanide derivatives such as chlorhexidine and polyhexamethylenebiguanide) and enzymes (e.g. α-chymotrypsin) that accelerated the degradation process. Continuous micro/nanofibers were obtained under a wide range of processing conditions, with fiber diameters dependent on the drug and solvent used. Poly(ester urea) samples were degradable in media containing lipases and proteinases, but the degradation rate was highly dependent on the surface area, being notably greater for scaffolds than for films. The high hydrophobicity of the new scaffolds affected enzymatic degradability, since different weight loss rates were found depending on how samples were exposed to the medium (e.g. forced or non-forced immersion). The new scaffolds were biocompatible, as demonstrated by adhesion and proliferation assays performed with fibroblast and epithelial cells. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Infrared excesses in stars with and without planets using revised WISE photometry

    NASA Astrophysics Data System (ADS)

    Maldonado, Raul F.; Chavez, Miguel; Bertone, Emanuele; Cruz-Saenz de Miera, Fernando

    2017-11-01

We present an analysis of the potential prevalence of mid-infrared excesses in stars with and without planetary companions. Based on an extended database of stars detected with the Wide-field Infrared Survey Explorer (WISE) satellite, we studied two stellar samples: one with 236 planet hosts and another with 986 objects for which planets have been searched for but not found. We determined the presence of an excess over the photosphere by comparing the observed flux ratio at 22 and 12 μm (f22/f12) with the corresponding synthetic value derived from classical model photospheres. We found a detection rate of 0.85 per cent at 22 μm (two excesses) in the sample of stars with planets and 0.1 per cent (one detection) for the stars without planets. The difference in detection rate between the two samples is not statistically significant, a result that is independent of the different approaches found in the literature to define an excess in the wavelength range covered by WISE observations. As an additional result, we found that the WISE fluxes required a normalization procedure to make them compatible with synthetic data, probably pointing to a needed revision of the WISE data calibration.
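The excess criterion described above reduces to flagging sources whose observed 22/12 μm flux ratio sits significantly above the photospheric prediction. A minimal sketch, with an invented threshold and invented ratio uncertainty (the paper's actual cut is not given in the abstract):

```python
# Flag a k-sigma excess of the observed f22/f12 ratio over the synthetic
# photospheric value predicted by model atmospheres.
def has_excess(f22, f12, ratio_phot, sigma_ratio, k=3.0):
    return (f22 / f12 - ratio_phot) / sigma_ratio > k

# A nearly Rayleigh-Jeans photosphere gives a small, fixed f22/f12; suppose
# the models predict 0.28 with 0.03 scatter (hypothetical numbers):
print(has_excess(f22=5.0, f12=10.0, ratio_phot=0.28, sigma_ratio=0.03))  # True
print(has_excess(f22=3.0, f12=10.0, ratio_phot=0.28, sigma_ratio=0.03))  # False
```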

  11. Interaction of attentional and motor control processes in handwriting.

    PubMed

    Brown, T L; Donnenwirth, E E

    1990-01-01

    The interaction between attentional capacity, motor control processes, and strategic adaptations to changing task demands was investigated in handwriting, a continuous (rather than discrete) skilled performance. Twenty-four subjects completed 12 two-minute handwriting samples under instructions stressing speeded handwriting, normal handwriting, or highly legible handwriting. For half of the writing samples, a concurrent auditory monitoring task was imposed. Subjects copied either familiar (English) or unfamiliar (Latin) passages. Writing speed, legibility ratings, errors in writing and in the secondary auditory task, and a derived measure of the average number of characters held in short-term memory during each sample ("planning unit size") were the dependent variables. The results indicated that the ability to adapt to instructions stressing speed or legibility was substantially constrained by the concurrent listening task and by text familiarity. Interactions between instructions, task concurrence, and text familiarity in the legibility ratings, combined with further analyses of planning unit size, indicated that information throughput from temporary storage mechanisms to motor processes mediated the loss of flexibility effect. Overall, the results suggest that strategic adaptations of a skilled performance to changing task circumstances are sensitive to concurrent attentional demands and that departures from "normal" or "modal" performance require attention.

  12. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
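The "simple dependence on local average rain rate" is often cast as a power law, σ ≈ a·r̄ᵇ, whose exponent can be read off a log-log fit of inferred RMS error against grid-box mean rate. A sketch with invented data (this generic power-law form stands in for whatever model the abstract refers to):

```python
import numpy as np

# Hypothetical grid-box mean rain rates and inferred RMS sampling errors,
# constructed here to follow sigma ~ 0.5 * sqrt(rbar)
rbar  = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # mm/day
sigma = np.array([0.35, 0.50, 0.71, 1.00, 1.41])  # mm/day

# Log-log least squares: log(sigma) = log(a) + b * log(rbar)
b, log_a = np.polyfit(np.log(rbar), np.log(sigma), 1)
print(f"fitted exponent b = {b:.2f}")             # ~0.5 for these data
```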

  13. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds.

    PubMed

    Cruz-Marcelo, Alejandro; Ensor, Katherine B; Rosner, Gary L

    2011-06-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material.

  14. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds1

    PubMed Central

    Cruz-Marcelo, Alejandro; Ensor, Katherine B.; Rosner, Gary L.

    2011-01-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material. PMID:21765566

  15. Interactions of task and subject variables among continuous performance tests.

    PubMed

    Denney, Colin B; Rapport, Mark D; Chung, Kyong-Mee

    2005-04-01

    Contemporary models of working memory suggest that target paradigm (TP) and target density (TD) should interact as influences on error rates derived from continuous performance tests (CPTs). The present study evaluated this hypothesis empirically in a typically developing, ethnically diverse sample of children. The extent to which scores based on different combinations of these task parameters showed different patterns of relationship to age, intelligence, and gender was also assessed. Four continuous performance tests were derived by combining two target paradigms (AX and repeated letter target stimuli) with two levels of target density (8.3% and 33%). Variations in mean omission (OE) and commission (CE) error rates were examined within and across combinations of TP and TD. In addition, a nested series of structural equation models was utilized to examine patterns of relationship among error rates, age, intelligence, and gender. Target paradigm and target density interacted as influences on error rates. Increasing density resulted in higher OE and CE rates for the AX paradigm. In contrast, the high density condition yielded a decline in OE rates accompanied by a small increase in CEs using the repeated letter CPT. Target paradigms were also distinguishable on the basis of age when using OEs as the performance measure, whereas combinations of age and intelligence distinguished between density levels but not target paradigms using CEs as the dependent measure. Different combinations of target paradigm and target density appear to yield scores that are conceptually and psychometrically distinguishable. Consequently, developmentally appropriate interpretation of error rates across tasks may require (a) careful analysis of working memory and attentional resources required for successful performance, and (b) normative data bases that are differently stratified with respect to combinations of age and intelligence.

  16. Kinetic dissolution of carbonates and Mn oxides in acidic water: Measurement of in situ field rates and reactive transport modeling

    USGS Publications Warehouse

    Brown, J.G.; Glynn, P.D.

    2003-01-01

    The kinetics of carbonate and Mn oxide dissolution under acidic conditions were examined through the in situ exposure of pure phase samples to acidic ground water in Pinal Creek Basin, Arizona. The average long-term calculated in situ dissolution rates for calcite and dolomite were 1.65 × 10−7 and 3.64 × 10−10 mmol/(cm² s), respectively, which were about 3 orders of magnitude slower than rates derived in laboratory experiments by other investigators. Application of both in situ and lab-derived calcite and dolomite dissolution rates to equilibrium reactive transport simulations of a column experiment did not improve the fit to measured outflow chemistry: at the spatial and temporal scales of the column experiment, the use of an equilibrium model adequately simulated carbonate dissolution in the column. Pyrolusite (MnO2) exposed to acidic ground water for 595 days increased slightly in weight despite thermodynamic conditions that favored dissolution. This result might be related to a recent finding by another investigator that the reductive dissolution of pyrolusite is accompanied by the precipitation of a mixed Mn-Fe oxide species. In PHREEQC reactive transport simulations, the incorporation of Mn kinetics improved the fit between observed and simulated behavior at the column and field scales, although the column-fitted rate for Mn-oxide dissolution was about 4 orders of magnitude greater than the field-fitted rate. Remaining differences between observed and simulated contaminant transport trends at the Pinal Creek site were likely related to factors other than the Mn oxide dissolution rate, such as the concentration of Fe oxide surface sites available for adsorption, the effects of competition among dissolved species for available surface sites, or reactions not included in the model.
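    A surface-area-normalized dissolution rate of the kind reported above is conventionally derived from the mass lost by a mineral sample of known surface area over the exposure time. A minimal sketch of that arithmetic, with illustrative numbers rather than the study's data:

```python
# Surface-area-normalized dissolution rate: rate = (delta_m / M) / (A * t),
# reported in mmol/(cm^2 s) as in the abstract above.
def dissolution_rate(mass_loss_g, molar_mass_g_mol, area_cm2, time_s):
    """Return dissolution rate in mmol/(cm^2 s)."""
    mmol_dissolved = mass_loss_g / molar_mass_g_mol * 1000.0
    return mmol_dissolved / (area_cm2 * time_s)

# Hypothetical example: 0.5 g of calcite (molar mass 100.09 g/mol) lost
# from a 10 cm^2 sample over a 350-day in situ exposure.
rate = dissolution_rate(0.5, 100.09, 10.0, 350 * 86400)
```

    Note that the sample mass, area, and exposure time here are hypothetical; the point is only the unit conversion from weight loss to a rate comparable with the values quoted above.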

  17. Biocide-mediated corrosion of coiled tubing.

    PubMed

    Sharma, Mohita; An, Dongshan; Liu, Tao; Pinnock, Tijan; Cheng, Frank; Voordouw, Gerrit

    2017-01-01

    Coiled tubing corrosion was investigated for 16 field water samples (S5 to S20) from a Canadian shale gas field. Weight loss corrosion rates of carbon steel beads incubated with these field water samples averaged 0.2 mm/yr, but injection water sample S19 had 1.25±0.07 mm/yr. S19 had a most probable number (MPN) of zero acid-producing bacteria (APB), and incubation of S19 with carbon steel beads or coupons did not lead to major changes in microbial community composition. In contrast, other field water samples had APB most probable numbers of 10²/mL to 10⁷/mL, and incubation of these field water samples with carbon steel beads or coupons often gave large changes in microbial community composition. HPLC analysis indicated that all field water samples had elevated concentrations of bromide (average 1.6 mM), which may be derived from bronopol, which was used as a biocide. S19 had the highest bromide concentration (4.2 mM) and was the only water sample with a high concentration of active bronopol (13.8 mM, 2760 ppm). Corrosion rates increased linearly with bronopol concentration, as determined by weight loss of carbon steel beads, for experiments with S19, with filtered S19, and with bronopol dissolved in defined medium. This indicated that the high corrosion rate found for S19 was due to its high bronopol concentration. The corrosion rate of coiled tubing coupons also increased linearly with bronopol concentration, as determined by electrochemical methods. Profilometry measurements also showed formation of multiple pits on the surface of a coiled tubing coupon, with an average pit depth of 60 μm after 1 week of incubation with 1 mM bronopol. At the recommended dosage of 100 ppm, the corrosiveness of bronopol towards carbon steel beads was modest (0.011 mm/yr). Higher concentrations, which result if biocide is added repeatedly as commonly done in shale gas operations, are more corrosive and should be avoided. Overdosing may be avoided by assaying for residual biocide by HPLC, rather than by assaying for surviving bacteria.

  18. Biocide-mediated corrosion of coiled tubing

    PubMed Central

    An, Dongshan; Liu, Tao; Pinnock, Tijan; Cheng, Frank; Voordouw, Gerrit

    2017-01-01

    Coiled tubing corrosion was investigated for 16 field water samples (S5 to S20) from a Canadian shale gas field. Weight loss corrosion rates of carbon steel beads incubated with these field water samples averaged 0.2 mm/yr, but injection water sample S19 had 1.25±0.07 mm/yr. S19 had a most probable number (MPN) of zero acid-producing bacteria (APB), and incubation of S19 with carbon steel beads or coupons did not lead to major changes in microbial community composition. In contrast, other field water samples had APB most probable numbers of 10²/mL to 10⁷/mL, and incubation of these field water samples with carbon steel beads or coupons often gave large changes in microbial community composition. HPLC analysis indicated that all field water samples had elevated concentrations of bromide (average 1.6 mM), which may be derived from bronopol, which was used as a biocide. S19 had the highest bromide concentration (4.2 mM) and was the only water sample with a high concentration of active bronopol (13.8 mM, 2760 ppm). Corrosion rates increased linearly with bronopol concentration, as determined by weight loss of carbon steel beads, for experiments with S19, with filtered S19, and with bronopol dissolved in defined medium. This indicated that the high corrosion rate found for S19 was due to its high bronopol concentration. The corrosion rate of coiled tubing coupons also increased linearly with bronopol concentration, as determined by electrochemical methods. Profilometry measurements also showed formation of multiple pits on the surface of a coiled tubing coupon, with an average pit depth of 60 μm after 1 week of incubation with 1 mM bronopol. At the recommended dosage of 100 ppm, the corrosiveness of bronopol towards carbon steel beads was modest (0.011 mm/yr). Higher concentrations, which result if biocide is added repeatedly as commonly done in shale gas operations, are more corrosive and should be avoided. Overdosing may be avoided by assaying for residual biocide by HPLC, rather than by assaying for surviving bacteria. PMID:28746397

  19. Microbial Degradation of Lobster Shells to Extract Chitin Derivatives for Plant Disease Management.

    PubMed

    Ilangumaran, Gayathri; Stratton, Glenn; Ravichandran, Sridhar; Shukla, Pushp S; Potin, Philippe; Asiedu, Samuel; Prithiviraj, Balakrishnan

    2017-01-01

    Biodegradation of lobster shells by chitinolytic microorganisms is an environmentally safe approach to utilizing lobster processing wastes for chitin derivation. In this study, we report the degradation activities of two microbes, "S223" and "S224", isolated from soil samples, which had the highest rates of deproteinization, demineralization and chitinolysis among ten microorganisms screened. Isolates S223 and S224 had 27.3 and 103.8 protease units mg−1 protein and 12.3 and 11.2 μg ml−1 of calcium in their samples, respectively, after 1 week of incubation with raw lobster shells. Further, S223 contained 23.8 μg ml−1 of N-Acetylglucosamine on day 3, while S224 had 27.3 μg ml−1 on day 7 of incubation with chitin. Morphological observations and 16S rDNA sequencing suggested both isolates were Streptomyces. The culture conditions were optimized for efficient degradation of lobster shells, and chitinase (∼30 kDa) was purified from crude extract by affinity chromatography. The digested lobster shell extracts induced disease resistance in Arabidopsis by induction of defense-related genes (PR1 > 500-fold, PDF1.2 > 40-fold) upon Pseudomonas syringae and Botrytis cinerea infection. The study suggests that soil microbes aid in sustainable bioconversion of lobster shells and extraction of chitin derivatives that could be applied in plant protection.

  20. Microbial Degradation of Lobster Shells to Extract Chitin Derivatives for Plant Disease Management

    PubMed Central

    Ilangumaran, Gayathri; Stratton, Glenn; Ravichandran, Sridhar; Shukla, Pushp S.; Potin, Philippe; Asiedu, Samuel; Prithiviraj, Balakrishnan

    2017-01-01

    Biodegradation of lobster shells by chitinolytic microorganisms is an environmentally safe approach to utilizing lobster processing wastes for chitin derivation. In this study, we report the degradation activities of two microbes, “S223” and “S224”, isolated from soil samples, which had the highest rates of deproteinization, demineralization and chitinolysis among ten microorganisms screened. Isolates S223 and S224 had 27.3 and 103.8 protease units mg-1 protein and 12.3 and 11.2 μg ml-1 of calcium in their samples, respectively, after 1 week of incubation with raw lobster shells. Further, S223 contained 23.8 μg ml-1 of N-Acetylglucosamine on day 3, while S224 had 27.3 μg ml-1 on day 7 of incubation with chitin. Morphological observations and 16S rDNA sequencing suggested both isolates were Streptomyces. The culture conditions were optimized for efficient degradation of lobster shells, and chitinase (∼30 kDa) was purified from crude extract by affinity chromatography. The digested lobster shell extracts induced disease resistance in Arabidopsis by induction of defense-related genes (PR1 > 500-fold, PDF1.2 > 40-fold) upon Pseudomonas syringae and Botrytis cinerea infection. The study suggests that soil microbes aid in sustainable bioconversion of lobster shells and extraction of chitin derivatives that could be applied in plant protection. PMID:28529501

  1. Empirically derived subtypes of serious emotional disturbance in a large adolescent sample.

    PubMed

    Peiper, Nicholas; Clayton, Richard; Wilson, Richard; Illback, Robert; O'Brien, Elizabeth; Kerber, Richard; Baumgartner, Richard; Hornung, Carlton

    2015-06-01

    The heterogeneity of serious emotional disturbance has been thoroughly documented among adolescents with nationally representative data derived from structured interviews, although use of these interviews may not be feasible within the context of brief and self-administered school surveys. This study seeks to identify distinct subtypes of serious emotional disturbance in a large school-based sample. A total of 108,736 students fully completed the K6 scale that was included on the 2012 Kentucky Incentives for Prevention Survey. Latent class analysis was used to derive subtypes of serious emotional disturbance among students receiving a positive screen (n = 15,147). To determine significant predictors of class membership, adjusted rate ratios and 95 % confidence intervals were calculated using multinomial logistic regression. A four-class model was the most parsimonious, with four distinct subtypes emerging that varied by both symptom type and severity: comorbid moderate severity, comorbid high severity, anxious moderate severity, and depressed high severity. Age, gender, race/ethnicity, family structure, substance use, antisocial behavior, role impairments, and peer victimization were significant predictors of class membership, although the magnitude of these effects was stronger for the two high severity groups. Our results suggest heterogeneity of serious emotional disturbance by both symptom type and severity. Prevention programs may benefit by shifting focus from specific disorders to the core features of serious emotional disturbance, including psychological distress, high comorbidity, and role impairments.

  2. Statistical variability and confidence intervals for planar dose QA pass rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher

    Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high-density dose planes were on average 2%-5% higher than the respective %/DTA composite analysis (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were on average 2%-12% lower than with global maximum normalization (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors as well. Conclusions: Dose plane QA analysis can be greatly affected by the choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of the calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density. Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
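    The abstract models low-density pass rates with a binomial distribution over the sampled detectors. One standard way to turn an observed pass rate and detector count into a confidence interval is the Wilson score interval; the paper's exact construction may differ, but the idea can be sketched with the stdlib alone:

```python
import math

def wilson_interval(passed, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial pass rate,
    given `passed` passing points out of `n` sampled detectors."""
    p_hat = passed / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: a 1 detector/cm^2 grid over a 20 x 20 cm field gives
# ~400 sampled points; suppose 368 of them pass (92% observed pass rate).
lo, hi = wilson_interval(368, 400)
```

    The interval widens as the detector density (and hence `n`) drops, which is exactly the sampling uncertainty the authors argue should accompany low-density array pass rates.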

  3. Field-based evaluation of semipermeable membrane devices (SPMDs) as passive air samplers of polyaromatic hydrocarbons (PAHs)

    USGS Publications Warehouse

    Bartkow, M.E.; Huckins, J.N.; Muller, J.F.

    2004-01-01

    Semipermeable membrane devices (SPMDs) have been used as passive air samplers of semivolatile organic compounds in a range of studies. However, due to a lack of calibration data for polyaromatic hydrocarbons (PAHs), SPMD data have not been used to estimate air concentrations of target PAHs. In this study, SPMDs were deployed for 32 days at two sites in a major metropolitan area in Australia. High-volume active sampling systems (HiVols) were co-deployed at both sites. Using the HiVol air concentration data from one site, SPMD sampling rates were measured for 12 US EPA Priority Pollutant PAHs, and these values were then used to determine air concentrations at the second site from SPMD concentrations. Air concentrations were also measured at the second site with co-deployed HiVols to validate the SPMD results. PAHs mostly associated with the vapour phase (fluorene to pyrene) dominated both the HiVol and passive air samples. Reproducibility between replicate passive samplers was satisfactory (CV < 20%) for the majority of compounds. Sampling rates ranged between 0.6 and 6.1 m³ d⁻¹. SPMD-based air concentrations were calculated at the second site for each compound using these sampling rates, and the differences between SPMD-derived air concentrations and those measured using a HiVol were, on average, within a factor of 1.5. The dominant processes for the uptake of PAHs by SPMDs were also assessed. Using the SPMD method described herein, estimates of particulate-sorbed airborne PAHs with five rings or greater were within 1.8-fold of HiVol-measured values. © 2004 Elsevier Ltd. All rights reserved.
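    Converting an SPMD-accumulated analyte amount into an air concentration uses the calibrated sampling rate: in the linear (integrative) uptake regime, C_air = N / (R_s · t). A minimal sketch with hypothetical numbers (the study reports sampling rates between 0.6 and 6.1 m³/d, but the analyte amount and rate below are illustrative):

```python
def spmd_air_concentration(amount_ng, sampling_rate_m3_per_day, days):
    """Air concentration (ng/m^3) assuming linear (integrative) uptake:
    C_air = N / (R_s * t)."""
    return amount_ng / (sampling_rate_m3_per_day * days)

# Hypothetical: 480 ng of a PAH accumulated over a 32-day deployment
# with a measured sampling rate of 3.0 m^3/d.
c_air = spmd_air_concentration(480.0, 3.0, 32)  # ng/m^3
```

    The linear-uptake assumption only holds while the SPMD remains far from equilibrium with the air; compounds approaching equilibrium during the deployment require a more elaborate uptake model.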

  4. The utility of online panel surveys versus computer-assisted interviews in obtaining substance-use prevalence estimates in the Netherlands.

    PubMed

    Spijkerman, Renske; Knibbe, Ronald; Knoops, Kim; Van De Mheen, Dike; Van Den Eijnden, Regina

    2009-10-01

    Rather than using the traditional, costly method of personal interviews in a general population sample, substance-use prevalence rates can be derived more conveniently from data collected among members of an online access panel. To examine the utility of this method, we compared the outcomes of an online survey with those obtained with the computer-assisted personal interviews (CAPI) method. Data were gathered from a large sample of online panellists and from a two-stage stratified sample of the Dutch population using the CAPI method. The study setting was the Netherlands. The online sample comprised 57 125 Dutch online panellists (15-64 years) of Survey Sampling International LLC (SSI), and the CAPI cohort 7204 respondents (15-64 years). All participants answered identical questions about their use of alcohol, cannabis, ecstasy, cocaine and performance-enhancing drugs. The CAPI respondents were additionally asked about internet access and online panel membership. Both data sets were weighted statistically according to the distribution of demographic characteristics of the general Dutch population. Response rates were 35.5% (n = 20 282) for the online panel cohort and 62.7% (n = 4516) for the CAPI cohort. The data showed almost consistently lower substance-use prevalence rates for the CAPI respondents. Although the observed differences could be due to bias in both data sets, coverage and non-response bias were higher in the online panel survey. Despite its economic advantage, the online panel survey showed stronger non-response and coverage bias than the CAPI survey, leading to less reliable estimates of substance use in the general population. © 2009 The Authors. Journal compilation © 2009 Society for the Study of Addiction.

  5. MicroRNA array normalization: an evaluation using a randomized dataset as the benchmark.

    PubMed

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays.
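    The evaluation logic described above, comparing the markers called in the non-randomized data against the randomized benchmark, reduces to set arithmetic over marker identifiers. A minimal sketch with hypothetical marker sets (the miR names below are placeholders, not the study's markers):

```python
def tpr_and_fdr(called, benchmark):
    """True positive rate and false discovery rate of `called` markers
    relative to the `benchmark` set of true differential markers."""
    called, benchmark = set(called), set(benchmark)
    tp = len(called & benchmark)
    tpr = tp / len(benchmark) if benchmark else 0.0
    fdr = (len(called) - tp) / len(called) if called else 0.0
    return tpr, fdr

# Hypothetical: 50 benchmark markers; the normalized non-randomized data
# call 60 markers, 40 of which are in the benchmark.
benchmark = {f"miR-{i}" for i in range(50)}
called = {f"miR-{i}" for i in range(40)} | {f"miR-x{i}" for i in range(20)}
tpr, fdr = tpr_and_fdr(called, benchmark)
```

    A call set like this one has a high true positive rate yet a substantial false discovery rate, which mirrors the abstract's finding that normalization recovered true positives while leaving many false positives behind.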

  6. MicroRNA Array Normalization: An Evaluation Using a Randomized Dataset as the Benchmark

    PubMed Central

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays. PMID:24905456

  7. Microstructure and Tensile/Corrosion Properties Relationships of Directionally Solidified Al-Cu-Ni Alloys

    NASA Astrophysics Data System (ADS)

    Rodrigues, Adilson V.; Lima, Thiago S.; Vida, Talita A.; Brito, Crystopher; Garcia, Amauri; Cheung, Noé

    2018-03-01

    Al-Cu-Ni alloys are of scientific and technological interest for high-strength/high-temperature applications, based on the reinforcement originating from the interaction between the Al-rich phase and intermetallic compounds (IMCs). The nature, morphology, size, volume fraction and dispersion of IMC particles throughout the Al-rich matrix are important factors determining the resulting mechanical and chemical properties. The present work evaluates the effect of adding 1 wt% Ni to Al-5wt%Cu and Al-15wt%Cu alloys on the solidification rate, macrosegregation and microstructure features, and the interrelations of these characteristics with tensile and corrosion properties. A directional solidification technique is used, permitting a wide range of microstructural scales to be examined. Experimental growth laws relating the primary and secondary dendritic spacings to growth rate and solidification cooling rate are proposed, and Hall-Petch type equations are derived relating the ultimate tensile strength and elongation to the primary dendritic spacing. Considering a compromise between ultimate tensile strength and corrosion resistance, it is shown that for samples from both alloy castings the more refined microstructures are associated with the highest values of both properties.
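    A Hall-Petch type relation of the kind derived above links strength to the inverse square root of the primary dendritic spacing, σ_u = σ₀ + k·λ₁^(−1/2). A sketch with hypothetical coefficients (the paper's fitted constants are not reproduced here):

```python
def hall_petch_uts(lambda1_um, sigma0_mpa=120.0, k_mpa_um05=450.0):
    """Ultimate tensile strength (MPa) as a function of primary dendritic
    spacing lambda1 (um), using hypothetical Hall-Petch coefficients
    sigma0 (friction stress) and k (strengthening coefficient)."""
    return sigma0_mpa + k_mpa_um05 * lambda1_um ** -0.5

# Finer microstructure (smaller spacing) -> higher predicted strength:
coarse = hall_petch_uts(400.0)  # MPa, coarse dendritic structure
fine = hall_petch_uts(100.0)    # MPa, refined dendritic structure
```

    The inverse-square-root form captures the trend reported above: samples with more refined microstructures (smaller λ₁) sit at the high-strength end of the fitted law.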

  8. Fertility and mortality in India during 1951-1971.

    PubMed

    Sinha, U P

    1976-03-01

    Statistical corrections found necessary to draw up a set of life tables for the International Institute for Population Studies are discussed. The 1971 Census of India enumerated 547,949,809 persons, 13 million short of the estimate by the Census Actuary. Although the fluid migration situation may have complicated the figures, the real problem seems to be overestimation of the drop in death rates and underestimation of the adoption of family planning. Also, the Census Actuary borrowed infant mortality rates observed by the National Sample Survey without adjusting for underestimation of vital events. This results in underestimation of childhood mortality. The smoothed age distributions of 1961 seem different from the ones of 1951. There is a 2% difference in the proportion of children below 5 years of age, a figure that does not seem possible due to mortality decline alone in the 10-year period. Population projections using the method derived have been very close to enumerated populations, and the vital rates are close to estimates of other authors and surveys. It is possible the Sample Registration Scheme did not make adjustments for events missed by the enumerator and, subsequently, by the supervisor.

  9. Classification based upon gene expression data: bias and precision of error rates.

    PubMed

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
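    The selection bias this abstract warns about can be demonstrated directly. The paper's pipeline uses the PAMR package and two-level external cross-validation; as a minimal illustration under stated assumptions, the sketch below uses a simple nearest-centroid classifier on simulated non-informative data, selecting the top features either from the full dataset (biased) or inside each leave-one-out fold (unbiased):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 40, 1000, 10                 # samples, features, features kept
X = rng.standard_normal((n, p))        # non-informative "expression" data
y = np.array([0, 1] * (n // 2))        # arbitrary class labels

def top_features(X, y, k):
    """Indices of the k features with the largest class-mean difference."""
    diff = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
    return np.argsort(diff)[-k:]

def loo_error(X, y, select_inside_cv):
    """Leave-one-out CV error of a nearest-centroid classifier, with
    feature selection done inside or outside the CV loop."""
    if not select_inside_cv:           # biased: selection sees all labels
        X = X[:, top_features(X, y, k)]
    errors = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        Xtr, ytr = X[mask], y[mask]
        if select_inside_cv:           # unbiased: selection per fold
            feats = top_features(Xtr, ytr, k)
            Xtr_s, x_test = Xtr[:, feats], X[i, feats]
        else:
            Xtr_s, x_test = Xtr, X[i]
        c0 = Xtr_s[ytr == 0].mean(axis=0)
        c1 = Xtr_s[ytr == 1].mean(axis=0)
        pred = int(np.linalg.norm(x_test - c1) < np.linalg.norm(x_test - c0))
        errors += pred != y[i]
    return errors / len(y)

biased = loo_error(X, y, select_inside_cv=False)
unbiased = loo_error(X, y, select_inside_cv=True)
```

    On data carrying no class signal at all, the biased estimate typically falls well below the chance-level error that fold-internal selection recovers, which is the downward bias the abstract proposes detecting via label permutations.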

  10. Detrital thermochronology of Rhine, Elbe and Meuse river sediment (Central Europe): implications for provenance, erosion and mineral fertility

    NASA Astrophysics Data System (ADS)

    Glotzbach, C.; Busschers, F. S.; Winsemann, J.

    2018-03-01

    Here we present detrital apatite fission track (AFT), zircon fission track (ZFT) and a few apatite (U-Th)/He (AHe) data from Middle Pleistocene to modern Rhine, Meuse and Elbe river sediments in order to resolve the processes that control detrital age distributions (provenance, erosion and mineral fertility). We used a modelling approach to compare observed age distributions with those predicted theoretically from an interpolated in situ AFT and ZFT age map. In situ cooling ages show large differences across the Rhine drainage basin, facilitating the differentiation between different source regions. Inconsistencies between observed and theoretical age distributions of the Meuse and Elbe samples can be explained by mixing and reworking of sediments with different provenances (Meuse Middle Pleistocene terrace sediment) and a yet unexplored source region with old AFT ages (Elbe samples). Overall, the results show that detrital thermochronology is capable of identifying the provenance of Middle Pleistocene to modern sediments. The AFT age distributions of Rhine sediments are dominated (∼70%) by AFT ages representing the Alps. A possible explanation is higher erosion rates in the Alps as compared to areas outside the Alps. A Late Pleistocene sample from the Upper Rhine Graben contains apatite grains from the Molasse and Hegau volcanics, which we explain by a shift of the headwaters of the Rhine to the north as a result of the intense Middle Pleistocene Riss glaciation. Contrary to the observed dominance of Alpine-derived AFT ages in Rhine sediments, the relative contribution of zircon ages with sources in the Alps is lower and decreases significantly downstream, suggesting a major source of zircons outside the Alps. This can be explained by the increased zircon fertility of sediments derived from the Rhenish massif. We therefore conclude that erosion and mineral fertility are the main processes controlling detrital AFT and ZFT age distributions of the sampled river sediment. In the case of the Rhine samples, AFT age distributions are mainly controlled by differences in erosion rates, whereas for the ZFT data this effect is completely balanced by differences in mineral fertility.

  11. Gram-negative and -positive bacteria differentiation in blood culture samples by headspace volatile compound analysis.

    PubMed

    Dolch, Michael E; Janitza, Silke; Boulesteix, Anne-Laure; Graßmann-Lichtenauer, Carola; Praun, Siegfried; Denzer, Wolfgang; Schelling, Gustav; Schubert, Sören

    2016-12-01

    Identification of microorganisms in positive blood cultures still relies on standard techniques such as Gram staining followed by culturing with definite microorganism identification. Alternatively, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry or analysis of the headspace volatile compound (VC) composition produced by cultures can help to differentiate between microorganisms under experimental conditions. This study assessed the efficacy of VC-based differentiation of microorganisms into Gram-negatives and -positives in unselected positive blood culture samples from patients. Headspace gas samples of positive blood cultures were transferred to sterilized, sealed, and evacuated 20 ml glass vials and stored at -30 °C until batch analysis. Headspace gas VC content was analysed via an autosampler connected to an ion-molecule reaction mass spectrometer (IMR-MS). Measurements covered a mass range from 16 to 135 u, including CO2, H2, N2, and O2. Prediction rules for microorganism identification based on VC composition were derived using a training data set and evaluated using a validation data set within a random split validation procedure. One hundred fifty-two aerobic samples growing 27 Gram-negatives, 106 Gram-positives, and 19 fungi and 130 anaerobic samples growing 37 Gram-negatives, 91 Gram-positives, and two fungi were analysed. In the anaerobic samples, ten discriminators were identified by the random forest method, allowing differentiation of bacteria into Gram-negative and -positive (error rate: 16.7% in the validation data set). For aerobic samples the error rate was no better than random. In anaerobic blood culture samples from patients, IMR-MS-based headspace VC composition analysis facilitates differentiation of bacteria into Gram-negative and -positive.

  12. An enzyme-linked immunosorbent assay for the determination of dioxins in contaminated sediment and soil samples

    PubMed Central

    Van Emon, Jeanette M.; Chuang, Jane C.; Lordo, Robert A.; Schrock, Mary E.; Nichkova, Mikaela; Gee, Shirley J.; Hammock, Bruce D.

    2010-01-01

    A 96-microwell enzyme-linked immunosorbent assay (ELISA) method was evaluated to determine PCDDs/PCDFs in sediment and soil samples from an EPA Superfund site. Samples were prepared and analyzed by both the ELISA and a gas chromatography/high resolution mass spectrometry (GC/HRMS) method. Comparable method precision, accuracy, and detection level (8 ng kg−1) were achieved by the ELISA method with respect to GC/HRMS. However, the extraction and cleanup method developed for the ELISA requires refinement for the soil type that yielded a waxy residue after sample processing. Four types of statistical analyses (Pearson correlation coefficient, paired t-test, nonparametric tests, and McNemar’s test of association) were performed to determine whether the two methods produced statistically different results. The log-transformed ELISA-derived 2,3,7,8-tetrachlorodibenzo-p-dioxin values and log-transformed GC/HRMS-derived TEQ values were significantly correlated (r = 0.79) at the 0.05 level. The median difference in values between ELISA and GC/HRMS was not significant at the 0.05 level. Low false negative and false positive rates (<10%) were observed for the ELISA when compared to the GC/HRMS at 1000 ng TEQ kg−1. The findings suggest that immunochemical technology could be a complementary monitoring tool for determining concentrations at the 1000 ng TEQ kg−1 action level for contaminated sediment and soil. The ELISA could also be used in an analytical triage approach to screen and rank samples prior to instrumental analysis. PMID:18313102
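
The log-scale correlation and action-level agreement checks described above can be sketched in a few lines. The paired values below are hypothetical illustrations, not data from the study; the workflow (log-transform, Pearson r, then false positive/negative counts against the 1000 ng TEQ kg−1 action level) is what the abstract describes.

```python
import math

# Hypothetical paired measurements (ng TEQ/kg); illustrative values only.
elisa = [12.0, 45.0, 150.0, 900.0, 2400.0, 80.0, 310.0, 5000.0]
gcms = [10.0, 60.0, 120.0, 1100.0, 2000.0, 95.0, 250.0, 6100.0]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Correlate on the log scale, as in the abstract.
r = pearson([math.log10(v) for v in elisa], [math.log10(v) for v in gcms])

# Classification agreement at the 1000 ng TEQ/kg action level:
# ELISA false positives/negatives relative to GC/HRMS.
ACTION = 1000.0
false_pos = sum(1 for e, g in zip(elisa, gcms) if e >= ACTION and g < ACTION)
false_neg = sum(1 for e, g in zip(elisa, gcms) if e < ACTION and g >= ACTION)
```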

  13. Response of human bone marrow-derived MSCs on triphasic Ca-P substrate with various HA/TCP ratio.

    PubMed

    Bajpai, Indu; Kim, Duk Yeon; Kyong-Jin, Jung; Song, In-Hwan; Kim, Sukyoung

    2017-01-01

    Calcium phosphates (Ca-P) are used commonly as artificial bone substitutes to control the biodegradation rate of an implant in the body fluid. This study examined the in vitro proliferation of human bone marrow-derived mesenchymal stem cells (hBMSCs) on triphasic Ca-P samples. To this end, hydroxyapatite (HA), dicalcium phosphate dihydrate (DCPD), and calcium hydroxide (Ca(OH)2) were mixed at various ratios, cold compacted, and sintered at 1250°C in air. X-ray diffraction showed that the β-tricalcium phosphate (TCP) to α-TCP phase transformation increased with increasing DCPD/HA ratio. The micro-hardness decreased with increasing TCP content, whereas the mean grain size and porosity increased with increasing TCP concentration. To evaluate the in vitro degree of adhesion and proliferation on the HA/TCP samples, human BMSCs were incubated on the HA/TCP samples and analyzed by a cell proliferation assay, expression of extracellular matrix (ECM) genes such as α-smooth muscle actin (α-SMA) and fibronectin (FN), and FITC-phalloidin fluorescent staining. In terms of the interactions of human BMSCs with the triphasic Ca-P samples, H50T50 (Ca/P = 1.59) markedly enhanced cell spreading, proliferation, FN, and α-SMA compared with H100T0 (Ca/P = 1.67). Interestingly, these results show that among the five HA/TCP samples, H50T50 is the optimal Ca-P composition for in vitro cell proliferation. © 2015 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 105B: 72-80, 2017.

  14. Using data from an encounter sampler to model fish dispersal

    USGS Publications Warehouse

    Obaza, A.; DeAngelis, D.L.; Trexler, J.C.

    2011-01-01

    A method to estimate speed of free-ranging fishes using a passive sampling device is described and illustrated with data from the Everglades, U.S.A. Catch per unit effort (CPUE) from minnow traps embedded in drift fences was treated as an encounter rate and used to estimate speed, when combined with an independent estimate of density obtained by use of throw traps that enclose 1 m² of marsh habitat. Underwater video was used to evaluate capture efficiency and species-specific bias of minnow traps and two sampling studies were used to estimate trap saturation and diel-movement patterns; these results were used to optimize sampling and derive correction factors to adjust species-specific encounter rates for bias and capture efficiency. Sailfin mollies Poecilia latipinna displayed a high frequency of escape from traps, whereas eastern mosquitofish Gambusia holbrooki were most likely to avoid a trap once they encountered it; dollar sunfish Lepomis marginatus were least likely to avoid the trap once they encountered it or to escape once they were captured. Length of sampling and time of day affected CPUE; fishes generally had a very low retention rate over a 24 h sample time and only the Everglades pygmy sunfish Elassoma evergladei were commonly captured at night. Dispersal speed of fishes in the Florida Everglades, U.S.A., was shown to vary seasonally and among species, ranging from 0.05 to 0.15 m s⁻¹ for small poeciliids and fundulids to 0.1 to 1.8 m s⁻¹ for L. marginatus. Speed was generally highest late in the wet season and lowest in the dry season, possibly tied to dispersal behaviours linked to finding and remaining in dry-season refuges. These speed estimates can be used to estimate the diffusive movement rate, which is commonly employed in spatial ecological models.
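
A minimal sketch of back-calculating speed from an encounter rate, assuming a simple linear encounter model E = ρ·v·w (encounter rate = density × speed × effective fence width). All numbers and the model itself are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical inputs; the correction for capture efficiency mirrors the
# abstract's use of bias/efficiency correction factors.
cpue = 3.0            # corrected captures per trap per hour
capture_eff = 0.5     # assumed fraction of encounters that end in capture
density = 10.0        # fish per square metre, from throw-trap samples
eff_width = 1.0       # assumed effective encounter width of the fence, m

encounters_per_hour = cpue / capture_eff          # adjust CPUE upward
# Invert E = rho * v * w to recover speed, then convert hours -> seconds.
speed_m_per_s = encounters_per_hour / (density * eff_width) / 3600.0
```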

  15. 75 FR 67794 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Order Granting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-03

    ... commodities or commodity futures, options on commodities, or other commodity derivatives or Commodity-Based... options or other derivatives on any of the foregoing; or (b) interest rate futures or options or... derivatives on any of the foregoing; or (b) interest rate futures or options or derivatives on the foregoing...

  16. Metabolically Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates (Final Report, 2009)

    EPA Science Inventory

    EPA announced the availability of the final report, Metabolically Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates. This report provides a revised approach for calculating an individual's ventilation rate directly from their oxygen c...

  17. Increased odds and predictive rates of MMPI-2-RF scale elevations in patients with psychogenic non-epileptic seizures and observed sex differences.

    PubMed

    Del Bene, Victor A; Arce Rentería, Miguel; Maiman, Moshe; Slugh, Mitch; Gazzola, Deana M; Nadkarni, Siddhartha S; Barr, William B

    2017-07-01

    The Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) is a self-report instrument, previously shown to differentiate patients with epileptic seizures (ES) and psychogenic non-epileptic seizures (PNES). At present, the odds of MMPI-2-RF scale elevations in PNES patients, as well as the diagnostic predictive value of such scale elevations, remain largely unexplored. This can be of clinical utility, particularly when a diagnosis is uncertain. After looking at mean group differences, we applied contingency table derived odds ratios to a sample of ES (n=92) and PNES (n=77) patients from a video EEG (vEEG) monitoring unit. We also looked at the positive and negative predictive values (PPV, NPV), as well as the false discovery rate (FDR) and false omission rate (FOR) for scales found to have increased odds of elevation in PNES patients. This was completed for the overall sample, as well as the sample stratified by sex. The odds of elevations related to somatic concerns, negative mood, and suicidal ideation were 2 to 5 times greater in the PNES sample. Female PNES patients had 3-6 times greater odds of such scale elevations, while male PNES patients had odds 5-15 times greater. PPV rates ranged from 53.66% to 84.62%, while NPV rates ranged from 47.52% to 90.91%. FDR across scales ranged from 15.38% to 50%, while the FOR ranged from 9.09% to 52.47%. Consistent with prior research, PNES patients have greater odds of MMPI-2-RF scale elevations, particularly related to somatic concerns and mood disturbance. Female PNES patients endorsed greater emotional distress, including endorsement of suicide related items. Elevations of these scales could aid in differentiating PNES from ES patients, although caution is warranted due to the possibility of both false positives and the incorrect omissions of PNES cases. Copyright © 2017 Elsevier Inc. All rights reserved.
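
The contingency-table statistics named in the abstract (odds ratio, PPV, NPV, FDR, FOR) follow directly from a 2×2 table. The counts below are hypothetical, not the study's data; only the formulas are standard.

```python
# Hypothetical 2x2 table for one MMPI-2-RF scale:
#
#                    PNES    ES
#   scale elevated    a=40   b=20
#   not elevated      c=37   d=72
a, b, c, d = 40, 20, 37, 72

odds_ratio = (a * d) / (b * c)   # odds of elevation in PNES vs. ES
ppv = a / (a + b)                # P(PNES | scale elevated)
npv = d / (c + d)                # P(ES | scale not elevated)
fdr = 1 - ppv                    # false discovery rate
false_omission = 1 - npv         # false omission rate (FOR)
```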

  18. MAJOR-MERGER GALAXY PAIRS IN THE COSMOS FIELD-MASS-DEPENDENT MERGER RATE EVOLUTION SINCE z = 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, C. Kevin; Zhao, Yinghe; Gao, Y.

    2012-03-10

    We present results of a statistical study of the cosmic evolution of the mass-dependent major-merger rate since z = 1. A stellar mass limited sample of close major-merger pairs (the CPAIR sample) was selected from the archive of the COSMOS survey. Pair fractions at different redshifts derived using the CPAIR sample and a local K-band-selected pair sample show no significant variations with stellar mass. The pair fraction exhibits moderately strong cosmic evolution, with the best-fitting function f_pair = 10^(-1.88±0.03) × (1 + z)^(2.2±0.2). The best-fitting function for the merger rate is R_mg (Gyr^-1) = 0.053 × (M_star/10^10.7 M_sun)^0.3 × (1 + z)^2.2/(1 + z/8). This rate implies that galaxies of M_star ≈ 10^10-10^11.5 M_sun have undergone ≈0.5-1.5 major mergers since z = 1. Our results show that, for massive galaxies (M_star ≥ 10^10.5 M_sun) at z ≤ 1, major mergers involving star-forming galaxies (i.e., wet and mixed mergers) can account for the formation of both ellipticals and red quiescent galaxies (RQGs). On the other hand, major mergers cannot be responsible for the formation of most low mass ellipticals and RQGs of M_star ≲ 10^10.3 M_sun. Our quantitative estimates indicate that major mergers have significant impact on the stellar mass assembly of the most massive galaxies (M_star ≥ 10^11.3 M_sun), but for less massive galaxies the stellar mass assembly is dominated by star formation. Comparison with the mass-dependent (ultra)luminous infrared galaxy ((U)LIRG) rates suggests that the frequency of major-merger events is comparable to or higher than that of (U)LIRGs.
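
The best-fitting relations quoted in the abstract are easy to evaluate directly; writing them as functions makes the redshift and mass scalings explicit. The central fit values (without the quoted uncertainties) are used here.

```python
def pair_fraction(z):
    """Best-fit pair fraction from the abstract: 10^-1.88 * (1+z)^2.2."""
    return 10 ** -1.88 * (1 + z) ** 2.2

def merger_rate(m_star, z):
    """Best-fit merger rate in Gyr^-1:
    0.053 * (M*/10^10.7 Msun)^0.3 * (1+z)^2.2 / (1 + z/8)."""
    return 0.053 * (m_star / 10 ** 10.7) ** 0.3 * (1 + z) ** 2.2 / (1 + z / 8)

# At z = 0 the pair fraction is 10^-1.88 (about 1.3%), and a galaxy at
# the pivot mass 10^10.7 Msun merges at 0.053 per Gyr.
f0 = pair_fraction(0.0)
r0 = merger_rate(10 ** 10.7, 0.0)
```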

  19. Mass loss from hot, luminous stars

    NASA Astrophysics Data System (ADS)

    Burnley, Adam Warwick

    A general enquiry into the physics of mass loss from hot, luminous stars is presented. Hα spectroscopy of 64 Galactic early-type stars has been obtained using the telescopes of the Isaac Newton Group (ING) and the Anglo-Australian Observatory (AAO). The sample was selected to include objects with published radio and/or mm fluxes. The Hα observations are quantitatively modelled using a modified version of the FORSOL code developed by Puls et al. (1996). FORSOL has been coupled with the PIKAIA subroutine (Charbonneau and Knapp, 1996) to create PHALTEE (Program for Hα Line Transfer with Eugenic Estimation), in order to search a specified parameter space for the 'best' (quasi-least-squares) model fit to the data, using a genetic algorithm. This renders Hα modelling both more objective and automated. Where possible, both mass-loss rates and velocity-field β-exponents are determined for the sample. New mm-wave observations of nineteen Galactic early-type stars, including a subset of the Hα sample, have been obtained using the Sub-millimetre Common User Bolometer Array (SCUBA). Where possible, mean fluxes are calculated, and these data are used with the results of a literature survey of mm and cm fluxes to determine mass-loss rates for a larger sample of 53 Galactic early-type stars. The incidence of nonthermal emission is examined, with 23% of the sample exhibiting strong evidence for nonthermal flux. The occurrence of binarity and excess X-ray emission amongst the nonthermal emitters is also investigated. For the subset of 36 stars common to both the Hα and mm/radio samples, the results permit a comparison of mass-loss rates derived using diagnostics that probe the wind conditions at different radial depths. A mean value of log(Ṁ_radio/Ṁ_Hα) = 0.02 +/- 0.05 is obtained for the thermal radio emitters. The wind-momentum-luminosity relationship (WLR) for the sample is also investigated.

  20. A combined parasitological molecular approach for noninvasive characterization of parasitic nematode communities in wild hosts.

    PubMed

    Budischak, Sarah A; Hoberg, Eric P; Abrams, Art; Jolles, Anna E; Ezenwa, Vanessa O

    2015-09-01

    Most hosts are concurrently or sequentially infected with multiple parasites; thus, fully understanding interactions between individual parasite species and their hosts depends on accurate characterization of the parasite community. For parasitic nematodes, noninvasive methods for obtaining quantitative, species-specific infection data in wildlife are often unreliable. Consequently, characterization of gastrointestinal nematode communities of wild hosts has largely relied on lethal sampling to isolate and enumerate adult worms directly from the tissues of dead hosts. The necessity of lethal sampling severely restricts the host species that can be studied, the adequacy of sample sizes to assess diversity, the geographic scope of collections and the research questions that can be addressed. Focusing on gastrointestinal nematodes of wild African buffalo, we evaluated whether accurate characterization of nematode communities could be made using a noninvasive technique that combined conventional parasitological approaches with molecular barcoding. To establish the reliability of this new method, we compared estimates of gastrointestinal nematode abundance, prevalence, richness and community composition derived from lethal sampling with estimates derived from our noninvasive approach. Our noninvasive technique accurately estimated total and species-specific worm abundances, as well as worm prevalence and community composition when compared to the lethal sampling method. Importantly, the rate of parasite species discovery was similar for both methods, and only a modest number of barcoded larvae (n = 10) were needed to capture key aspects of parasite community composition. Overall, this new noninvasive strategy offers numerous advantages over lethal sampling methods for studying nematode-host interactions in wildlife and can readily be applied to a range of study systems. © 2015 John Wiley & Sons Ltd.

  1. Identification and analysis of damaged or porous hair.

    PubMed

    Hill, Virginia; Loni, Elvan; Cairns, Thomas; Sommer, Jonathan; Schaffer, Michael

    2014-06-01

    Cosmetic hair treatments have been referred to as 'the pitfall' of hair analysis. However, most cosmetic treatments, when applied to the hair as instructed by the product vendors, do not interfere with analysis, provided such treatments can be identified by the laboratory and the samples analyzed and reported appropriately for the condition of the hair. This paper provides methods for identifying damaged or porous hair samples using digestion rates of hair in dithiothreitol with and without proteinase K, as well as a protein measurement method applied to dithiothreitol-digested samples. Extremely damaged samples may be unsuitable for analysis. Aggressive and extended aqueous washing of hair samples is a proven method for removing or identifying externally derived drug contamination of hair. In addition to this wash procedure, we have developed an alternative wash procedure using 90% ethanol for washing damaged or porous hair. The procedure, like the aqueous wash procedure, requires analysis of the last of five washes to evaluate the effectiveness of the washing procedure. This evaluation, termed the Wash Criterion, is derived from studies of the kinetics of washing of hair samples that have been experimentally contaminated and of hair from drug users. To study decontamination methods, in vitro contaminated drug-negative hair samples were washed by both the aqueous buffer method and a 90% ethanol method. Analysis of cocaine and methamphetamine was by liquid chromatography-tandem mass spectrometry (LC/MS/MS). Porous hair samples from drug users, when washed in 90% ethanol, pass the wash criterion although they may fail the aqueous wash criterion. Those samples that fail both the ethanolic and aqueous wash criterion are not reported as positive for ingestion. Similar ratios of the metabolite amphetamine relative to methamphetamine in the last wash and the hair is an additional criterion for assessing contamination vs. ingestion of methamphetamine. 
Copyright © 2014 John Wiley & Sons, Ltd.

  2. Underway Sampling of Marine Inherent Optical Properties on the Tara Oceans Expedition as a Novel Resource for Ocean Color Satellite Data Product Validation

    NASA Technical Reports Server (NTRS)

    Werdell, P. Jeremy; Proctor, Christopher W.; Boss, Emmanuel; Leeuw, Thomas; Ouhssain, Mustapha

    2013-01-01

    Developing and validating data records from operational ocean color satellite instruments requires substantial volumes of high quality in situ data. In the absence of broad, institutionally supported field programs, organizations such as the NASA Ocean Biology Processing Group seek opportunistic datasets for use in their operational satellite calibration and validation activities. The publicly available, global biogeochemical dataset collected as part of the two-and-a-half-year Tara Oceans expedition provides one such opportunity. We showed how the inline measurements of hyperspectral absorption and attenuation coefficients collected onboard the R/V Tara can be used to evaluate near-surface estimates of chlorophyll-a, spectral particulate backscattering coefficients, particulate organic carbon, and particle size classes derived from the NASA Moderate Resolution Imaging Spectroradiometer onboard Aqua (MODISA). The predominant strength of such flow-through measurements is their sampling rate: the 375 days of measurements resulted in 165 viable MODISA-to-in situ match-ups, compared to 13 from discrete water sampling. While the need to apply bio-optical models to estimate biogeochemical quantities of interest from spectroscopy remains a weakness, we demonstrated how discrete samples can be used in combination with flow-through measurements to create data records of sufficient quality to conduct first order evaluations of satellite-derived data products. Given an emerging agency desire to rapidly evaluate new satellite missions, our results have significant implications for how calibration and validation teams for these missions will be constructed.

  3. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  4. Workforce perceptions of hospital safety culture: development and validation of the patient safety climate in healthcare organizations survey.

    PubMed

    Singer, Sara; Meterko, Mark; Baker, Laurence; Gaba, David; Falwell, Alyson; Rosen, Amy

    2007-10-01

    To describe the development of an instrument for assessing workforce perceptions of hospital safety culture and to assess its reliability and validity. Primary data collected between March 2004 and May 2005. Personnel from 105 U.S. hospitals completed a 38-item paper and pencil survey. We received 21,496 completed questionnaires, representing a 51 percent response rate. Based on review of existing safety climate surveys, we developed a list of key topics pertinent to maintaining a culture of safety in high-reliability organizations. We developed a draft questionnaire to address these topics and pilot tested it in four preliminary studies of hospital personnel. We modified the questionnaire based on experience and respondent feedback, and distributed the revised version to 42,249 hospital workers. We randomly divided respondents into derivation and validation samples. We applied exploratory factor analysis to responses in the derivation sample. We used those results to create scales in the validation sample, which we subjected to multitrait analysis (MTA). We identified nine constructs, three organizational factors, two unit factors, three individual factors, and one additional factor. Constructs demonstrated substantial convergent and discriminant validity in the MTA. Cronbach's alpha coefficients ranged from 0.50 to 0.89. It is possible to measure key salient features of hospital safety climate using a valid and reliable 38-item survey and appropriate hospital sample sizes. This instrument may be used in further studies to better understand the impact of safety climate on patient safety outcomes.
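
The scale-reliability statistic reported above (Cronbach's alpha) has a compact closed form: α = k/(k−1)·(1 − Σ var_item / var_total). A minimal sketch with made-up item scores, not the survey's data:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of per-item score lists,
    all covering the same respondents in the same order."""
    k = len(items)
    item_var_sum = sum(statistics.pvariance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    return k / (k - 1) * (1 - item_var_sum / statistics.pvariance(totals))

# Three perfectly redundant items give alpha = 1.
resp = [1, 2, 3, 4, 5, 4, 3, 2]
alpha_perfect = cronbach_alpha([resp, resp, resp])

# Two partially consistent items give an intermediate alpha (here 0.75).
alpha_partial = cronbach_alpha([[1, 2, 3, 4], [2, 1, 4, 3]])
```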

  5. In situ gamma-spectrometry several years after deposition of radiocesium. II. Peak-to-valley method.

    PubMed

    Gering, F; Hillmann, U; Jacob, P; Fehrenbacher, G

    1998-12-01

    A new method is introduced for deriving radiocesium soil contaminations and kerma rates in air from in situ gamma-ray spectrometric measurements. The approach makes use of additional information about gamma-ray attenuation given by the peak-to-valley ratio, which is the ratio of the count rates for primary and forward scattered photons. In situ measurements are evaluated by comparing the experimental data with the results of Monte Carlo simulations of photon transport and detector response. The influence of photons emitted by natural radionuclides on the calculation of the peak-to-valley ratio is carefully analysed. The new method has been applied to several post-Chernobyl measurements and the results agreed well with those of soil sampling.

  6. The differential path phase comparison method for determining pressure derivatives of elastic constants of solids

    NASA Astrophysics Data System (ADS)

    Peselnick, L.

    1982-08-01

    An ultrasonic method is presented which combines features of the differential path and the phase comparison methods. The proposed differential path phase comparison method, referred to as the `hybrid' method for brevity, eliminates errors resulting from phase changes in the bond between the sample and buffer rod. Define r(P) [and R(P)] as the square of the normalized frequency for cancellation of sample waves for shear [and for compressional] waves. Define N as the number of wavelengths in twice the sample length. The pressure derivatives r'(P) and R' (P) for samples of Alcoa 2024-T4 aluminum were obtained by using the phase comparison and the hybrid methods. The values of the pressure derivatives obtained by using the phase comparison method show variations by as much as 40% for small values of N (N < 50). The pressure derivatives as determined from the hybrid method are reproducible to within ±2% independent of N. The values of the pressure derivatives determined by the phase comparison method for large N are the same as those determined by the hybrid method. Advantages of the hybrid method are (1) no pressure dependent phase shift at the buffer-sample interface, (2) elimination of deviatoric stress in the sample portion of the sample assembly with application of hydrostatic pressure, and (3) operation at lower ultrasonic frequencies (for comparable sample lengths), which eliminates detrimental high frequency ultrasonic problems. A reduction of the uncertainties of the pressure derivatives of single crystals and of low porosity polycrystals permits extrapolation of such experimental data to deeper mantle depths.

  7. Fast Extraction and Detection of 4-Methylimidazole in Soy Sauce Using Magnetic Molecularly Imprinted Polymer by HPLC.

    PubMed

    Feng, Zufei; Lu, Yan; Zhao, Yingjuan; Ye, Helin

    2017-11-02

    On the basis of magnetic molecularly imprinted polymer (MMIP) solid-phase extraction coupled with high performance liquid chromatography, we established a new method for the determination of 4-methylimidazole (4-MEI) in soy sauce. Scanning electron microscopy (SEM), Fourier transform infrared (FT-IR) spectroscopy, X-ray diffraction (XRD) and vibrating sample magnetometry (VSM) were used to characterize the synthesized MMIPs. To evaluate the polymers, batch rebinding experiments were carried out. The binding strength and capacity were determined from the derived Freundlich isotherm (FI) equation. The selective recognition capability of the MMIPs was investigated with a reference compound and a structurally similar compound. As selective pre-concentration sorbents for 4-methylimidazole in soy sauce, the MMIPs showed satisfactory recoveries for spiked samples, ranging from 97% to 105%. As a result, the prepared MMIPs could be applied to selectively pre-concentrate and determine 4-methylimidazole in soy sauce samples.
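
The Freundlich isotherm mentioned above, q = Kf·C^(1/n), becomes linear in log-log space (log q = log Kf + (1/n)·log C), so its parameters can be fit by ordinary least squares. A sketch with synthetic rebinding data (Kf and n values are assumed, not the paper's):

```python
import math

def fit_freundlich(conc, q):
    """Fit q = Kf * C^(1/n) by least squares in log-log space.
    Returns (Kf, n)."""
    xs = [math.log(c) for c in conc]
    ys = [math.log(v) for v in q]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), 1.0 / slope

# Synthetic data generated with Kf = 2.5 and n = 2, i.e. q = 2.5 * C^0.5.
conc = [0.5, 1.0, 2.0, 4.0, 8.0]
q = [2.5 * c ** 0.5 for c in conc]
kf, n = fit_freundlich(conc, q)   # recovers the generating parameters
```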

  8. Personality in cyberspace: personal Web sites as media for personality expressions and impressions.

    PubMed

    Marcus, Bernd; Machilek, Franz; Schütz, Astrid

    2006-06-01

    This research examined the personality of owners of personal Web sites based on self-reports, visitors' ratings, and the content of the Web sites. The authors compared a large sample of Web site owners with population-wide samples on the Big Five dimensions of personality. Controlling for demographic differences, the average Web site owner reported being slightly less extraverted and more open to experience. Compared with various other samples, Web site owners did not generally differ on narcissism, self-monitoring, or self-esteem, but gender differences on these traits were often smaller in Web site owners. Self-other agreement was highest with Openness to Experience, but valid judgments of all Big Five dimensions were derived from Web sites providing rich information. Visitors made use of quantifiable features of the Web site to infer personality, and the cues they utilized partly corresponded to self-reported traits. Copyright 2006 APA, all rights reserved.

  9. Sampling errors in the measurement of rain and hail parameters

    NASA Technical Reports Server (NTRS)

    Gertzman, H. S.; Atlas, D.

    1977-01-01

    Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = c·D^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
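
For an exponential size distribution the FSD of X = Σ c·D_i^n over a Poisson-distributed particle count has a closed form: with D ~ Exp(1) and mean count N, E[D^m] = m!, so FSD = sqrt((2n)!/(n!)²)/sqrt(N) (the constant c cancels). A Monte Carlo sketch checking this, with illustrative parameters rather than any from the paper:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(42)
n_exp, mean_count, trials = 3, 100, 5000   # X ~ sum of D^3 (e.g. water content)

xs = []
for _ in range(trials):
    count = poisson(mean_count, rng)
    xs.append(sum(rng.expovariate(1.0) ** n_exp for _ in range(count)))

mean_x = sum(xs) / trials
var_x = sum((x - mean_x) ** 2 for x in xs) / trials
fsd_mc = math.sqrt(var_x) / mean_x          # Monte Carlo estimate

fsd_theory = math.sqrt(
    math.factorial(2 * n_exp) / math.factorial(n_exp) ** 2 / mean_count)
```

For n = 3 and N = 100 the theoretical FSD is sqrt(20/100) ≈ 0.447; the Monte Carlo estimate lands within a few percent of it.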

  10. Influence of the Sampling Rate and Noise Characteristics on Prediction of the Maximal Safe Laser Exposure in Human Skin Using Pulsed Photothermal Radiometry

    NASA Astrophysics Data System (ADS)

    Vidovič, L.; Milanič, M.; Majaron, B.

    2013-09-01

    Pulsed photothermal radiometry (PPTR) allows for noninvasive determination of the laser-induced temperature depth profile in strongly scattering samples, including human skin. In a recent experimental study, we have demonstrated that such information can be used to derive rather accurate predictions of the maximal safe radiant exposure on an individual patient basis. This has important implications for efficacy and safety of several laser applications in dermatology and aesthetic surgery, which are often compromised by risk of adverse side effects (e.g., scarring and dyspigmentation) resulting from nonselective absorption of strong laser light in epidermal melanin. In this study, the differences between the individual maximal safe radiant exposure values as predicted from PPTR temperature depth profiling performed using a commercial mid-IR thermal camera (as used to acquire the original patient data) and our customized PPTR setup are analyzed. To this end, the latter has been used to acquire 17 PPTR records from three healthy volunteers, using 1 ms laser irradiation at 532 nm and a signal sampling rate of 20 000 samples per second. The laser-induced temperature profiles are reconstructed first from the intact PPTR signals, and then by binning the data to imitate the lower sampling rate of the IR camera (1000 fps). Using either the initial temperature profile in a dedicated numerical model of heat transfer or protein denaturation dynamics, the predicted levels of epidermal thermal damage and the corresponding maximal safe radiant exposures are compared. A similar analysis is performed also with regard to the differences between noise characteristics of the two PPTR setups.
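
The binning step described above, averaging consecutive samples to imitate a lower sampling rate (20 000 samples per second down to 1000 fps is a bin factor of 20), can be sketched directly. The signal values here are synthetic stand-ins.

```python
def bin_signal(signal, factor):
    """Average each run of `factor` consecutive samples; trailing samples
    that do not fill a whole bin are dropped."""
    usable = len(signal) - len(signal) % factor
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, usable, factor)]

raw = [float(i) for i in range(100)]   # stand-in for 100 high-rate samples
binned = bin_signal(raw, 20)           # 5 samples at the camera's rate
```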

  11. Super-Eddington accreting massive black holes as long-lived cosmological standards.

    PubMed

    Wang, Jian-Min; Du, Pu; Valls-Gabaud, David; Hu, Chen; Netzer, Hagai

    2013-02-22

    Super-Eddington accreting massive black holes (SEAMBHs) reach saturated luminosities above a certain accretion rate due to photon trapping and advection in slim accretion disks. We show that these SEAMBHs could provide a new tool for estimating cosmological distances if they are properly identified by hard x-ray observations, in particular by the slope of their 2-10 keV continuum. To verify this idea we obtained black hole mass estimates and x-ray data for a sample of 60 narrow line Seyfert 1 galaxies that we consider to be the most promising SEAMBH candidates. We demonstrate that the distances derived by the new method for the objects in the sample get closer to the standard luminosity distances as the hard x-ray continuum gets steeper. The results allow us to analyze the requirements for using the method in future samples of active black holes and to demonstrate that the expected uncertainty, given large enough samples, can make them into a useful, new cosmological ruler.

  12. Global variations in abyssal peridotite compositions

    NASA Astrophysics Data System (ADS)

    Warren, Jessica M.

    2016-04-01

    Abyssal peridotites are ultramafic rocks collected from mid-ocean ridges that are the residues of adiabatic decompression melting. Their compositions provide information on the degree of melting and melt-rock interaction involved in the formation of oceanic lithosphere, as well as providing constraints on pre-existing mantle heterogeneities. This review presents a compilation of abyssal peridotite geochemical data (modes, mineral major elements, and clinopyroxene trace elements) for > 1200 samples from 53 localities on 6 major ridge systems. On the basis of composition and petrography, peridotites are classified into one of five lithological groups: (1) residual peridotite, (2) dunite, (3) gabbro-veined and/or plagioclase-bearing peridotite, (4) pyroxenite-veined peridotite, and (5) other types of melt-added peridotite. Almost a third of abyssal peridotites are veined, indicating that the oceanic lithospheric mantle is more fertile, on average, than estimates based on residual peridotites alone imply. All veins appear to have formed recently during melt transport beneath the ridge, though some pyroxenites may be derived from melting of recycled oceanic crust. A limited number of samples are available at intermediate and fast spreading rates, with samples from the East Pacific Rise indicating high degrees of melting. At slow and ultra-slow spreading rates, residual abyssal peridotites define a large (0-15% modal clinopyroxene and spinel Cr# = 0.1-0.6) compositional range. These variations do not match the prediction for how degree of melting should vary as a function of spreading rate. Instead, the compositional ranges of residual peridotites are derived from a combination of melting, melt-rock interaction and pre-existing compositional variability, where melt-rock interaction is used here as a general term to refer to the wide range of processes that can occur during melt transport in the mantle. 
Globally, 10% of abyssal peridotites are refractory (0% clinopyroxene, spinel Cr# > 0.5, bulk Al2O3 < 1 wt.%) and some ridge sections are dominated by harzburgites while lacking a significant basaltic crust. Abyssal ultramafic samples thus indicate that the mantle is multi-component, probably consisting of at least three components (lherzolite, harzburgite, and pyroxenite). Overall, the large compositional range among residual and melt-added peridotites implies that the oceanic lithospheric mantle is heterogeneous, which will lead to the generation of further heterogeneities upon subduction back into the mantle.

  13. Convective Heat Transfer Scaling of Ignition Delay and Burning Rate with Heat Flux and Stretch Rate in the Equivalent Low Stretch Apparatus

    NASA Technical Reports Server (NTRS)

    Olson, Sandra

    2011-01-01

    To better evaluate the buoyant contributions to the convective cooling (or heating) inherent in normal-gravity material flammability test methods, we derive a convective heat transfer correlation that can be used to account for the forced convective stretch effects on the net radiant heat flux for both ignition delay time and burning rate. The Equivalent Low Stretch Apparatus (ELSA) uses an inverted cone heater to minimize buoyant effects while at the same time providing a forced stagnation flow on the sample, which ignites and burns as a ceiling fire. Ignition delay and burning rate data is correlated with incident heat flux and convective heat transfer and compared to results from other test methods and fuel geometries using similarity to determine the equivalent stretch rates and thus convective cooling (or heating) rates for those geometries. With this correlation methodology, buoyant effects inherent in normal gravity material flammability test methods can be estimated, to better apply the test results to low stretch environments relevant to spacecraft material selection.

  14. Estimating sedimentation rates and sources in a partially urbanized catchment using caesium-137

    NASA Astrophysics Data System (ADS)

    Ormerod, L. M.

    1998-06-01

    While there has been increased interest in determining sedimentation rates and sources in agricultural and forested catchments in recent years, there have been few studies dealing with urbanized catchments. A study of sedimentation rates and sources within channel and floodplain deposits of a partially urbanized catchment has been undertaken using the 137Cs technique. Results for sedimentation rates showed no particular downstream pattern. This may be partially explained by underestimation of sedimentation rates at some sites by failure to sample the full 137Cs profile, floodplain erosion and deliberate removal of sediment. Evidence of lateral increases in net sedimentation rates with distance from the channel may be explained by increased floodplain erosion at sites closer to the channel and floodplain formation by lateral deposition. Potential sediment sources for the catchment were considered to be forest topsoil, subsurface material and sediments derived from urban areas, which were found to be predominantly subsurface material. Tracing techniques showed an increase in subsurface material for downstream sites, confirming expectations that subsurface material would increase in the downstream direction in response to the direct and indirect effects of urbanization.

  15. Speaking rate effects on locus equation slope.

    PubMed

    Berry, Jeff; Weismer, Gary

    2013-11-01

    A locus equation describes a 1st order regression fit to a scatter of vowel steady-state frequency values predicting vowel onset frequency values. Locus equation coefficients are often interpreted as indices of coarticulation. Speaking rate variations with a constant consonant-vowel form are thought to induce changes in the degree of coarticulation. In the current work, the hypothesis that locus slope is a transparent index of coarticulation is examined through the analysis of acoustic samples of large-scale, nearly continuous variations in speaking rate. Following the methodological conventions for locus equation derivation, data pooled across ten vowels yield locus equation slopes that are mostly consistent with the hypothesis that locus equations vary systematically with coarticulation. Comparable analyses between different four-vowel pools reveal variations in the locus slope range and changes in locus slope sensitivity to rate change. Analyses across rate but within vowels are substantially less consistent with the locus hypothesis. Taken together, these findings suggest that the practice of vowel pooling exerts a non-negligible influence on locus outcomes. Results are discussed within the context of articulatory accounts of locus equations and the effects of speaking rate change.
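
    A locus equation of the kind described above is a simple 1st-order regression. The sketch below fits one to synthetic formant data; the slope, intercept, and implied 1800 Hz locus are hypothetical values for illustration only.

```python
import numpy as np

def locus_equation(f2_vowel, f2_onset):
    """Fit F2_onset = slope * F2_vowel + intercept (a 1st-order
    regression); the slope is often read as a coarticulation index."""
    slope, intercept = np.polyfit(f2_vowel, f2_onset, 1)
    return slope, intercept

# Synthetic illustration: onsets drawn halfway toward an 1800 Hz locus
# (locus = intercept / (1 - slope) = 900 / 0.5 = 1800 Hz)
vowel = np.array([900.0, 1200.0, 1500.0, 2100.0, 2400.0])
onset = 0.5 * vowel + 900.0
s, b = locus_equation(vowel, onset)
print(round(s, 3), round(b, 1))  # 0.5 900.0
```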

  16. Validation of satellite-retrieved MBL cloud properties using DOE ARM AMF measurements at the Azores

    NASA Astrophysics Data System (ADS)

    Xi, B.; Dong, X.; Minnis, P.; Sun-Mack, S.

    2013-05-01

    Marine Boundary Layer (MBL) cloud properties derived for the Clouds and the Earth's Radiant Energy System (CERES) Project using Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) data are compared with observations taken at the Atmospheric Radiation Measurement (ARM) AMF AZORES site from June 2009 through December 2010. Retrievals from ARM surface-based data were averaged over a 1-hour interval centered at the time of each satellite overpass, and the CERES-MODIS Ed4 cloud properties were averaged within a 30-km x 30-km box centered on the ARM AZORES site. Two datasets were analyzed: all of the single-layered unbroken decks (SL) and those cases without temperature inversions. The CERES-MODIS cloud top/base heights were determined from cloud top/base temperature using a lapse rate method normalized to the 24-h mean surface air temperature. The preliminary results show that, for all daytime SL MBL cases, the CERES-MODIS cloud tops and bases are, on average, 0.148 km and 0.087 km higher than the ARM radar-lidar observed cloud top and base, respectively; at nighttime, the differences are 0.446 km (cloud top) and 0.334 km (cloud base). For the cases without temperature inversions, the comparisons are close to their SL counterparts. For cloud temperatures, the MODIS-derived cloud-top and cloud-base temperatures are 1.6 K lower and 0.4 K higher than the surface values, with correlations of 0.92 during daytime; at nighttime, the differences are slightly larger and the correlations lower. Variations in the height difference are mainly caused by uncertainties in the surface air temperatures and lapse rates. Based on a total of 61 daytime and 87 nighttime samples (all SL cases), temperature inversion layers occur in about 72% of daytime and 83% of nighttime cases. The difference between the surface-observed and satellite-derived lapse rates can reach 1.6 K/km during daytime and 3.3 K/km at nighttime. From these lapse rates, we can further analyze the surface air temperature differences used to calculate them, which are ~3 K between the surface-observed and satellite-derived values during daytime and 5.1 K during nighttime. Further study of the causes of the temperature inversions may help improve satellite cloud height retrievals. The preliminary comparisons of MBL microphysical properties show that the average CERES-MODIS-derived cloud-droplet effective radius is only 1.5 μm larger than the ARM retrieval (13.2 μm), and the LWP values are also very close to each other (112 vs. 124 g m-2), with a relatively large difference in optical depth (10.6 vs. 14.4).
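
    The lapse-rate method above converts a retrieved cloud temperature into a height via the temperature deficit relative to the surface. A minimal sketch, with illustrative numbers rather than values from the study:

```python
def cloud_height_km(t_surface_k, t_cloud_k, lapse_rate_k_per_km):
    """Estimate cloud height from the cloud temperature deficit relative
    to the surface air temperature, assuming a constant lapse rate."""
    return (t_surface_k - t_cloud_k) / lapse_rate_k_per_km

# Illustrative only: 288 K surface, 279 K cloud top, 7 K/km lapse rate
h = cloud_height_km(288.0, 279.0, 7.0)
print(round(h, 2))  # 1.29 km
```

The sensitivity discussed in the abstract follows directly: a ~3 K error in the surface temperature or a ~1.6 K/km error in the lapse rate shifts the derived height by several hundred meters.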

  17. The Interior Angular Momentum of Core Hydrogen Burning Stars from Gravity-mode Oscillations

    NASA Astrophysics Data System (ADS)

    Aerts, C.; Van Reeth, T.; Tkachenko, A.

    2017-09-01

    A major uncertainty in the theory of stellar evolution is the angular momentum distribution inside stars and its change during stellar life. We compose a sample of 67 stars in the core hydrogen burning phase with a log g value from high-resolution spectroscopy, as well as an asteroseismic estimate of the near-core rotation rate derived from gravity-mode oscillations detected in space photometry. This assembly includes 8 B-type stars and 59 AF-type stars, covering a mass range from 1.4 to 5 M⊙, i.e., it concerns intermediate-mass stars born with a well-developed convective core. The sample covers projected surface rotation velocities v sin i ∈ [9, 242] km s-1 and core rotation rates up to 26 μHz, which corresponds to 50% of the critical rotation frequency. We find deviations from rigid rotation to be moderate in the single stars of this sample. We place the near-core rotation rates in an evolutionary context and find that the core rotation must drop drastically before or during the short phase between the end of core hydrogen burning and the onset of core helium burning. We compute the spin parameter, which is the ratio of twice the rotation rate to the mode frequency (also known as the inverse Rossby number), for 1682 gravity modes and find the majority (95%) to occur in the sub-inertial regime. The 10 stars with Rossby modes have spin parameters between 14 and 30, while the gravito-inertial modes cover the range from 1 to 15.
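
    The spin parameter defined above (twice the rotation rate divided by the mode frequency, i.e., the inverse Rossby number) is a one-line computation; the frequencies below are illustrative only, not values from the sample.

```python
def spin_parameter(f_rot_uhz, f_mode_uhz):
    """Spin parameter s = 2 * rotation rate / mode frequency
    (the inverse Rossby number); s > 1 marks the sub-inertial regime."""
    return 2.0 * f_rot_uhz / f_mode_uhz

# Illustrative: a 10 uHz near-core rotation rate and a 20 uHz g mode
print(spin_parameter(10.0, 20.0))  # 1.0 (boundary of the sub-inertial regime)
```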

  18. Inferring the expression variability of human transposable element-derived exons by linear model analysis of deep RNA sequencing data.

    PubMed

    Zhang, Wensheng; Edwards, Andrea; Fan, Wei; Fang, Zhide; Deininger, Prescott; Zhang, Kun

    2013-08-28

    The exonization of transposable elements (TEs) has proven to be a significant mechanism for the creation of novel exons. Existing knowledge of the retention patterns of TE exons in mRNAs was mainly established by the analysis of Expressed Sequence Tag (EST) data and microarray data. This study seeks to validate and extend previous studies on the expression of TE exons by an integrative statistical analysis of high-throughput RNA sequencing data. We collected 26 RNA-seq datasets spanning multiple tissues and cancer types. The exon-level digital expressions (indicating retention rates in mRNAs) were quantified by a double normalized measure, called the rescaled RPKM (Reads Per Kilobase of exon model per Million mapped reads). We analyzed the distribution profiles and the variability (across samples and between tissue/disease groups) of TE exon expressions, and compared them with those of other constitutive or cassette exons. We inferred the effects of four genomic factors, including the location, length, cognate TE family and TE nucleotide proportion (RTE, see Methods section) of a TE exon, on the exons' expression level and expression variability. We also investigated the biological implications of an assembly of highly-expressed TE exons. Our analysis confirmed prior studies from the following four aspects. First, with relatively high expression variability, most TE exons in mRNAs, especially those without exact counterparts in the UCSC RefSeq (Reference Sequence) gene tables, demonstrate low but still detectable expression levels in most tissue samples. Second, the TE exons in coding DNA sequences (CDSs) are less highly expressed than those in 3' (5') untranslated regions (UTRs). Third, the exons derived from chronologically ancient repeat elements, such as MIRs, tend to be highly expressed in comparison with those derived from younger TEs. 
Fourth, the previously observed negative relationship between the lengths of exons and the inclusion levels in transcripts is also true for exonized TEs. Furthermore, our study resulted in several novel findings. They include: (1) for the TE exons with non-zero expression and as shown in most of the studied biological samples, a high TE nucleotide proportion leads to their lower retention rates in mRNAs; (2) the considered genomic features (i.e. a continuous variable such as the exon length or a category indicator such as 3'UTR) influence the expression level and the expression variability (CV) of TE exons in an inverse manner; (3) not only the exons derived from Alu elements but also the exons from the TEs of other families were preferentially established in zinc finger (ZNF) genes.

  19. Decay resistance of wood treated with boric acid and tall oil derivates.

    PubMed

    Temiz, Ali; Alfredsen, Gry; Eikenes, Morten; Terziev, Nasko

    2008-05-01

    In this study, the effect of two boric acid concentrations (1% and 2%) and four derivates of tall oil with varying chemical composition were tested separately and in combination. The tall oil derivates were chosen such that they consist of different amounts of free fatty acids, resin acids and neutral compounds. Decay tests using two brown rot fungi (Postia placenta and Coniophora puteana) were performed on both unleached and leached test samples. Boric acid showed a low weight loss in test samples when exposed to fungal decay before leaching, but no effect after leaching. The tall oil derivates gave better efficacy against decay fungi compared to control, but are not within the range of the efficacy needed for a wood preservative. Double impregnation with boric acid and tall oil derivates gave synergistic effects for several of the double treatments, in both unleached and leached samples. In the unleached samples, the double treatment gave a better efficacy against decay fungi than tall oil alone. In leached samples, a better efficacy against brown rot fungi was achieved than in samples with boron alone, and a nearly similar or better efficacy than for tall oil alone. Boric acid at 2% concentration combined with the tall oil derivate consisting of 90% free resin acids (TO-III) showed the best performance against the two decay fungi, with a weight loss less than 3% after a modified pure culture test.

  20. The effect of elevated methane pressure on methane hydrate dissociation

    USGS Publications Warehouse

    Circone, S.; Stern, L.A.; Kirby, S.H.

    2004-01-01

    Methane hydrate, equilibrated at P, T conditions within the hydrate stability field, was rapidly depressurized to 1.0 or 2.0 MPa and maintained at isobaric conditions outside its stability field, while the extent and rate of hydrate dissociation was measured at fixed, externally maintained temperatures between 250 and 288 K. The dissociation rate decreases with increasing pressure at a given temperature. Dissociation rates at 1.0 MPa parallel the complex, reproducible T-dependence previously observed between 250 and 272 K at 0.1 MPa. The lowest rates were observed near 268 K, such that >50% of the sample can persist for more than two weeks at 0.1 MPa to more than a month at 1 and 2 MPa. Varying the pressure stepwise in a single experiment increased or decreased the dissociation rate in proportion to the rates observed in the isobaric experiments, similar to the rate reversibility previously observed with stepwise changes in temperature at 0.1 MPa. At fixed P, T conditions, the rate of methane hydrate dissociation decreases monotonically with time, never achieving a steady rate. The relationship between time (t) and the extent of hydrate dissociation is empirically described by: Evolved gas (%) = A·t^B (Equation 1), where the pre-exponential term A ranges from 0 to 16% s^-B and the exponent B is generally <1. Based on fits of the dissociation results to Equation 1 for the full range of temperatures (204 to 289 K) and pressures (0.1 to 2.0 MPa) investigated, the derived parameters can be used to predict the methane evolution curves for pure, porous methane hydrate to within ±5%. The effects of sample porosity and the presence of quartz sand and seawater on methane hydrate dissociation are also described using Equation 1.
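
    The empirical fit described above, evolved gas (%) = A·t^B, can be evaluated directly. The A and B values below are illustrative picks within the reported ranges (A in 0-16% s^-B, B < 1), not fitted values from the paper.

```python
def evolved_gas_percent(t_seconds, A, B):
    """Empirical dissociation curve: evolved gas (%) = A * t**B,
    with A in % s^-B and the exponent B generally < 1."""
    return A * t_seconds ** B

# Illustrative parameters only (within the reported ranges)
A, B = 2.0, 0.5
for t in (1, 100, 400):
    print(t, evolved_gas_percent(t, A, B))
```

Because B < 1, the curve rises steeply at first and then flattens, matching the observation that the dissociation rate decreases monotonically with time and never reaches a steady value.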

  1. A review of radioactive isotopes and other residence time tracers in understanding groundwater recharge: Possibilities, challenges, and limitations

    NASA Astrophysics Data System (ADS)

    Cartwright, Ian; Cendón, Dioni; Currell, Matthew; Meredith, Karina

    2017-12-01

    Documenting the location and magnitude of groundwater recharge is critical for understanding groundwater flow systems. Radioactive tracers, notably 14C, 3H, 36Cl, and the noble gases, together with other tracers whose concentrations vary over time, such as the chlorofluorocarbons or sulfur hexafluoride, are commonly used to estimate recharge rates. This review discusses some of the advantages and problems of using these tracers to estimate recharge rates. The suite of tracers allows recharge to be estimated over timescales ranging from a few years to several hundred thousand years, which allows both the long-term and modern behaviour of groundwater systems to be documented. All tracers record mean residence times and mean recharge rates rather than a specific age and date of recharge. The timescale over which recharge rates are averaged increases with the mean residence time. This is an advantage in providing representative recharge rates but presents a problem in comparing recharge rates derived from these tracers with those from other techniques, such as water table fluctuations or lysimeters. In addition to issues relating to the sampling and interpretation of specific tracers, macroscopic dispersion and mixing in groundwater flow systems limit how precisely groundwater residence times and recharge rates may be estimated. Additionally, many recharge studies have utilised existing infrastructure that may not be ideal for this purpose (e.g., wells with long screens that sample groundwater several kilometres from the recharge area). Ideal recharge studies would collect sufficient information to optimise the use of specific tracers and minimise the problems of mixing and dispersion.

  2. Methane in the lunar exosphere: Implications for solar wind carbon escape

    NASA Astrophysics Data System (ADS)

    Hodges, R. Richard

    2016-07-01

    A positive identification of methane in the lunar exosphere has been made in data from the neutral mass spectrometer on the Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft. Like argon-40, methane is adsorbed on the lunar surface during nighttime. However, higher activation energies for methane delay its desorption at sunrise by about an hour local time, creating a postsunrise bulge with peak concentration of approximately 400-450 molecules cm-3 at a reference altitude of 12 km, which is just above the highest topographic feature on the Moon. The rate of escape of carbon as methane derived from the LADEE data is estimated to be in the range 1.5-4.5 × 1021 s-1. A lower bound for solar carbon escape derived separately from Apollo sample analyses is 3.4 × 1021 s-1.

  3. Kinetically limited weathering at low denudation rates in semi-arid climates

    NASA Astrophysics Data System (ADS)

    Vanacker, V.; Schoonejans, J.; Opfergelt, S.; Ameijeiras-Marino, Y.; Christl, M.

    2016-12-01

    On Earth, the Critical Zone supports terrestrial life, being the near-surface environment where interactions between the atmosphere, lithosphere, hydrosphere, and biosphere take place. Quantitative understanding of the interaction between mechanical rock breakdown, chemical weathering, and physical erosion is essential for unraveling Earth's biogeochemical cycles. In this study, we explore the role of soil water balance in regulating soil chemical weathering under water deficit regimes. Weathering rates and intensities were evaluated for nine soil profiles located on convex ridge crests of three mountain ranges in the Spanish Betic Cordillera. We present and compare quantitative information on soil weathering, chemical depletion and total denudation that were derived based on geochemical mass balance, 10Be cosmogenic nuclides and U-series disequilibria. Soil production rates determined based on U-series isotopes (238U, 234U, 230Th and 226Ra) are of the same order of magnitude as 10Be-derived denudation rates, suggesting steady state soil thickness, in two out of three sampling sites. The chemical weathering intensities are relatively low (˜5 to 30% of the total denudation of the soil) and negatively correlated with the magnitude of the water deficit in soils. Soil weathering extents increase (nonlinearly) with soil thickness and decrease with increasing surface denudation rates, consistent with kinetically limited or controlled weathering. Our study suggests that soil residence time and water availability limit weathering processes in semi-arid climates, which has not been validated previously with field data. An important implication of this finding is that climatic regimes may strongly regulate soil weathering by modulating soil solute fluxes.

  4. How fast is the denudation of the Taiwan Mountains? (Invited)

    NASA Astrophysics Data System (ADS)

    Siame, L. L.; Derrieux, F.; KANG, C.; Bourles, D. L.; Braucher, R.; Léanni, L.; Chen, R.; Lee, J.; Chu, H.; Chang, C.; Byrne, T. B.

    2013-12-01

    Orogenic settings are particularly well suited to study and quantify the coupling relations between tectonics, topography, climate and erosion since they record tectonic evolution along convergent margins and the connection between deep and surface processes. However, the interaction of deep and shallow processes is still poorly understood, and the role they play in the exhumation of rocks, the structural and kinematic evolution of orogenic wedges, and the relation between tectonics and climate-dependent surface processes are still debated. Therefore, quantification of denudation rates in a wide range of climatic and tectonic settings, as well as at various time and space scales, is a critical step in calibrating and validating landscape evolution models. In this study, we focus on the mountains of the arc-continent collision in Taiwan, which serve as one of the best examples in the world to understand and study mountain building processes. We investigate the pattern and magnitude of denudation rates at the scale of the orogenic system, deriving denudation rates from in situ-produced cosmogenic nuclide 10Be concentrations measured in (1) river-borne quartz minerals sampled at major watershed outlets, and (2) bedrock outcrops along ridge crests and at summits located along the major drainage divide of the belt. We determined a denudation pattern showing a clear discrepancy between the western (1.7±0.2 mm/yr) and eastern (4.1±0.5 mm/yr) sides of the range. Conversely, bedrock denudation rates determined along ridge crests, summits and flat surfaces preserved at high elevations are significantly lower, on the order of 0.24±0.03 mm/yr. Altogether, the cosmogenic-derived denudation pattern at the orogen scale reflects fundamental mountain building processes, from frontal accretion in the Western Foothills to basal accretion and fast exhumation in the Central Range. 
Applied to the whole orogen, such a field-based approach thus provides important input data to validate and calibrate the parameters to be supplied to landscape evolution models. Moreover, the comparison between cosmogenic bedrock-derived and basin-derived denudation rates allows us to discuss how the topographic relief of Taiwan has evolved over the last few thousand years, and thus to document whether or not the Taiwan Mountains are in a topographic steady state.

  5. First estimate of annual mercury flux at the Kilauea main vent

    NASA Technical Reports Server (NTRS)

    Siegel, S. M.; Siegel, B. Z.

    1984-01-01

    Mercury and sulphur dioxide analyses were conducted from 1971 to 1980 on air samples collected immediately downwind of Halemaumau, the Kilauea main vent, in Hawaii. On the basis of these measurements, an Hg/SO2 ratio of 0.00051 has been derived which, when applied to the recently determined SO2 mass output of Halemaumau, yields a calculated Hg flux of 2.6 × 10^8 g annually. This rate is consistent with Varekamp and Buseck's (1981) evidence suggesting that volcanogenic Hg significantly contributes to the atmospheric total.
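
    The flux estimate above is a simple scaling of the SO2 mass output by the measured Hg/SO2 ratio. In the sketch below, the SO2 output is back-calculated from the quoted ratio and Hg flux rather than taken from the paper, so it is an assumption for illustration only.

```python
def hg_flux_g_per_yr(hg_so2_ratio, so2_flux_g_per_yr):
    """Scale the SO2 mass output by the Hg/SO2 ratio to get the Hg flux."""
    return hg_so2_ratio * so2_flux_g_per_yr

# Back-calculated SO2 output (assumption, not a quoted value):
# 2.6e8 g/yr Hg / 0.00051 ≈ 5.1e11 g/yr SO2
so2_output = 2.6e8 / 0.00051
print(hg_flux_g_per_yr(0.00051, so2_output))  # ≈ 2.6e8 g/yr
```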

  6. Study of the SRF-derived ashes melting behavior and the effects generated by the optimization of their composition on the furnaces energy efficiency in the incineration plants.

    PubMed

    Mercurio, Vittorio; Venturelli, Chiara; Paganelli, Daniele

    2014-12-01

    In the incineration of urban solid waste, correct management of the feed composition allows not only the valorization of specific civil and industrial waste streams as alternative fuels, but also a considerable increase in the furnace working temperature, leading to a remarkable improvement in energy efficiency. The melting behavior of ashes derived from the various fuels destined for heat treatment is therefore important to study. This approach provides detailed knowledge of the features defining the melting behavior of the analyzed samples and, consequently, the data needed to identify the best mixture of components to be incinerated as a function of the specific working temperatures of the power plant. This study first aims to establish the softening and melting temperatures of the ashes, since these are the parameters that most strongly influence the use of the fuels. For this reason, the fusibility of waste-derived ashes of different compositions has been investigated by means of a heating microscope. This instrument demonstrates the strong dependence of the ash fusion temperature on the heating rate that the samples experience during the thermal cycle. In addition, another technological feature of the instrument was used to apply instantaneous heating directly to the sample, accurately reproducing the industrial conditions that characterize incineration plants. The comparison of the final results shows that the best furnace performance is achieved through a priori study of the melting behavior of the individual available components.

  7. Measurement of gluconeogenesis using glucose fragments and mass spectrometry after ingestion of deuterium oxide.

    PubMed

    Chacko, Shaji K; Sunehag, Agneta L; Sharma, Susan; Sauer, Pieter J J; Haymond, Morey W

    2008-04-01

    We report a new method to measure the fraction of glucose derived from gluconeogenesis using gas chromatography-mass spectrometry and positive chemical ionization. After ingestion of deuterium oxide by subjects, glucose derived from gluconeogenesis is labeled with deuterium. Our calculations of gluconeogenesis are based on measurements of the average enrichment of deuterium on carbons 1, 3, 4, 5, and 6 of glucose and the deuterium enrichment in body water. In a sample from an adult volunteer after ingestion of deuterium oxide, fractional gluconeogenesis using the "average deuterium enrichment method" was 48.3 +/- 0.5% (mean +/- SD) and that with the C-5 hexamethylenetetramine (HMT) method by Landau et al. (Landau BR, Wahren J, Chandramouli V, Schumann WC, Ekberg K, Kalhan SC; J Clin Invest 98: 378-385, 1996) was 46.9 +/- 5.4%. The coefficient of variation of 10 replicate analyses using the new method was 1.0% compared with 11.5% for the C-5 HMT method. In samples derived from an infant receiving total parenteral nutrition, fractional gluconeogenesis was 13.3 +/- 0.3% using the new method and 13.7 +/- 0.8% using the C-5 HMT method. Fractional gluconeogenesis measured in six adult volunteers after 66 h of continuous fasting was 83.7 +/- 2.3% using the new method and 84.2 +/- 5.0% using the C-5 HMT method. In conclusion, the average deuterium enrichment method is simple, highly reproducible, and cost effective. Furthermore, it requires only small blood sample volumes. With the use of an additional tracer, glucose rate of appearance can also be measured during the same analysis. Thus the new method makes measurements of gluconeogenesis available and affordable to large numbers of investigators under conditions of low and high fractional gluconeogenesis (approximately 10% to approximately 90%) in all subject populations.
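
    As a rough sketch of the "average deuterium enrichment" idea described above, fractional gluconeogenesis can be approximated as the mean carbon-bound deuterium enrichment divided by the body-water enrichment. The study's actual calculation involves GC-MS fragment analysis; the function and all values below are hypothetical simplifications for illustration.

```python
def fractional_gng(carbon_enrichments, body_water_enrichment):
    """Simplified ratio: mean deuterium enrichment on glucose carbons
    1, 3, 4, 5 and 6, divided by the body-water deuterium enrichment."""
    avg = sum(carbon_enrichments) / len(carbon_enrichments)
    return avg / body_water_enrichment

# Hypothetical enrichment values (mole percent excess), not study data
frac = fractional_gng([0.24, 0.25, 0.23, 0.24, 0.24], 0.50)
print(round(frac, 3))  # 0.48, i.e. ~48% of glucose from gluconeogenesis
```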

  8. Innovative flow controller for time integrated passive sampling using SUMMA canisters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, P.; Farant, J.P.; Cole, H.

    1996-12-31

    To restrict the entry of gaseous contaminants inside evacuated vessels such as SUMMA canisters, mechanical flow controllers are used to collect integrated atmospheric samples. From the passive force generated by the pressure gradient, the motion of gas can be controlled to obtain a constant flow rate. Presently, devices based on the principle of critical orifices are used, and they are all limited to an upper integrated sampling time. A novel flow controller which can be designed to achieve any desired sampling time when used on evacuated vessels was recently developed. It can extend the sampling time to hours, days, weeks or even months for the benefit of environmental, engineering and toxicological professionals. The design of the controller is obtained from computer simulations done with an original set of equations derived from fluid mechanic and gas kinetic laws. To date, the experimental results have shown excellent agreement with predictions obtained from the mathematical model. This new controller has already found numerous applications. Units able to deliver a constant sampling rate between vacuum and approximately -10 inches Hg over continuous long-term durations have been used with SUMMA canisters of different volumes (500 ml, 1 litre and 6 l). Essentially, any combination of sampling time and sampler volume is possible. The innovative flow controller has contributed to an air quality assessment around a sanitary landfill (indoor/outdoor), and inside domestic wastewater and pulpmill sludge treatment facilities. It is presently being used as an alternative methodology for atmospheric sampling in the Russian orbital station Mir. This device affords true long-term passive monitoring of selected gaseous air pollutants for environmental studies. 14 refs., 3 figs.

  9. Turkish and Japanese Mycobacterium tuberculosis sublineages share a remote common ancestor.

    PubMed

    Refrégier, Guislaine; Abadia, Edgar; Matsumoto, Tomoshige; Ano, Hiromi; Takashima, Tetsuya; Tsuyuguchi, Izuo; Aktas, Elif; Cömert, Füsun; Gomgnimbou, Michel Kireopori; Panaiotov, Stefan; Phelan, Jody; Coll, Francesc; McNerney, Ruth; Pain, Arnab; Clark, Taane G; Sola, Christophe

    2016-11-01

    Two geographically distant M. tuberculosis sublineages, Tur from Turkey and T3-Osaka from Japan, exhibit partially identical genotypic signatures (identical 12-loci MIRU-VNTR profiles, distinct spoligotyping patterns). We investigated T3-Osaka and Tur sublineage characteristics and potential genetic relatedness, first using MIRU-VNTR locus analysis on 21 and 25 samples of each sublineage respectively, and second comparing Whole Genome Sequences of 8 new samples to public data from 45 samples uncovering human tuberculosis diversity. We then tried to date their Most Recent Common Ancestor (MRCA) using three calibrations of the SNP accumulation rate (long-term = 0.03 SNP/genome/year, derived from a tuberculosis ancestor around 70,000 years old; intermediate = 0.2 SNP/genome/year, derived from a Peruvian mummy; short-term = 0.5 SNP/genome/year). To disentangle between these scenarios, we confronted the corresponding divergence times with major human history events and knowledge on human genetic divergence. We identified relatively high intrasublineage diversity for both T3-Osaka and Tur. We definitively proved their monophyly; the corresponding super-sublineage (referred to as "T3-Osa-Tur") shares a common ancestor with T3-Ethiopia and Ural sublineages but is only remotely related to other Euro-American sublineages such as X, LAM, Haarlem and S. The evolutionary scenario based on the long-term evolution rate being valid until the T3-Osa-Tur MRCA was not supported by Japanese fossil data. The evolutionary scenario relying on the short-term evolution rate since the T3-Osa-Tur MRCA was contradicted by human history and potential traces of past epidemics. The T3-Osaka and Tur sublineages were found likely to have diverged between 800 and 2000 years ago, potentially at the time of the Mongol Empire. Altogether, this study definitively proves a strong genetic link between Turkish and Japanese tuberculosis. It provides a first hypothesis for calibrating the TB Euro-American lineage molecular clock; additional studies are needed to reliably date events corresponding to intermediate depths in tuberculosis phylogeny. Copyright © 2016 Elsevier B.V. All rights reserved.
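The divergence dating in this record reduces to simple clock arithmetic. The sketch below uses the three calibration rates quoted in the abstract with an invented SNP distance (the paper's actual pairwise distances are not given here); mutations accumulate on both branches since the MRCA, hence the factor of two.

```python
# Illustrative MRCA dating under the three clock calibrations quoted in
# the abstract. The SNP distance of 300 is a hypothetical input.

def mrca_age_years(snp_distance, rate_snp_per_genome_per_year):
    """Divergence time in years for two lineages separated by
    `snp_distance` SNPs; both branches accumulate mutations."""
    return snp_distance / (2.0 * rate_snp_per_genome_per_year)

calibrations = {
    "long-term": 0.03,    # SNP/genome/year, ~70,000-year ancestor
    "intermediate": 0.2,  # Peruvian mummy calibration
    "short-term": 0.5,    # epidemiological calibration
}

for name, rate in calibrations.items():
    print(name, round(mrca_age_years(300, rate)))
```

The same SNP distance yields ages differing by more than an order of magnitude across calibrations, which is why the abstract cross-checks each scenario against human history.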

  10. Bioactivity and cell proliferation in radiopaque gel-derived CaO-P2O5-SiO2-ZrO2 glass and glass-ceramic powders.

    PubMed

    Montazerian, Maziar; Yekta, Bijan Eftekhari; Marghussian, Vahak Kaspari; Bellani, Caroline Faria; Siqueira, Renato Luiz; Zanotto, Edgar Dutra

    2015-10-01

    In this study, 10 mol% ZrO2 was added to a 27CaO-5P2O5-68SiO2 (mol%) base composition synthesized via a simple sol-gel method. This composition is similar to that of a frequently investigated bioactive gel-glass. The effects of ZrO2 on the in vitro bioactivity and MG-63 cell proliferation of the glass and its derivative polycrystalline (glass-ceramic) powder were investigated. The samples were characterized using thermo-gravimetric and differential thermal analysis (TG/DTA), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM) coupled to energy dispersive X-ray spectroscopy (EDS). Release of Si, Ca, P and Zr into simulated body fluid (SBF) was determined by inductively coupled plasma (ICP). Upon heat treatment at 1000 °C, the glass powder crystallized into an apatite-wollastonite-zirconia glass-ceramic powder. Hydroxycarbonate apatite (HCA) formation on the surface of the glass and glass-ceramic particles containing ZrO2 was confirmed by FTIR and SEM. Addition of ZrO2 to the base glass composition decreased the rate of HCA formation in vitro from one day to three days, and hence, ZrO2 could be employed to control the rate of apatite formation. However, the rate of HCA formation on the glass-ceramic powder containing ZrO2 crystal was equal to that in the base glassy powder. Tests with cultured human osteoblast-like MG-63 cells revealed that the glass and glass-ceramic materials stimulated cell proliferation, indicating that they are biocompatible and are not cytotoxic in vitro. Moreover, zirconia clearly increased osteoblast proliferation over that of the Zr-free samples. This increase is likely associated with the lower solubility of these samples and, consequently, a smaller variation in the media pH.
Despite the low solubility of these materials, bioactivity was maintained, indicating that these glassy and polycrystalline powders are potential candidates for bone graft substitutes and bone cements with the special feature of radiopacity. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. GAS, STARS, AND STAR FORMATION IN ALFALFA DWARF GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang Shan; Haynes, Martha P.; Giovanelli, Riccardo

    2012-06-15

    We examine the global properties of the stellar and H I components of 229 low H I mass dwarf galaxies extracted from the ALFALFA survey, including a complete sample of 176 galaxies with H I masses <10^7.7 M_Sun and H I line widths <80 km s^-1. Sloan Digital Sky Survey (SDSS) data are combined with photometric properties derived from Galaxy Evolution Explorer to derive stellar masses (M_*) and star formation rates (SFRs) by fitting their UV-optical spectral energy distributions (SEDs). In optical images, many of the ALFALFA dwarfs are faint and of low surface brightness; only 56% of those within the SDSS footprint have a counterpart in the SDSS spectroscopic survey. A large fraction of the dwarfs have high specific star formation rates (SSFRs), and estimates of their SFRs and M_* obtained by SED fitting are systematically smaller than ones derived via standard formulae assuming a constant SFR. The increased dispersion of the SSFR distribution at M_* ≲ 10^8 M_Sun is driven by a set of dwarf galaxies that have low gas fractions and SSFRs; some of these are dE/dSphs in the Virgo Cluster. The imposition of an upper H I mass limit yields the selection of a sample with lower gas fractions for their M_* than found for the overall ALFALFA population. Many of the ALFALFA dwarfs, particularly the Virgo members, have H I depletion timescales shorter than a Hubble time. An examination of the dwarf galaxies within the full ALFALFA population in the context of global star formation (SF) laws is consistent with the general assumptions that gas-rich galaxies have lower SF efficiencies than do optically selected populations and that H I disks are more extended than stellar ones.
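The two derived quantities this record compares are simple ratios, sketched below with invented masses and SFR (none of these numbers come from the ALFALFA sample): the specific star formation rate SSFR = SFR / M*, and the H I depletion timescale t_dep = M_HI / SFR, compared against a Hubble time.

```python
# Hypothetical illustration of SSFR and H I depletion timescale.

HUBBLE_TIME_YR = 13.8e9

def ssfr(sfr_msun_per_yr, mstar_msun):
    """Specific star formation rate in yr^-1."""
    return sfr_msun_per_yr / mstar_msun

def hi_depletion_time(mhi_msun, sfr_msun_per_yr):
    """Time in years to exhaust the H I reservoir at the current SFR."""
    return mhi_msun / sfr_msun_per_yr

# An invented gas-rich dwarf: M_HI = 1e8 M_Sun, SFR = 0.02 M_Sun/yr.
t_dep = hi_depletion_time(1e8, 0.02)
print(t_dep < HUBBLE_TIME_YR)   # depletion faster than a Hubble time?
```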

  12. Shrinkage Estimators for a Composite Measure of Quality Conceptualized as a Formative Construct

    PubMed Central

    Shwartz, Michael; Peköz, Erol A; Christiansen, Cindy L; Burgess, James F; Berlowitz, Dan

    2013-01-01

    Objective: To demonstrate the value of shrinkage estimators when calculating a composite quality measure as the weighted average of a set of individual quality indicators. Data Sources: Rates of 28 quality indicators (QIs) calculated from the Minimum Data Set from residents of 112 Veterans Health Administration nursing homes in fiscal years 2005–2008. Study Design: We compared composite scores calculated from the 28 QIs using both observed rates and shrunken rates derived from a Bayesian multivariate normal-binomial model. Principal Findings: Shrunken-rate composite scores, because they take into account unreliability of estimates from small samples and the correlation among QIs, have more intuitive appeal than observed-rate composite scores. Facilities can be profiled based on more policy-relevant measures than point estimates of composite scores, and interval estimates can be calculated without assuming the QIs are independent. Usually, shrunken-rate composite scores in 1 year are better able to predict the observed total number of QI events or the observed-rate composite scores in the following year than the initial year observed-rate composite scores. Conclusion: Shrinkage estimators can be useful when a composite measure is conceptualized as a formative construct. PMID:22716650
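The core shrinkage idea can be shown in a few lines. This is a minimal univariate sketch, not the paper's multivariate normal-binomial model; the pooled rate and prior strength are assumed values: each facility's observed QI rate is pulled toward the overall mean, more strongly when the facility's sample is small.

```python
# Minimal beta-binomial style shrinkage sketch (not the paper's model).

def shrunken_rate(events, n, pooled_rate, prior_strength=50.0):
    """Weight the observed rate by n and the pooled rate by a prior
    pseudo-sample size; small n -> estimate shrinks toward the pool."""
    return (events + prior_strength * pooled_rate) / (n + prior_strength)

# Same observed rate (0.30) at two facility sizes, pooled rate 0.10:
small = shrunken_rate(events=3, n=10, pooled_rate=0.10)     # shrinks a lot
large = shrunken_rate(events=300, n=1000, pooled_rate=0.10) # barely moves
print(round(small, 3), round(large, 3))
```

This is why shrunken-rate composites predict next-year performance better: the extreme rates of small facilities are mostly noise, and shrinkage discounts them.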

  13. Masters theses from a university medical college: publication in indexed scientific journals.

    PubMed

    Dhaliwal, Upreet; Singh, Navjeevan; Bhatia, Arati

    2010-01-01

    The thesis is an integral part of postgraduate medical education in India. Publication of the results of the thesis in an indexed journal is desirable; it validates the research and makes results available to researchers worldwide. To determine publication rates in indexed journals of works derived from theses, and factors affecting publication. Postgraduate theses submitted over a five-year period (2001-05) in a university medical college were analyzed in a retrospective, observational study. Data retrieved included name and gender of postgraduate student, names, department and hierarchy of supervisor and co-supervisor(s), year submitted, study design, sample size, and statistically significant difference between groups. To determine subsequent publication in an indexed journal, a Medline search was performed up to December 2007. The chi-square test was used to compare publication rates based on categorical variables; Student's t-test was used to compare differences based on continuous variables. One hundred and sixty theses were retrieved; forty-eight (30%) were published. Papers were published 8-74 (33.7 ± 17.33) months after thesis submission; the postgraduate student was first author in papers from 26 (54%) of the published theses. Gender of the student, department of origin, year of thesis submission, hierarchy of the supervisor, number and department of co-supervisors, and thesis characteristics did not influence publication rates. The rate of publication in indexed journals of papers derived from postgraduate theses is 30%. In this study we were unable to identify factors that promote publication.

  14. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
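The variance-reduction trick in this record has a compact form: subtract from each sample the first-order Taylor term built from the cheap sensitivity derivative, whose expectation is known to be zero. The sketch below uses an invented stand-in function and input distribution (the paper's structural analysis is of course not reproduced); it is a standard control-variate construction, not the authors' exact scheme.

```python
# Sketch: sensitivity-derivative control variate for Monte Carlo.
import random

random.seed(0)

def f(x):                      # stand-in for an expensive analysis code
    return x * x + 0.1 * x ** 3

mu, sigma = 1.0, 0.2           # input uncertainty: X ~ N(mu, sigma^2)
df_dx_at_mu = 2 * mu + 0.3 * mu ** 2   # cheap sensitivity derivative

samples = [random.gauss(mu, sigma) for _ in range(20000)]

plain = sum(f(x) for x in samples) / len(samples)
# Control variate g(x) = f'(mu) * (x - mu) has known mean 0, so
# subtracting it leaves the expectation unchanged but cancels most of
# the sample-to-sample scatter of f.
cv = sum(f(x) - df_dx_at_mu * (x - mu) for x in samples) / len(samples)

print(round(plain, 3), round(cv, 3))   # both estimate E[f(X)]
```

For this f, E[f(X)] = (mu^2 + sigma^2) + 0.1 (mu^3 + 3 mu sigma^2) = 1.152; the corrected estimator reaches it with far fewer samples, which is the order-of-magnitude accuracy gain the abstract reports.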

  15. Is Obsidian Hydration Dating Affected by Relative Humidity?

    USGS Publications Warehouse

    Friedman, I.; Trembour, F.W.; Smith, G.I.; Smith, F.L.

    1994-01-01

    Experiments carried out under temperatures and relative humidities that approximate ambient conditions show that the rate of hydration of obsidian is a function of the relative humidity, as well as of previously established variables of temperature and obsidian chemical composition. Measurements of the relative humidity of soil at 25 sites and at depths of between 0.01 and 2 m below ground show that in most soil environments, at depths below about 0.25 m, the relative humidity is constant at 100%. We have found that the thickness of the hydrated layer developed on obsidian outcrops exposed to the sun and to relative humidities of 30-90% is similar to that formed on other portions of the outcrop that were shielded from the sun and exposed to a relative humidity of approximately 100%. Surface samples of obsidian exposed to solar heating should hydrate more rapidly than samples buried in the ground. However, the effect of the lower mean relative humidity experienced by surface samples tends to compensate for the elevated temperature, which may explain why obsidian hydration ages of surface samples usually approximate those derived from buried samples.
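The dating arithmetic behind this record is the classic diffusion relation: rind thickness grows as x = sqrt(k t), so age follows from a measured thickness once the rate constant k is fixed by temperature, glass chemistry and, per this study, relative humidity. The rate-constant values below are invented for illustration.

```python
# Hypothetical obsidian hydration dating sketch: t = x^2 / k.

def hydration_age(thickness_um, k_um2_per_kyr):
    """Age in kyr from hydration rind thickness (um) and rate k."""
    return thickness_um ** 2 / k_um2_per_kyr

# The same 3 um rind read with two rate constants: a drier-surface
# (lower k) calibration gives an older apparent age, which the higher
# temperature of sun-exposed samples (higher k) can offset.
print(hydration_age(3.0, 4.5), hydration_age(3.0, 4.0))
```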

  16. Implicit discount rates of vascular surgeons in the management of abdominal aortic aneurysms.

    PubMed

    Enemark, U; Lyttkens, C H; Troëng, T; Weibull, H; Ranstam, J

    1998-01-01

    A growing empirical literature has investigated attitudes towards discounting of health benefits with regard to social choices of life-saving and health-improving measures and individuals' time preferences for the management of their own health. In this study, the authors elicited the time preferences of vascular surgeons in the context of management of small abdominal aortic aneurysms, for which the choice between early elective surgery and watchful waiting is not straightforward. They interviewed 25 of a random sample of 30 Swedish vascular surgeons. Considerable variation in the time preferences was found in the choices between watchful waiting and surgical intervention among the otherwise very homogeneous group of surgeons. The discount rates derived ranged from 5.3% to 19.4%. The median discount rate (10.4%) is similar to those usually reported for social choices concerning life-saving measures. The surgeons who were employed in university hospitals had higher discount rates than did their colleagues in county and district hospitals.
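The implicit discount rate in such elicitations is the rate that makes a respondent indifferent between a benefit now and a larger benefit later. The survey's actual choice tasks are not described in enough detail to reproduce, so the benefit values below are invented; only the discounting formula itself is standard.

```python
# Implicit discount rate from an indifference point:
# V_now = V_later / (1 + r)**t, solved for r.

def implicit_discount_rate(v_now, v_later, years):
    return (v_later / v_now) ** (1.0 / years) - 1.0

# Hypothetical: indifferent between 100 units of benefit now and 164
# units in 5 years.
r = implicit_discount_rate(100.0, 164.0, 5)
print(round(100 * r, 1))   # prints 10.4 (percent per year)
```

A respondent with this indifference point exhibits a 10.4% annual rate, matching the median the abstract reports.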

  17. Rate constants for proteins binding to substrates with multiple binding sites using a generalized forward flux sampling expression

    NASA Astrophysics Data System (ADS)

    Vijaykumar, Adithya; ten Wolde, Pieter Rein; Bolhuis, Peter G.

    2018-03-01

    To predict the response of a biochemical system, knowledge of the intrinsic and effective rate constants of proteins is crucial. The experimentally accessible effective rate constant for association can be decomposed into a diffusion-limited rate at which proteins come into contact and an intrinsic association rate at which the proteins in contact truly bind. Conversely, when dissociating, bound proteins first separate into a contact pair with an intrinsic dissociation rate, before moving away by diffusion. While microscopic expressions exist that enable the calculation of the intrinsic and effective rate constants by conducting a single rare event simulation of the protein dissociation reaction, these expressions are only valid when the substrate has just one binding site. If the substrate has multiple binding sites, a bound enzyme can, besides dissociating into the bulk, also hop to another binding site. Calculating transition rate constants between multiple states with forward flux sampling requires a generalized rate expression. We present this expression here and use it to derive explicit expressions for all intrinsic and effective rate constants involving binding to multiple states, including rebinding. We illustrate our approach by computing the intrinsic and effective association, dissociation, and hopping rate constants for a system in which a patchy particle model enzyme binds to a substrate with two binding sites. We find that these rate constants increase as a function of the rotational diffusion constant of the particles. The hopping rate constant decreases as a function of the distance between the binding sites. Finally, we find that blocking one of the binding sites enhances both association and dissociation rate constants. Our approach and results are important for understanding and modeling association reactions in enzyme-substrate systems and other patchy particle systems and open the way for large multiscale simulations of such systems.
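The decomposition in the first sentences of this abstract has a well-known closed form for the single-site case: encounters occur at the diffusion-limited rate k_D and a contact pair binds at the intrinsic rate k_a, giving the series combination k_on = k_D k_a / (k_D + k_a). The sketch below illustrates that limit behavior with invented rate values; it is the textbook single-site expression, not the paper's generalized multi-site result.

```python
# Effective association rate from diffusion-limited and intrinsic
# rates (single binding site; classic series form).

def k_on_effective(k_D, k_a):
    """k_on = k_D * k_a / (k_D + k_a): the slower step dominates."""
    return k_D * k_a / (k_D + k_a)

# Diffusion-limited regime: intrinsic binding much faster than transport.
print(k_on_effective(1e9, 1e11))   # close to k_D
# Reaction-limited regime: binding at contact is the bottleneck.
print(k_on_effective(1e9, 1e7))    # close to k_a
```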

  18. The Stratigraphy and Evolution of the Lunar Crust

    NASA Technical Reports Server (NTRS)

    McCallum, I. Stewart

    1998-01-01

    Reconstruction of stratigraphic relationships in the ancient lunar crust has proved to be a formidable task. The intense bombardment during the first 700 m.y. of lunar history has severely perturbed the original stratigraphy and destroyed the primary textures of all but a few nonmare rocks. However, a knowledge of the crustal stratigraphy as it existed prior to the cataclysmic bombardment about 3.9 Ga is essential to test the major models proposed for crustal origin, i.e., crystal fractionation in a global magmasphere or serial magmatism in a large number of smaller bodies. Despite the large difference in scale implicit in these two models, both require an efficient separation of plagioclase and mafic minerals to form the anorthositic crust and the mafic mantle. Despite the havoc wreaked by the large body impactors, these same impact processes have brought to the lunar surface crystalline samples derived from at least the upper half of the lunar crust, thereby providing an opportunity to reconstruct the stratigraphy in areas sampled by the Apollo missions. As noted, ejecta from the large multiring basins are dominantly, or even exclusively, of crustal origin. Given the most recent determinations of crustal thicknesses, this implies an upper limit to the depth of excavation of about 60 km. Of all the lunar samples studied, a small set has been recognized as "pristine", and within this pristine group, a small fraction have retained some vestiges of primary features formed during the earliest stages of crystallization or recrystallization prior to 4.0 Ga. We have examined a number of these samples that have retained some record of primary crystallization to deduce thermal histories from an analysis of structural, textural, and compositional features in minerals from these samples. 
Specifically, by quantitative modeling of (1) the growth rate and development of compositional profiles of exsolution lamellae in pyroxenes and (2) the rate of Fe-Mg ordering in orthopyroxenes, we can constrain the cooling rates of appropriate lunar samples. These cooling rates are used to compute depths of burial at the time of crystallization, which enable us to reconstruct parts of the crustal stratigraphy as it existed during the earliest stages of lunar history.

  19. Metabolically-Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates (External Review Draft)

    EPA Science Inventory

    EPA has released a draft report entitled, Metabolically-Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates, for independent external peer review and public comment. NCEA published the Exposure Factors Handbook in 1997. This comprehens...

  20. 10Be erosion rates controlled by normal fault activity through incision and landslide occurrence

    NASA Astrophysics Data System (ADS)

    Roda-Boluda, Duna; D'Arcy, Mitch; Whittaker, Alex; Gheorghiu, Delia; Rodes, Angel

    2017-04-01

    Quantifying erosion rates, and how they compare to rock uplift rates, is fundamental for understanding the evolution of relief and the associated sediment fluxes. The competing effects of rock uplift and erosion are clearly captured by river incision and landsliding, but linking these four important landscape processes remains a major challenge. We address these questions using field data from southern Italy, and quantify the geomorphic response to tectonic forcing. We present 15 new 10Be catchment-averaged erosion rates, collected from catchments along five active normal faults with excellent slip rate constraints. We find that erosion rates are strongly controlled by fault slip rates and that this relationship is mediated by the degree of catchment incision and landslide activity. We find that 10Be samples from low-relief, unincised areas above knickpoints yield consistent erosion rates of ˜ 0.12 mm/yr, while samples collected below knickpoints have erosion rates of ˜ 0.2 - 1.0 mm/yr. This comparison allows us to quantify the impact that transient incisional response has on erosion rates. We demonstrate that in this area incision is associated with frequent, shallow landsliding, and we show that the volumes of landslides stored in the catchments are highly correlated with 10Be-derived sediment flux estimates, suggesting that landslides are likely to be a major contributor to erosional fluxes. Despite widespread landsliding, CRN samples from the studied catchments do provide reliable estimates of catchment-averaged erosion rates, as these are consistent with fault throw patterns and rates. We suggest that this is because landslides are frequent, small and shallow, and are stored on the hillslopes for up to ˜ 10^3 yr, representing the integrated record of landsliding over several seismic cycles, and we test this hypothesis using a numerical model of landsliding and CRN dynamics. 
Our results show that adequate CRN mixing can occur through runoff as landslides are stored on the hillslopes, as long as landslide recurrence intervals are short, which is supported by the erosion rate magnitudes and previous landslide studies in the area. This study contributes to our understanding of erosion and sediment supply in tectonically-active areas, and offers novel insights into the use of CRN to infer erosion rates in areas of intense landslide activity.
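Catchment-averaged erosion rates like those above come from the standard steady-state cosmogenic relation N = P / (lambda + rho E / Lambda), solved for E. The sketch below uses that textbook expression with representative constants and invented concentrations (the paper's actual production rates and sample data are not reproduced here).

```python
# Steady-state 10Be erosion rate sketch (hypothetical inputs).

BE10_DECAY = 4.99e-7     # 1/yr, 10Be decay constant
ATTENUATION = 160.0      # g/cm^2, spallation attenuation length
DENSITY = 2.7            # g/cm^3, rock density

def erosion_rate_mm_per_yr(N_atoms_per_g, P_atoms_per_g_yr):
    """Solve N = P / (lambda + rho*E/Lambda) for E, returned in mm/yr."""
    eps_cm_yr = (P_atoms_per_g_yr / N_atoms_per_g - BE10_DECAY) \
        * ATTENUATION / DENSITY
    return eps_cm_yr * 10.0

# Higher nuclide concentration (slow erosion, unincised terrain) vs a
# low-concentration incised sample, both with P = 5 atoms/g/yr:
print(round(erosion_rate_mm_per_yr(2.5e4, 5.0), 3))
print(round(erosion_rate_mm_per_yr(3.0e3, 5.0), 3))
```

With these invented concentrations the two samples bracket the ˜0.12 and ˜1.0 mm/yr end-members reported in the abstract: concentration and erosion rate are inversely related.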

  1. Evaluation of photostability of solid-state nicardipine hydrochloride polymorphs by using Fourier-transformed reflection-absorption infrared spectroscopy - effect of grinding on the photostability of crystal form.

    PubMed

    Teraoka, Reiko; Otsuka, Makoto; Matsuda, Yoshihisa

    2004-11-22

    Photostability and physicochemical properties of nicardipine hydrochloride polymorphs (alpha- and beta-form) were studied by using Fourier-transformed reflection-absorption infrared spectroscopy (FT-IR-RAS) of the tablets, X-ray powder diffraction analysis, differential scanning calorimetry (DSC), and color difference measurement. It was clear from the results of FT-IR-RAS spectra after irradiation that nicardipine hydrochloride in the solid state decomposed to its pyridine derivative when exposed to light. The photostability of the ground samples of the two forms was also measured in the same manner. The two crystalline forms of the drug changed to nearly amorphous form after 150 min grinding in a mixer mill. X-ray powder diffraction patterns of those ground samples showed almost halo patterns. The nicardipine hydrochloride content on the surface of the tablet was determined based on the absorbance at 1700 cm(-1) attributable to the C=O stretch vibration in FT-IR-RAS spectra before and after irradiation by fluorescent lamp (3500 lx). The photodegradation apparently followed first-order kinetics for all samples. The apparent photodegradation rate constant of beta-form was greater than that of alpha-form. The ground samples decomposed rapidly under the same light irradiation as compared with the intact crystalline forms. The photodegradation rate constant decreased with increase of the heat of fusion. Copyright © 2004 Elsevier B.V.
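First-order photodegradation means C(t) = C0 exp(-k t), so ln(C0/C) is linear in irradiation time and k is its slope. The sketch below fits k from synthetic data (the paper's measured absorbances are not reproduced); the true rate constant used to generate the points is an assumption.

```python
# Fitting a first-order photodegradation rate constant from
# fraction-remaining data (synthetic, for illustration).
import math

def first_order_k(times, fractions_remaining):
    """Zero-intercept least-squares slope of ln(1/fraction) vs time."""
    num = sum(t * math.log(1.0 / f)
              for t, f in zip(times, fractions_remaining))
    den = sum(t * t for t in times)
    return num / den

k_true = 0.01                                 # 1/min, hypothetical
t = [0.0, 30.0, 60.0, 90.0, 120.0]            # irradiation times (min)
c = [math.exp(-k_true * ti) for ti in t]      # fractions remaining
print(round(first_order_k(t, c), 4))          # recovers 0.01
```

Comparing the fitted k across alpha-form, beta-form, and ground samples is exactly the comparison the abstract makes.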

  2. No evidence of extraterrestrial noble metal and helium anomalies at Marinoan glacial termination

    NASA Astrophysics Data System (ADS)

    Peucker-Ehrenbrink, Bernhard; Waters, Christine A.; Kurz, Mark D.; Hoffman, Paul F.

    2016-03-01

    High concentrations of extraterrestrial iridium have been reported in terminal Sturtian and Marinoan glacial marine sediments and are used to argue for long (likely 3-12 Myr) durations of these Cryogenian glaciations. Reanalysis of the Marinoan sedimentary rocks used in the original study, supplemented by sedimentary rocks from additional terminal Marinoan sections, however, does not confirm the initial report. New platinum group element concentrations, and 187Os/188Os and 3He/4He signatures are consistent with crustal origin and minimal extraterrestrial contributions. The discrepancy is likely caused by different sample masses used in the two studies, with this study being based on much larger samples that better capture the stochastic distribution of extraterrestrial particles in marine sediments. Strong enrichment of redox-sensitive elements, particularly rhenium, up-section in the basal postglacial cap carbonates, may indicate a return to more fully oxygenated seawater in the aftermath of the Marinoan snowball earth. Sections dominated by hydrogenous osmium indicate increasing submarine hydrothermal sources and/or continental inputs that are increasingly dominated by young mantle-derived rocks after deglaciation. Sedimentation rate estimates for the basal cap carbonates yield surprisingly slow rates of a few centimeters per thousand years. This study highlights the importance of using sedimentary rock samples that represent sufficiently large area-time products to properly sample extraterrestrial particles representatively, and demonstrates the value of using multiple tracers of extraterrestrial matter.
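The sample-mass argument in this record is Poisson statistics: if extraterrestrial grains are scattered stochastically with some density per gram of sediment, small aliquots frequently miss them entirely. The grain density below is invented; only the Poisson form is standard.

```python
# Why sample mass matters for stochastically distributed particles:
# P(at least one grain in m grams) = 1 - exp(-d * m) for grain
# density d per gram (d = 0.3 is a hypothetical value).
import math

def p_at_least_one(grains_per_gram, mass_g):
    return 1.0 - math.exp(-grains_per_gram * mass_g)

for m in (0.1, 1.0, 10.0):   # small vs large sample masses
    print(m, round(p_at_least_one(0.3, m), 3))
```

A tenfold-larger sample moves the capture probability from a few percent to near certainty, which is the discrepancy mechanism the abstract proposes between the two studies.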

  3. Emergency Heart Failure Mortality Risk Grade score performance for 7-day mortality prediction in patients with heart failure attended at the emergency department: validation in a Spanish cohort.

    PubMed

    Gil, Víctor; Miró, Òscar; Schull, Michael J; Llorens, Pere; Herrero-Puente, Pablo; Jacob, Javier; Ríos, José; Lee, Douglas S; Martín-Sánchez, Francisco J

    2018-06-01

    The Emergency Heart Failure Mortality Risk Grade (EHMRG) scale, derived in 86 Canadian emergency departments (EDs), stratifies patients with acute-decompensated heart failure (ADHF) according to their 7-day mortality risk. We evaluated its external validity in a Spanish cohort. We applied the EHMRG scale to ADHF patients consecutively included in the Epidemiology of Acute Heart Failure in Emergency departments (EAHFE) registry (29 Spanish EDs) and measured its performance. Patients were distributed into quintiles according to the original and their self-defined score cutoffs. The 7-day mortality rates were compared internally among different categories and with categories of Canadian cohorts. The EAHFE group [n: 1553 patients; 80 (10) years; 55.6% women] had a 5.5% 7-day mortality rate and the EHMRG scale c-statistic was 0.741 (95% confidence interval: 0.688-0.793) compared with 0.807 (0.761-0.842) and 0.804 (0.763-0.840) obtained in the Canadian derivation and validation cohorts. The mortality rate of the EAHFE group increased progressively as the quintile categories increased using intervals defined by either the Canadian or the Spanish EHMRG score cutoffs, although with more regular increments with the EAHFE-defined intervals; using the latter, patients at quintiles 2, 3, 4, 5a and 5b had (compared with quintile 1) odds ratios of 1.77, 3.36, 4.44, 9.39 and 16.19, respectively. The EHMRG scale stratified risk in an ADHF cohort that included both palliative and nonpalliative patients in Spanish EDs, showing an extrapolation to a higher mortality risk cohort than the original derivation sample. Stratification improved when the score was recalibrated in the Spanish cohort.
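The c-statistic quoted above is the probability that a randomly chosen patient who died received a higher risk score than a randomly chosen survivor (ties counted as half). The sketch below computes it by direct pairwise comparison on fabricated scores and outcomes, not on EAHFE data.

```python
# Concordance (c-) statistic by pairwise comparison (toy data).

def c_statistic(scores, outcomes):
    """Fraction of (event, non-event) pairs where the event patient
    has the higher score; ties count 1/2."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [90, 75, 60, 55, 40, 30]     # fabricated risk scores
outcomes = [1, 1, 0, 1, 0, 0]         # 1 = died within 7 days
print(round(c_statistic(scores, outcomes), 3))
```

A value of 0.5 is chance-level discrimination and 1.0 is perfect ranking, which is the scale on which the Spanish 0.741 and Canadian ~0.80 figures are compared.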

  4. Total Lightning Characteristics with Respect to Radar-Derived Mesocyclone Strength

    NASA Technical Reports Server (NTRS)

    Stough, Sarah M.; Carey, Lawrence D.; Schultz, Christopher J.

    2015-01-01

    Recent work investigating the microphysical and kinematic relationship between a storm's updraft, its total lightning production, and manifestations of severe weather has resulted in development of tools for improved nowcasting of storm intensity. The total lightning jump algorithm, which identifies rapid increases in total lightning flash rate that often precede severe events, has shown particular potential to benefit warning operations. Maximizing this capability of total lightning and its operational implementation via the lightning jump may best be done through its fusion with radar and radar-derived intensity metrics. Identification of a mesocyclone, or quasi-steady rotating updraft, in Doppler velocity is the predominant radar-inferred early indicator of severe potential in a convective storm. Fused lightning-radar tools that capitalize on the most robust intensity indicators would allow enhanced situational awareness for increased warning confidence. A foundational step toward such tools comes from a better understanding of the updraft-centric relationship between intensification of total lightning production and mesocyclone development and strength. The work presented here utilizes a sample of supercell case studies representing a spectrum of severity. These storms are analyzed with respect to total lightning flash rate and the lightning jump alongside mesocyclone strength derived objectively from the National Severe Storms Laboratory (NSSL) Mesocyclone Detection Algorithm (MDA) and maximum azimuthal shear through a layer. Early results indicate that temporal similarities exist in the trends between total lightning flash rate and low- to mid-level rotation in supercells. Other characteristics such as polarimetric signatures of rotation, flash size, and cloud-to-ground flash ratio are explored for added insight into the significance of these trends with respect to the updraft and related processes of severe weather production.

  5. Tracking spatial variation in river load from Andean highlands to inter-Andean valleys

    NASA Astrophysics Data System (ADS)

    Tenorio, Gustavo E.; Vanacker, Veerle; Campforts, Benjamin; Álvarez, Lenín; Zhiminaicela, Santiago; Vercruysse, Kim; Molina, Armando; Govers, Gerard

    2018-05-01

    Mountains play an important role in the denudation of continents and transfer erosion and weathering products to lowlands and oceans. The rates at which erosion and weathering processes take place in mountain regions have a substantial impact on the morphology and biogeochemistry of downstream reaches and lowlands. The controlling factors of physical erosion and chemical weathering and the coupling between the two processes are not yet fully understood. In this study, we report physical erosion and chemical weathering rates for five Andean catchments located in the southern Ecuadorian Andes and investigate their mutual interaction. During a 4-year monitoring period, we sampled river water at biweekly intervals, and we analyzed water samples for major ions and suspended solids. We derived the total annual dissolved, suspended sediment, and ionic loads from the flow frequency curves and adjusted rating curves and used the dissolved and suspended sediment yields as proxies for chemical weathering and erosion rates. In the 4-year period of monitoring, chemical weathering exceeds physical erosion in the high Andean catchments. Whereas physical erosion rates do not exceed 30 t km-2 y-1 in the relict glaciated morphology, chemical weathering rates range between 22 and 59 t km-2 y-1. The variation in chemical weathering is primarily controlled by intrinsic differences in bedrock lithology. Land use has no discernible impact on the weathering rate but leads to a small increase in base cation concentrations because of fertilizer leaching in surface water. When extending our analysis with published data on dissolved and suspended sediment yields from the northern and central Andes, we observe that the river load composition strongly changes in the downstream direction, indicating large heterogeneity of weathering processes and rates within large Andean basins.
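The load computation described above typically couples a power-law rating curve C = a Q^b to the flow record. The sketch below is a simplified version with invented coefficients and discharge series, not the study's adjusted rating curves; it illustrates why flashy flow regimes export disproportionate sediment when b > 1.

```python
# Annual sediment load from daily discharges via a rating curve
# C = a * Q**b, with C in mg/L (= g/m^3). Coefficients are invented.

def annual_load_tonnes(discharges_m3s, a=5.0, b=1.3):
    seconds_per_day = 86400.0
    # C (g/m^3) * Q (m^3/s) = g/s; integrate over each day, sum the year.
    grams = sum(a * q ** b * q * seconds_per_day for q in discharges_m3s)
    return grams / 1e6   # g -> tonnes

# Two regimes delivering roughly the same annual water volume:
steady = [2.0] * 365
flashy = [0.5] * 330 + [16.0] * 35   # most water in a few high flows
print(annual_load_tonnes(steady) < annual_load_tonnes(flashy))  # True
```

Because load scales as Q^(b+1), a handful of high-flow days dominates the annual flux, which is why the loads are derived from flow-frequency curves rather than mean discharge.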

  6. Synthetic Constraint of Ecosystem C Models Using Radiocarbon and Net Primary Production (NPP) in New Zealand Grazing Land

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.

    2011-12-01

    Time-series radiocarbon measurements have substantial ability to constrain the size and residence time of the soil C pools commonly represented in ecosystem models. Radiocarbon remains unique in the ability to constrain the large stabilized C pool with decadal residence times. Radiocarbon also contributes usefully to constraining the size and turnover rate of the passive pool, but typically struggles to constrain pools with residence times less than a few years. Overall, the number of pools and associated turnover rates that can be constrained depends upon the number of time-series samples available, the appropriateness of chemical or physical fractions to isolate unequivocal pools, and the utility of additional C flux data to provide additional constraints. In New Zealand pasture soils, we demonstrate the ability to constrain decadal turnover times to within a few years for the stabilized pool and reasonably constrain the passive fraction. Good constraint is obtained with two time-series samples spaced 10 or more years apart after 1970. Three or more time-series samples further improve the level of constraint. Work within this context shows that a two-pool model does explain soil radiocarbon data for the most detailed profiles available (11 time-series samples), and identifies clear and consistent differences in rates of C turnover and passive fraction in Andisols vs Non-Andisols. Furthermore, samples from multiple horizons can commonly be combined, yielding consistent residence times and passive fraction estimates that are stable with, or increase with, depth in different sites. Radiocarbon generally fails to quantify rapid C turnover, however. Given that the strength of radiocarbon is estimating the size and turnover of the stabilized (decadal) and passive (millennial) pools, the magnitude of fast cycling pool(s) can be estimated by subtracting the radiocarbon-based estimates of turnover within stabilized and passive pools from total estimates of NPP. 
In grazing land, these estimates can be derived primarily from measured aboveground NPP and calculated belowground NPP. Results suggest that only 19-36% of heterotrophic soil respiration is derived from the soil C with rapid turnover times. A final logical step in synthesis is the analysis of temporal variation in NPP, primarily due to climate, as driver of changes in plant inputs and resulting in dynamic changes in rapid and decadal soil C pools. In sites with good time series samples from 1959-1975, we examine the apparent impacts of measured or modelled (Biome-BGC) NPP on soil Δ14C. Ultimately, these approaches have the ability to empirically constrain, and provide limited verification, of the soil C cycle as commonly depicted ecosystem biogeochemistry models.
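
The subtraction step described in this abstract (fast-pool flux as NPP minus the respiration of the radiocarbon-constrained stabilized and passive pools) can be sketched as simple steady-state bookkeeping. The pool sizes and turnover times below are hypothetical illustrations, not values from the study:

```python
def fast_pool_flux(npp, stabilized_stock, stabilized_tau, passive_stock, passive_tau):
    """Return the C flux (same units as npp) attributable to fast-cycling pools.

    At steady state each pool respires stock / turnover_time, so the
    remainder of NPP must pass through pools too fast for 14C to resolve.
    """
    slow_respiration = stabilized_stock / stabilized_tau + passive_stock / passive_tau
    return npp - slow_respiration

# Hypothetical pasture soil: NPP 1000 gC/m2/yr, a 14000 gC/m2 stabilized pool
# with a 20-yr turnover, and a 50000 gC/m2 passive pool with a 1000-yr turnover.
fast = fast_pool_flux(1000.0, 14000.0, 20.0, 50000.0, 1000.0)
print(fast)           # flux routed through fast-cycling pools (gC/m2/yr)
print(fast / 1000.0)  # its share of NPP, comparable to the quoted 19-36% range
```

With these invented numbers, a quarter of the input flux cycles rapidly, illustrating how the 19-36% heterotrophic-respiration estimate can arise.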

  7. Land Cover Change in the Boston Mountains, 1973-2000

    USGS Publications Warehouse

    Karstensen, Krista A.

    2009-01-01

    The U.S. Geological Survey (USGS) Land Cover Trends project is focused on understanding the rates, trends, causes, and consequences of contemporary U.S. land-cover change. The objectives of the study are: (1) to develop a comprehensive methodology for using sampling and change analysis techniques and Landsat Multispectral Scanner (MSS), Thematic Mapper (TM), and Enhanced Thematic Mapper Plus (ETM+) data to measure regional land-cover change across the United States; (2) to characterize the types, rates, and temporal variability of change for a 30-year period; (3) to document regional driving forces and consequences of change; and (4) to prepare a national synthesis of land-cover change (Loveland and others, 1999). The 1999 Environmental Protection Agency (EPA) Level III ecoregions derived from Omernik (1987) provide the geographic framework for the geospatial data collected between 1973 and 2000. The 27-year study period was divided into five temporal periods: 1973-1980, 1980-1986, 1986-1992, 1992-2000, and 1973-2000, and the data are evaluated using a modified Anderson Land Use Land Cover Classification System (Anderson and others, 1976) for image interpretation. The rates of land-cover change are estimated using a stratified, random sampling of 10-kilometer (km) by 10-km blocks allocated within each ecoregion. For each sample block, satellite images are used to interpret land-cover change for the five time periods previously mentioned. Additionally, historic aerial photographs from similar time frames and other ancillary data, such as census statistics and published literature, are used. The sample block data are then incorporated into statistical analyses to generate an overall change matrix for the ecoregion. Field data of the sample blocks include direct measurements of land cover, particularly ground-survey data collected for training and validation of image classifications (Loveland and others, 2002).
The field experience allows for additional observations of the character and condition of the landscape, assistance in sample block interpretation, ground truthing of Landsat imagery, and determination of the driving forces of change identified in an ecoregion.
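
The stratified-block estimation step described above can be illustrated with a minimal area-weighted estimator. The strata areas and per-block change fractions below are hypothetical, standing in for ecoregions and interpreted 10 km x 10 km sample blocks:

```python
import statistics

def stratified_change_estimate(strata):
    """Area-weighted land-cover change estimate from stratified block samples.

    strata: list of (stratum_area, per_block_change_fractions) pairs,
    one entry per ecoregion. Returns the overall change fraction.
    """
    total_area = sum(area for area, _ in strata)
    return sum(area * statistics.mean(blocks) for area, blocks in strata) / total_area

# Two hypothetical ecoregion strata (areas in km2, change fractions per block)
est = stratified_change_estimate([
    (30000.0, [0.02, 0.05, 0.03, 0.04]),  # ecoregion A: mean change 3.5%
    (10000.0, [0.10, 0.12]),              # ecoregion B: mean change 11%
])
print(round(est, 4))
```

Weighting each ecoregion's block mean by its area keeps heavily sampled small strata from dominating the national figure.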

  8. Microcraters on lunar samples

    NASA Technical Reports Server (NTRS)

    Fechtig, H.; Gentner, W.; Hartung, J. B.; Nagel, K.; Neukum, G.; Schneider, E.; Storzer, D.

    1977-01-01

    The lunar microcrater phenomenology is described. The morphology of the lunar craters is in almost all aspects simulated in laboratory experiments in the diameter range from less than 1 μm to several millimeters and up to 60 km/s impact velocity. An empirically derived formula is given for the conversion of crater diameters into projectile diameters and masses for given impact velocities and projectile and target densities. The production size frequency distribution for lunar craters in the crater size range from approximately 1 μm to several millimeters in diameter is derived from various microcrater measurements within a factor of up to 5. Particle track exposure age measurements for a variety of lunar samples have been performed. They allow the conversion of the lunar crater size frequency production distributions into particle fluxes. The development of crater populations on lunar rocks under self-destruction by subsequent meteoroid impacts and crater overlap is discussed and theoretically described. Erosion rates on lunar rocks on the order of several millimeters per 10 yr are calculated. Chemical investigations of the glass linings of lunar craters yield clear evidence of admixture of projectile material only in one case, where the remnants of an iron-nickel micrometeorite have been identified.
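
The abstract's empirical crater-to-projectile conversion formula is not reproduced here; the sketch below only shows the generic power-law form such scaling relations take, with a hypothetical constant and exponents chosen purely for illustration:

```python
import math

def projectile_mass(crater_diam_m, velocity_kms, rho_p=3000.0, rho_t=3000.0,
                    k=0.5, vel_exp=-0.67, dens_exp=0.5):
    """Illustrative inversion of crater diameter to projectile mass (kg).

    Generic form only: projectile diameter scales with crater diameter,
    modulated by impact velocity and the target/projectile density ratio.
    k, vel_exp, and dens_exp are placeholder values, not the paper's fit.
    """
    d_p = (k * crater_diam_m
           * (velocity_kms / 20.0) ** vel_exp
           * (rho_t / rho_p) ** dens_exp)
    return rho_p * math.pi / 6.0 * d_p ** 3  # mass of a sphere of diameter d_p

# A 1 um crater formed at 20 km/s maps to a sub-micrometre projectile
m = projectile_mass(1.0e-6, 20.0)
print(m)
```

The negative velocity exponent encodes the qualitative behaviour that a faster impactor of a given size excavates a larger crater, so inverting a fixed crater size yields a smaller projectile.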

  9. Combined biodegradation and ozonation for removal of tannins and dyes for the reduction of pollution loads.

    PubMed

    Kanagaraj, James; Mandal, Asit Baran

    2012-01-01

    Tannins and dyes pose a major threat to the environment by generating a huge pollution problem. Biodegradation of wattle extract, chrome tannin, and dye compounds using suitable fungal cultures, namely Aspergillus niger and Penicillium sp., was carried out. In addition, ozone treatment was carried out to achieve a higher degradation rate. The results were monitored by carrying out chemical oxygen demand (COD), total organic carbon (TOC), and UV-Vis analysis. The results showed that wattle extract (vegetable tannin) gave a better biodegradation rate than the dye and chromium compounds. Biodegradation plus ozone showed degradation rates of 92-95%, 94-95%, and 85-87% for the wattle extract, dyes, and chromium compounds, respectively. UV-Vis showed that no peaks were observed for biodegraded samples, indicating better degradation rates as compared to the control samples. FT-IR spectral analysis suggested the formation of flavanoid derivatives, chromic oxide, and NH(2) compounds during degradation of wattle extract, chromium, and dye compounds, respectively, at the peaks of 1,601-1,629 cm(-1), 1,647 cm(-1), and 1,610-1,680 cm(-1). The present investigation shows that combining biodegradation with ozone is an effective method for the removal of dyes and tannins. The biodegradation of the said compounds in combination with ozonation showed a better rate of degradation than chemical methods alone. The combination of biodegradation with ozone helps to reduce pollution problems in terms of COD, TOC, total dissolved solids and total suspended solids.

  10. Protection reduces loss of natural land-cover at sites of conservation importance across Africa.

    PubMed

    Beresford, Alison E; Eshiamwata, George W; Donald, Paul F; Balmford, Andrew; Bertzky, Bastian; Brink, Andreas B; Fishpool, Lincoln D C; Mayaux, Philippe; Phalan, Ben; Simonetti, Dario; Buchanan, Graeme M

    2013-01-01

    There is an emerging consensus that protected areas are key in reducing adverse land-cover change, but their efficacy remains difficult to quantify. Many previous assessments of protected area effectiveness have compared changes between sets of protected and unprotected sites that differ systematically in other potentially confounding respects (e.g. altitude, accessibility), have considered only forest loss or changes at single sites, or have analysed changes derived from land-cover data of low spatial resolution. We assessed the effectiveness of protection in reducing land-cover change in Important Bird Areas (IBAs) across Africa using a dedicated visual interpretation of higher resolution satellite imagery. We compared rates of change in natural land-cover over a c. 20-year period from around 1990 at a large number of points across 45 protected IBAs to those from 48 unprotected IBAs. A matching algorithm was used to select sample points to control for potentially confounding differences between protected and unprotected IBAs. The rate of loss of natural land-cover at sample points within protected IBAs was just 42% of that at matched points in unprotected IBAs. Conversion was especially marked in forests, but protection reduced rates of forest loss by a similar relative amount. Rates of conversion increased from the centre to the edges of both protected and unprotected IBAs, but rates of loss in 20-km buffer zones surrounding protected IBAs and unprotected IBAs were similar, with no evidence of displacement of conversion from within protected areas to their immediate surrounds (leakage).
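
The matching step described above, pairing protected and unprotected sample points on confounders, can be sketched as greedy 1-to-1 nearest-neighbour matching. The covariates (altitude, distance to road) and the Euclidean distance metric below are hypothetical simplifications of what such an algorithm uses:

```python
def match_pairs(treated, controls):
    """Greedily pair each treated point with its nearest unused control.

    treated, controls: lists of covariate tuples. Returns (treated_index,
    control_index) pairs. Real matching would standardise covariates first.
    """
    available = list(range(len(controls)))
    pairs = []
    for i, t in enumerate(treated):
        j = min(available,
                key=lambda j: sum((a - b) ** 2 for a, b in zip(t, controls[j])))
        pairs.append((i, j))
        available.remove(j)
    return pairs

protected = [(1200.0, 5.0), (300.0, 40.0)]              # (altitude m, km to road)
unprotected = [(310.0, 38.0), (1150.0, 6.0), (900.0, 20.0)]
print(match_pairs(protected, unprotected))  # → [(0, 1), (1, 0)]
```

Comparing change rates only across matched pairs removes the systematic altitude/accessibility differences that would otherwise bias the protected-vs-unprotected contrast.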

  11. THE IMPORTANCE OF {sup 56}Ni IN SHAPING THE LIGHT CURVES OF TYPE II SUPERNOVAE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakar, Ehud; Poznanski, Dovi; Katz, Boaz

    2016-06-01

    What intrinsic properties shape the light curves of SNe II? To address this question we derive observational measures that are robust (i.e., insensitive to detailed radiative transfer) and constrain the contribution from {sup 56}Ni as well as a combination of the envelope mass, progenitor radius, and explosion energy. By applying our methods to a sample of SNe II from the literature, we find that a {sup 56}Ni contribution is often significant. In our sample, its contribution to the time-weighted integrated luminosity during the photospheric phase ranges between 8% and 72% with a typical value of 30%. We find that the {sup 56}Ni relative contribution is anti-correlated with the luminosity decline rate. When added to other clues, this in turn suggests that the flat plateaus often observed in SNe II are not a generic feature of the cooling envelope emission, and that without {sup 56}Ni many of the SNe that are classified as II-P would have shown a decline rate that is steeper by up to 1 mag/100 days. Nevertheless, we find that the cooling envelope emission, and not {sup 56}Ni contribution, is the main driver behind the observed range of decline rates. Furthermore, contrary to previous suggestions, our findings indicate that fast decline rates are not driven by lower envelope masses. We therefore suggest that the difference in observed decline rates is mainly a result of different density profiles of the progenitors.
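
The "time-weighted integrated luminosity" contribution of {sup 56}Ni can be sketched numerically. The decay-power coefficients below are the standard Nadyozhin-style values for the {sup 56}Ni → {sup 56}Co chain; the flat "plateau" light curve used for the total luminosity is a hypothetical stand-in, not the paper's data:

```python
import math

def l_ni(t_days, m_ni=1.0):
    """Radioactive decay power (erg/s) for m_ni solar masses of 56Ni
    (56Ni and 56Co e-folding times of 8.8 and 111.3 days)."""
    return m_ni * (6.45e43 * math.exp(-t_days / 8.8)
                   + 1.45e43 * math.exp(-t_days / 111.3))

def time_weighted_integral(lum, t0, t1, n=10000):
    """Trapezoidal integral of t * L(t) over [t0, t1] (days)."""
    dt = (t1 - t0) / n
    ys = [(t0 + i * dt) * lum(t0 + i * dt) for i in range(n + 1)]
    return dt * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Fraction contributed by 0.03 Msun of 56Ni against a hypothetical flat
# 1e42 erg/s cooling-envelope plateau lasting 100 days.
ni = time_weighted_integral(lambda t: l_ni(t, 0.03), 0.0, 100.0)
tot = time_weighted_integral(lambda t: 1e42 + l_ni(t, 0.03), 0.0, 100.0)
print(ni / tot)
```

With these invented inputs the {sup 56}Ni share lands near the typical ~30% value the survey reports, showing how a modest nickel mass can carry a large fraction of the weighted light output.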

  12. Photocatalytic and antibacterial properties of Au-TiO{sub 2} nanocomposite on monolayer graphene: From experiment to theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Wangxiao; Huang, Hongen; Yan, Jin

    An Au-TiO{sub 2} nanocomposite on monolayer graphene (GTA) was synthesized by sequentially depositing titanium dioxide particles and gold nanoparticles on a graphene sheet and analyzed in this work. The structural, morphological, and physicochemical properties of the samples were thoroughly investigated by UV-Vis spectrophotometry, Raman spectroscopy, Fourier transform infrared spectroscopy, atomic force microscopy, scanning electron microscopy, and transmission electron microscopy. The photocatalytic performance of GTA, graphene (GR), TiO{sub 2}, and the TiO{sub 2}-graphene nanocomposite (GT) was comparatively studied for degradation of methyl orange, and it was found that GTA had the highest performance among all samples. More importantly, the antibacterial performance of this novel composite against Gram-positive bacteria, Gram-negative bacteria, and fungi was superior to that of GR, TiO{sub 2}, and GT. The results of biomolecule oxidation tests suggested that the antimicrobial action is driven by oxidative stress on both the membrane and the antioxidant systems. In addition, the rates of two decisive processes during the photocatalytic reaction, charge transfer (k{sub CT}) and electron-hole recombination (k{sub R}), were studied using perturbation theory, radiation theory, and Schottky barrier theory. Calculation and derivation results show that GTA possesses a superior charge separation and transfer rate, which explains the excellent oxidation properties of GTA.

  13. Drive: Theory and Construct Validation

    PubMed Central

    Petrides, K. V.

    2016-01-01

    This article explicates the theory of drive and describes the development and validation of two measures. A representative set of drive facets was derived from an extensive corpus of human attributes (Study 1). Operationalised using an International Personality Item Pool version (the Drive:IPIP), a three-factor model was extracted from the facets in two samples and confirmed on a third sample (Study 2). The multi-item IPIP measure showed congruence with a short form, based on single-item ratings of the facets, and both demonstrated cross-informant reliability. Evidence also supported the measures’ convergent, discriminant, concurrent, and incremental validity (Study 3). Based on very promising findings, the authors hope to initiate a stream of research in what is argued to be a rather neglected niche of individual differences and non-cognitive assessment. PMID:27409773

  14. Multisensor data fusion across time and space

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.

    2014-06-01

    Field measurement campaigns typically deploy numerous sensors having different sampling characteristics for spatial, temporal, and spectral domains. Data analysis and exploitation is made more difficult and time consuming as the sample data grids between sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially-available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution imagery, and then used as the basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing new vector flow estimates using those from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex motion observed by multiple sensors mounted to the same base, including a high-speed visible camera and a coarser-resolution LWIR camera.
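
The temporal-upsampling idea can be sketched in miniature: warp a slow-sensor frame forward by a fraction of the flow field derived from the fast sensor. This is a pure-Python nearest-neighbour forward warp on toy data; a real pipeline would use a proper optical-flow library (e.g. a Farneback-style estimator) and bilinear resampling:

```python
def warp_frame(frame, flow, alpha):
    """Forward-warp `frame` (2D list) by alpha times the flow field.

    flow[y][x] = (dx, dy) in pixels over the fast sensor's full interval;
    alpha in [0, 1] selects the intermediate timestamp. Nearest-neighbour
    splatting into a zero-initialised output; occlusion holes are left at 0.
    """
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = flow[y][x]
            nx, ny = round(x + alpha * dx), round(y + alpha * dy)
            if 0 <= nx < w and 0 <= ny < h:
                out[ny][nx] = frame[y][x]
    return out

# A bright pixel moving 2 px to the right across the fast sensor's interval;
# alpha = 0.5 places it 1 px right at the intermediate timestamp.
frame = [[0, 0, 0], [9, 0, 0], [0, 0, 0]]
flow = [[(2, 0)] * 3 for _ in range(3)]
mid = warp_frame(frame, flow, 0.5)
print(mid[1])  # → [0, 9, 0]
```

Scaling the same flow field by different alpha values yields as many intermediate frames as needed to put both sensors on a common time base.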

  15. Modeling the transport of organic chemicals between polyethylene passive samplers and water in finite and infinite bath conditions.

    PubMed

    Tcaciuc, A Patricia; Apell, Jennifer N; Gschwend, Philip M

    2015-12-01

    Understanding the transfer of chemicals between passive samplers and water is essential for their use as monitoring devices of organic contaminants in surface waters. By applying Fick's second law to diffusion through the polymer and an aqueous boundary layer, the authors derived a mathematical model for the uptake of chemicals into a passive sampler from water, in finite and infinite bath conditions. The finite bath model performed well when applied to laboratory observations of sorption into polyethylene (PE) sheets for various chemicals (polycyclic aromatic hydrocarbons, polychlorinated biphenyls [PCBs], and dichlorodiphenyltrichloroethane [DDT]) and at varying turbulence levels. The authors used the infinite bath model to infer fractional equilibration of PCB and DDT analytes in field-deployed PE, and the results were nearly identical to those obtained using the sampling rate model. However, further comparison of the model and the sampling rate model revealed that the exchange of chemicals was inconsistent with the sampling rate model for partially or fully membrane-controlled transfer, which would be expected in turbulent conditions or when targeting compounds with small polymer diffusivities and small partition coefficients (e.g., phenols, some pesticides, and others). The model can be applied to other polymers besides PE as well as other chemicals and in any transfer regime (membrane, mixed, or water boundary layer-controlled). Lastly, the authors illustrate practical applications of this model such as improving passive sampler design and understanding the kinetics of passive dosing experiments. © 2015 SETAC.
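
The first-order "sampling rate" model that the authors compare their diffusion model against can be sketched in a few lines. The parameter values below are hypothetical illustrations, not fitted values from the study:

```python
import math

def fractional_equilibration(t_days, rs_l_per_day, k_pw, v_p_l):
    """Fraction of polymer-water equilibrium reached by time t.

    First-order uptake: f(t) = 1 - exp(-Rs * t / (Kpw * Vp)), where Rs is
    the sampling rate (L/day), Kpw the polymer-water partition coefficient
    (L/L), and Vp the polymer volume (L).
    """
    return 1.0 - math.exp(-rs_l_per_day * t_days / (k_pw * v_p_l))

# Hypothetical PCB-like analyte: Rs = 5 L/day, log Kpw = 6, a 1 mL PE
# sampler deployed for 30 days.
f = fractional_equilibration(30.0, 5.0, 1.0e6, 1.0e-3)
print(round(f, 3))
```

A compound this hydrophobic is still far from equilibrium after a month, which is exactly the regime where the choice between the diffusion model and the first-order model matters for back-calculating water concentrations.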

  16. Genetic diversity in Trypanosoma theileri from Sri Lankan cattle and water buffaloes.

    PubMed

    Yokoyama, Naoaki; Sivakumar, Thillaiampalam; Fukushi, Shintaro; Tattiyapong, Muncharee; Tuvshintulga, Bumduuren; Kothalawala, Hemal; Silva, Seekkuge Susil Priyantha; Igarashi, Ikuo; Inoue, Noboru

    2015-01-30

    Trypanosoma theileri is a hemoprotozoan parasite that infects various ruminant species. We investigated the epidemiology of this parasite among cattle and water buffalo populations bred in Sri Lanka, using a diagnostic PCR assay based on the cathepsin L-like protein (CATL) gene. Blood DNA samples sourced from cattle (n=316) and water buffaloes (n=320) bred in different geographical areas of Sri Lanka were PCR screened for T. theileri. Parasite DNA was detected in cattle and water buffaloes alike in all the sampling locations. The overall T. theileri-positive rate was higher in water buffaloes (15.9%) than in cattle (7.6%). Subsequently, PCR amplicons were sequenced and the partial CATL sequences were phylogenetically analyzed. The identity values for the CATL gene were 89.6-99.7% among the cattle-derived sequences, compared with values of 90.7-100% for the buffalo-derived sequences. However, the cattle-derived sequences shared 88.2-100% identity values with those from buffaloes. In the phylogenetic tree, the Sri Lankan CATL gene sequences fell into two major clades (TthI and TthII), both of which contain CATL sequences from several other countries. Although most of the CATL sequences from Sri Lankan cattle and buffaloes clustered independently, two buffalo-derived sequences were observed to be closely related to those of the Sri Lankan cattle. Furthermore, a Sri Lankan buffalo sequence clustered with CATL gene sequences from Brazilian buffalo and Thai cattle. In addition to reporting the first PCR-based survey of T. theileri among Sri Lankan-bred cattle and water buffaloes, the present study found that some of the CATL gene fragments sourced from water buffaloes shared similarity with those determined from cattle in this country. Copyright © 2014 Elsevier B.V. All rights reserved.
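
The pairwise identity values reported for the CATL sequences come from comparisons like the following minimal sketch, which assumes the two sequences are already aligned to equal length (real analyses would align first and handle gaps):

```python
def percent_identity(a, b):
    """Percent identity between two equal-length aligned sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

# Toy 8-bp fragments differing at one position
print(percent_identity("ATGCATGC", "ATGAATGC"))  # → 87.5
```

Computing this over all sequence pairs within and between the cattle- and buffalo-derived sets yields identity ranges like the 89.6-99.7% and 90.7-100% values quoted above.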

  17. Graphene/dodecanol floating solidification microextraction for the preconcentration of trace levels of cinnamic acid derivatives in traditional Chinese medicines.

    PubMed

    Hu, Shuang; Yang, Xiao; Xue, Jiao; Chen, Xuan; Bai, Xiao-Hong; Yu, Zhi-Hui

    2017-07-01

    A novel graphene/dodecanol floating solidification microextraction followed by HPLC with diode-array detection has been developed to extract trace levels of four cinnamic acid derivatives in traditional Chinese medicines. Several parameters affecting the performance were investigated and optimized. Also, the possible microextraction mechanism was analyzed and discussed. Under the optimum conditions (amount of graphene in dodecanol: 0.25 mg/mL; volume of extraction phase: 70 μL; pH of sample phase: 3; extraction time: 30 min; stirring rate: 1000 rpm; salt amount: 26.5% NaCl; volume of sample phase: 10 mL, and without dispersant addition), the enrichment factors of the four cinnamic acid derivatives ranged from 26 to 112, the linear ranges were 1.0 × 10⁻²-10.0 μg/mL for caffeic acid, 1.3 × 10⁻³-1.9 μg/mL for p-hydroxycinnamic acid, 2.8 × 10⁻³-4.1 μg/mL for ferulic acid, and 2.7 × 10⁻³-4.1 μg/mL for cinnamic acid, with r² ≥ 0.9993. The detection limits were found to be in the range of 0.1-1.0 ng/mL, and satisfactory recoveries (92.5-111.2%) and precisions (RSDs 1.1-9.5%) were also achieved. The results showed that the approach is simple, effective, and sensitive for the preconcentration and determination of trace levels of cinnamic acid derivatives in Chinese medicines. The proposed method was compared with conventional dodecanol floating solidification microextraction and other extraction methods. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
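
Two of the figures of merit quoted above follow from simple definitions, sketched below. The blank standard deviation and calibration slope in the usage line are hypothetical numbers chosen only to show the arithmetic:

```python
def enrichment_factor(c_extract, c_sample):
    """EF: analyte concentration in the extractant over that in the sample."""
    return c_extract / c_sample

def detection_limit(sd_blank, slope, k=3.0):
    """LOD = k * sd(blank signal) / calibration slope (k = 3 by convention)."""
    return k * sd_blank / slope

# Phase-volume ceiling on enrichment for this setup: 10 mL of sample
# concentrated into 70 uL of extractant caps EF near 143, so the measured
# EFs of 26-112 sit plausibly below that limit.
max_ef = enrichment_factor(10e-3, 70e-6)
print(round(max_ef, 1))

# Hypothetical calibration: sd(blank) = 0.001 AU, slope = 0.03 AU per ng/mL
print(detection_limit(0.001, 0.03))  # LOD in ng/mL
```
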

  18. Interest rates in quantum finance: Caps, swaptions and bond options

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.

    2010-01-01

    The prices of the main interest rate options in the financial markets, derived from the Libor (London Interbank Offered Rate), are studied in the quantum finance model of interest rates. The option prices show new features for the Libor Market Model arising from the fact that, in the quantum finance formulation, all the different Libor payments are coupled and (imperfectly) correlated. Black’s caplet formula for quantum finance is given an exact path integral derivation. The coupon and zero coupon bond options as well as the Libor European and Asian swaptions are derived in the framework of quantum finance. The approximate Libor option prices are derived using the volatility expansion. The BGM-Jamshidian (Gatarek et al. (1996) [1], Jamshidian (1997) [2]) result for the Libor swaption prices is obtained as the limiting case when all the Libors are exactly correlated. A path integral derivation is given of the approximate BGM-Jamshidian price.
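
The classical lognormal benchmark that the paper rederives via path integrals, Black's caplet formula, can be written down directly. The market inputs in the usage line are hypothetical quote values:

```python
import math

def norm_cdf(x):
    """Standard normal CDF built from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_caplet(notional, discount, delta, f, k, sigma, t):
    """Black's formula for a caplet.

    notional: contract notional; discount: discount factor to the payment
    date; delta: accrual fraction; f: forward Libor; k: cap rate (strike);
    sigma: Black volatility; t: time to the Libor fixing (years).
    """
    d1 = (math.log(f / k) + 0.5 * sigma * sigma * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return notional * discount * delta * (f * norm_cdf(d1) - k * norm_cdf(d2))

# Hypothetical quote: 1M notional, 3-month caplet on a 5% forward Libor,
# struck at-the-money, 20% Black vol, fixing in 1 year.
price = black_caplet(1e6, 0.95, 0.25, 0.05, 0.05, 0.20, 1.0)
print(round(price, 2))
```

In the quantum finance model, the correction to this price comes from the imperfect correlation between Libors, which the path integral formulation handles through the volatility expansion mentioned above.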

  19. Coal liquefaction process streams characterization and evaluation: Analysis of Black Thunder coal and liquefaction products from HRI Bench Unit Run CC-15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugmire, R.J.; Solum, M.S.

    This study was designed to apply {sup 13}C-nuclear magnetic resonance (NMR) spectrometry to the analysis of direct coal liquefaction process-stream materials. {sup 13}C-NMR was shown to have a high potential for application to direct coal liquefaction-derived samples in Phase II of this program. In this Phase III project, {sup 13}C-NMR was applied to a set of samples derived from the HRI Inc. bench-scale liquefaction Run CC-15. The samples include the feed coal, net products, and intermediate streams from three operating periods of the run. High-resolution {sup 13}C-NMR data were obtained for the liquid samples and solid-state CP/MAS {sup 13}C-NMR data were obtained for the coal and filter-cake samples. The {sup 13}C-NMR technique is used to derive a set of twelve carbon structural parameters for each sample (CONSOL Table A). Average molecular structural descriptors can then be derived from these parameters (CONSOL Table B).

  20. Determination of airborne carbonyls: comparison of a thermal desorption/GC method with the standard DNPH/HPLC method.

    PubMed

    Ho, Steven Sai Hang; Yu, Jian Zhen

    2004-02-01

    The standard method for the determination of gaseous carbonyls is to collect carbonyls onto 2,4-dinitrophenyl hydrazine (DNPH) coated solid sorbent followed by solvent extraction of the solid sorbent and analysis of the derivatives using high-pressure liquid chromatography (HPLC). This paper describes a newly developed approach that involves collection of the carbonyls onto pentafluorophenyl hydrazine (PFPH) coated solid sorbents followed by thermal desorption and gas chromatographic (GC) analysis of the PFPH derivatives with mass spectrometric (MS) detection. Sampling tubes loaded with 510 nmol of PFPH on Tenax sorbent effectively collect gaseous carbonyls, including formaldehyde, acetaldehyde, propanal, butanal, heptanal, octanal, acrolein, 2-furfural, benzaldehyde, p-tolualdehyde, glyoxal, and methylglyoxal, at flow rates up to at least 100 mL/min. All of the tested carbonyls are shown to have method detection limits (MDLs) of subnanomoles per sampling tube, corresponding to air concentrations of <0.3 ppbv for a sampled volume of 24 L. These limits are 2-12 times lower than those that can be obtained using the DNPH/HPLC method. The improvement of MDLs is especially pronounced for carbonyls larger than formaldehyde and acetaldehyde. The PFPH/GC method also offers better peak separation and more sensitive and specific detection through the use of MS detection. Comparison studies on ambient samples and kitchen exhaust samples have demonstrated that the two methods do not yield systematic differences in concentrations of the carbonyls that are above their respective MDLs in both methods, including formaldehyde, acetaldehyde, acrolein, and butanal. The lower MDLs afforded by the PFPH/GC method also enable the determination of a few more carbonyls in both applications.
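
The conversion from a per-tube detection limit to an air mixing ratio quoted above is simple ideal-gas arithmetic, sketched below with the molar volume at roughly 25 °C and 1 atm as an assumption:

```python
def mdl_ppbv(nmol_per_tube, sampled_liters, molar_volume_l=24.45):
    """Convert a per-tube detection limit (nmol) to an air mixing ratio (ppbv).

    molar_volume_l: ideal-gas molar volume (~24.45 L/mol near 25 C, 1 atm).
    nmol of analyte per mol of sampled air is numerically the ppbv value.
    """
    moles_air = sampled_liters / molar_volume_l
    return nmol_per_tube / moles_air

# 0.3 nmol on the tube with 24 L of air sampled gives roughly 0.3 ppbv,
# consistent with the sub-0.3 ppbv MDLs quoted for sub-nanomole loadings.
print(round(mdl_ppbv(0.3, 24.0), 3))
```
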
